In the mid-1960s I was working at the Air Force Weapons Laboratory in Albuquerque, NM, trying to build a Shack-Hartmann plate wavefront analyzer for infrared weapons-grade lasers. This required sampling the output laser beam with a beam sampler, sending the sampled beam to the wavefront analyzer, and then using the wavefront analyzer outputs to control deformable sections of a mirror located in the resonant cavity of the laser.
A graduate student writing his thesis at a university thought he had discovered that a small insulating substrate, coated with a thin bismuth film, could be used with four op-amps to locate the centroid intensity position of an infrared laser beam as a pair of analog x-y coordinates. From this humble beginning the laboratory tried to construct a wavefront analyzer consisting of about sixteen or so plano-convex zinc selenide infrared lenses arranged in a hexagonal array, each focusing the sampled laser beam onto an individual x-y detector with its four op-amps wired in close to the detector.
Deviations of the focused beamlets from the centers of the detectors were a measure of the wavefront "tilt" at each location. With some heavy-duty mathematics and some hand waving, the detector outputs could be placed in a negative feedback loop controlling a deformable mirror located in the laser cavity, thereby correcting for the wavefront tilt and making a better plane wave for imaging onto a target. I have no idea whether this actually worked for weapons-grade lasers or not, but it does work for astronomers, who use "guide stars" instead of a laser to "clean up" atmospheric distortions and provide better "seeing" with their telescopes.
Variations on this scheme have been reported in the open literature for years, but AFAIK there has never been a market for large-format x-y position-sensing arrays... except perhaps in the astronomical community, where they find use in mapping star positions. For real-time position sensing it is hard to beat an ordinary CCD camera, but you are on your own in converting the video pixels into x-y co-ordinates. Photosensitive plates used by astronomers have very high resolution and very large numbers of pixels, but they are quite expensive. You need to carefully specify exactly what you are trying to DO before spending a lot of money pursuing this path. You might also want to investigate the field of photogrammetry, which is devoted to extracting precise data from imagery.
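For what it's worth, that "converting video pixels into x-y co-ordinates" step can be sketched in a few lines once you have a grayscale frame. Here is a minimal intensity-weighted centroid in pure NumPy — a synthetic frame stands in for the camera image, and the threshold value is just an assumption you would tune for your detector:

```python
import numpy as np

def spot_centroid(frame, threshold=200):
    """Intensity-weighted centroid of all pixels at or above `threshold`.

    `frame` is a 2-D grayscale array. Returns (x, y) in pixel
    coordinates, or None if no pixel reaches the threshold.
    """
    mask = frame >= threshold
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)              # row = y, column = x
    weights = frame[ys, xs].astype(float)  # weight by pixel intensity
    x = float((xs * weights).sum() / weights.sum())
    y = float((ys * weights).sum() / weights.sum())
    return x, y

# Synthetic 100x100 frame with a 3x3 bright spot centred at (x=40, y=60)
frame = np.zeros((100, 100))
frame[59:62, 39:42] = 255
print(spot_centroid(frame))  # → (40.0, 60.0)
```

This is the same math the bismuth-film detector did in analog with four op-amps, just done digitally per frame.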
Wow! Thank you, Sir Hevans1944, it has been an absolute pleasure to read your insights. My heartfelt thanks for sharing your expertise.
I have decided to go with the NoIR camera on a Raspberry Pi and to use an IR laser. I would put a piece of old exposed film over the camera, which would pass only IR light into it. I would get X, Y and Z after calibrating the camera; there are some resources on how to do that. The system need not be that accurate for my project.
This IR LED tracking has already been done with a normal webcam after removing its IR filter. Here is the link to the freeware, FreeTrack:
https://www.free-track.net/english/freetrack/comment-ca-marche.php
It not only tracks the head orientation, but also reports the position.
I am thinking of developing this from scratch: finding X, Y and depth from the pixel size of the laser spot on the plate. I would find the position and orientation of the plate by attaching IR LEDs to it. Once I know the position and orientation of the plate, I can isolate the IR laser spot by subtracting the previous image. Then I can compute the X, Y position of the laser with respect to the plate.
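The subtract-and-measure idea above can be sketched as two small functions. This is only a rough illustration of the geometry, not a working tracker: the focal length and real spot diameter are made-up numbers, and in practice the apparent spot size also depends on beam divergence, focus, and sensor blooming, so treat the depth estimate with caution:

```python
import numpy as np

def laser_pixels(frame, background, threshold=60):
    """Isolate the laser spot by subtracting the previous (LED-only)
    frame, keeping only pixels that brightened by more than `threshold`.
    Casting to int first avoids uint8 wrap-around on subtraction."""
    diff = frame.astype(int) - background.astype(int)
    return diff > threshold

def depth_from_size(pixel_diameter, focal_px, spot_diameter_mm):
    """Pinhole estimate:  Z = f * real_size / apparent_size.
    `focal_px` is the focal length in pixels (from calibration);
    `spot_diameter_mm` is the laser spot's real diameter (assumed known)."""
    return focal_px * spot_diameter_mm / pixel_diameter

# Synthetic frames: background, then the same scene plus a 3x3 laser spot
bg = np.zeros((50, 50), dtype=np.uint8)
cur = bg.copy()
cur[20:23, 30:33] = 200
mask = laser_pixels(cur, bg)
ys, xs = np.nonzero(mask)
print(xs.mean(), ys.mean())  # spot centre → 31.0 21.0

# Hypothetical numbers: 1000 px focal length, 5 mm spot seen 10 px wide
print(depth_from_size(10, 1000, 5.0))  # → 500.0 (mm)
```

The same frame-differencing mask also works for picking out the plate's IR LEDs, as long as the laser and LEDs are distinguishable (by size, brightness, or blinking).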
The reason I was reluctant to use a Raspberry Pi and NoIR camera was the cost. One of my requirements is to make it as cost-efficient as I can. I was exploring every option, and raw hardware seemed easier at first, but it's not. The RPi with the NoIR camera is the easiest and most cost-efficient route as far as I know now.
Keep up the good work Sir,
Have a good day
Tim