Sometime in the last century I was involved with the Defense Mapping Agency in St. Louis, MO, as an employee of a private contractor that did highly classified work in support of the intelligence and reconnaissance community. One of the DMA's missions (maybe its only mission) is to provide very accurate digital maps for navigation and targeting by our defense forces. We won't go into how they acquire the imagery to make these maps, but the imagery is in the form of stereo pairs photographed at various altitudes.
In the latter part of the 20th century, digital terrain elevation data (DTED) was acquired by a human being who "flew" a dot over the imagery by peering into a pair of ocular eyepieces while attempting to keep the dot on the terrain, neither above nor below the ground. He or she moved the stereo-fused dot in elevation using a foot-wheel that varied the optical separation (parallax) of the stereo image-pair, while the stereo viewer scanned the terrain images in a series of parallel straight lines. The "height" of the dot above the terrain was periodically recorded as a digital datum.
Variations of this type of machine, called digital stereo compilers, have been used for decades by cartographers to extract elevation data from stereo aerial photographs. Depending on known (measured) characteristics of the camera optics and the altitude at which the images are acquired, height errors of the DTED can be as little as a few inches with a highly skilled operator.
Operating a compiler is not an easy task. It takes a special kind of person to sit and peer through a stereo microscope for hours while "flying" the stereo-fused dot over (and on) the terrain. Much effort has gone into automating this task, using computers to search the two image fields for corresponding points on the ground. Once a few of these points are found in the two images, the parallax that represents the height of other points in the image pairs can be calculated or interpolated from the known pairs of image points. This allows a DTED set to be built that
models the actual terrain that was photographed.
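The core of that automation, finding corresponding points in the two images, can be sketched as simple block matching along a scanline: slide a small patch from the left image across the right image and keep the shift (disparity) with the lowest difference. This is an illustrative toy, not DMA's actual algorithm; the patch size, search range, and synthetic scanline data are all assumptions.

```python
import numpy as np

def match_point(left_row, right_row, x, patch=3, max_disp=5):
    """Estimate the disparity of pixel x in left_row by sliding a small
    patch along right_row and keeping the shift with the lowest
    sum-of-squared-differences cost.  patch and max_disp are
    illustrative choices, not values from any real compiler."""
    half = patch // 2
    template = left_row[x - half : x + half + 1]
    best_d, best_cost = 0, np.inf
    for d in range(max_disp + 1):      # search leftward shifts only
        xr = x - d
        if xr - half < 0:              # patch would fall off the image
            break
        candidate = right_row[xr - half : xr + half + 1]
        cost = np.sum((template - candidate) ** 2)
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

# Synthetic scanline: the "right" image is the left shifted by 3 pixels,
# so the correct disparity at the feature (the 9) is 3.
left = np.array([0, 0, 1, 5, 9, 5, 1, 0, 0, 0, 0, 0], dtype=float)
right = np.roll(left, -3)
print(match_point(left, right, x=4))   # → 3
```

A real compiler does this in two dimensions over millions of pixels, with sub-pixel refinement and outlier rejection, but the principle is the same: the best-matching shift is the parallax, and parallax encodes height.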
But how accurate is the data set? The job our company was involved in was to propose and build a "drop-in" add-on to DMA's existing stereo compilers (they have lots of them) that would optically superimpose the DTED data (displayed on a high-resolution, raster-scanned CRT) over the raw photographic images. The hope was that this would visually reveal any glaring errors in the DTED. Our company had already delivered a set of workstations that DMA personnel could use to view and edit DTED data, but this new approach sought to directly compare (using the human eye-brain synergy) DTED with the ground imagery from which it was compiled.
All this occurred in the late 1980s, when the microprocessor revolution was just beginning to replace the dumb terminals that communicated with "Big Iron" mainframes on desktops throughout the defense establishment. At that time they still held on to the idea that a central processor was necessary, so the smart terminals that replaced the dumb ones didn't do any serious processing.
My company was a DEC (Digital Equipment Corporation) house using PDP-11 minicomputers and VAX-11/750 mainframes. Management either didn't see or failed to acknowledge the coming microprocessor revolution. They held fast to the belief that microprocessors, as typified by the IBM PC, were just "toy" computers with no real future. IBM must have thought so too, because they got out of the microprocessor-based personal computer business. Fast forward twenty years to the 21st century, where microprocessors are now dirt cheap and everywhere. A few people did see that coming. And a few others see where the human-machine interface (HMI) is going, although very few will see just how far it will eventually evolve. We live in "interesting times," as the old Chinese curse goes.
I am actually just in the very early stages of developing my science fair project, and I'm no engineering genius; the most I can make is a very rudimentary proximity sensor with a warning distance of about twelve inches.
A successful science fair project is typically 90% research and 10% presentation of the results of that research. A working prototype is nice to have, but not necessary if your presentation can simulate it or describe what is necessary to build it. Think animated audio-video presentations, fairly easy to do with software available today. But you have to have something to say!
Your science fair project could exploit the idea of 3D image processing as an aid for the visually impaired by describing exactly what is necessary to get there. What resolution do the CCD image sensors need, for example, to distinguish between a book left on the floor and a sleeping cat? What resolution is required to create enough parallax to determine range to objects up to five, ten, twenty feet or more away? What kind of software is needed to locate corresponding points in the two images
in real time for range processing? What kind of image recognition software is needed to distinguish between cats and books? If given all the data you need from the processed stereo pairs, how do you present the analysis of this data to a visually impaired person quickly and unambiguously so as to guide their movements?
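On the range question, the pinhole stereo relation Z = f·B/d ties range Z to disparity d, focal length f (in pixels), and the baseline B between the two sensors. The numbers below (an 800-pixel focal length and a 6-inch baseline) are made-up values for a hypothetical wearable rig, just to show how quickly depth accuracy degrades with range:

```python
def disparity_and_error(range_ft, f_px=800.0, baseline_ft=0.5):
    """Disparity (in pixels) at a given range, plus the approximate depth
    error caused by a one-pixel disparity mistake.  f_px and baseline_ft
    are assumed illustrative values, not real hardware specs."""
    d = f_px * baseline_ft / range_ft           # Z = f*B/d  =>  d = f*B/Z
    dz = range_ft ** 2 / (f_px * baseline_ft)   # dZ ≈ Z² / (f*B) per pixel
    return d, dz

for z_ft in (5, 10, 20):
    d, dz = disparity_and_error(z_ft)
    print(f"{z_ft:2d} ft: disparity {d:5.1f} px, "
          f"~{dz:.2f} ft range error per pixel of mismatch")
```

With these assumed numbers, an object at 20 feet produces only 20 pixels of disparity, so every pixel of matching error costs about a foot of range accuracy; at 5 feet the same one-pixel error costs less than an inch. That quadratic fall-off is exactly the kind of trade-off a science fair write-up could quantify.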
In my previous post #3, I suggested a binaural stereo approach for the HMI, but others have suggested haptic and other sensory input channels that might be less expensive or easier to implement. One of the earliest research efforts into artificial vision used an array of vibrating reeds, strapped to the wearer's naked back, to convey a crude image of what a single video camera viewed. Such devices, in smaller form, might already be familiar to blind Braille readers, so a hand-held Braille transducer might be an appropriate HMI.
Remember, the purpose of a Science Fair is to demonstrate that you know how to do science. A gee-whiz prototype may look cool, but you should also show that you have invested some thought and time in research to solve whatever problem you think your prototype solves. It may turn out that the most promising solutions are waaay beyond your time and budget constraints. That doesn't mean you should abandon them. It does mean you need to describe the difficulties that impede their implementation at this time.