Helo flown via synthetic vision for the first time
In an experiment reminiscent of Jimmy Doolittle’s trailblazing blind instrument flight of 1929, researchers at Canada’s National Research Council (NRC) have flown their testbed fly-by-wire Bell 205 helicopter through a full takeoff and landing with the pilot completely “under the hood,” receiving all his visual cues from a helmet-mounted enhanced synthetic vision system (ESVS). The device combines live video with computer-generated imagery, using helmet-mounted sensors to track head position, and presents a more-or-less natural “picture” of the outside world that changes aspect as the pilot moves his head, giving aviators a genuine sense of situational awareness. The NRC pilot, Guy Ramphal, flew the synthetic vision mission through takeoff, hover, cruise and landing. Others, each backed up by a safety pilot flying the Bell 205 with conventional controls and unaugmented eyeballs, have followed in Ramphal’s wake.
To make possible the world’s first complete artificial-vision mission, the NRC’s Bell 205 used several control augmentation systems, among them attitude command and height and heading hold. Using what researchers freely concede to be first-generation, relatively low-resolution graphics, pilots were able to navigate the aircraft to within two meters of a desired location using only artificial terrain, sensor and navigation inputs. The view as seen by pilots resembles a monochromatic schematic of the ground rather than the natural “trees, rivers and cows in the field” resolution provided by large ground-based simulators. Nevertheless, larger ground features and terrain variations are shown in a convincing, real-world way. Despite the resolution limitations, “the helmet-mounted graphics produced in the ESVS are superior to head-up displays and panel-mounted graphics,” reported NRC team leader and head engineer Sion Jennings. “That’s because the ESVS is able to display the virtual terrain well to the sides of the aircraft, not just straight ahead. The helmet-mounted sight has a unique ‘look-around’ capability.”
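The height-hold augmentation mentioned above can be pictured as a simple feedback loop on the collective. The sketch below is purely illustrative — the function name, gains and structure are invented for this article and do not represent the NRC Bell 205’s actual control laws.

```python
# Illustrative height-hold loop (hypothetical gains and names --
# not the NRC Bell 205's actual control laws).

def height_hold(target_alt_m, current_alt_m, climb_rate_mps,
                kp=0.08, kd=0.15):
    """Return a collective correction from altitude error and climb rate."""
    error = target_alt_m - current_alt_m
    # The proportional term closes the altitude error; the climb-rate
    # term damps the response so the aircraft doesn't oscillate.
    return kp * error - kd * climb_rate_mps

# Example: 2 m below target while descending at 0.5 m/s yields a
# positive (climb) correction.
correction = height_hold(100.0, 98.0, -0.5)
```

With augmentation like this holding height and heading, the evaluation pilot is freed to concentrate on the synthetic visual scene rather than on raw stabilization.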
NRC researcher Greg Craig admitted to ESVS drawbacks attributable to the limited capabilities of the graphics computer driving the vision system, as well as the current state of the synthetic vision art. “The display combines an outside TV camera image and internally derived imagery from the computer. Sometimes there’s a slight lag between the two that can be distracting,” Craig said. “There have also been problems with the helmet rate sensors. These sensors attach to the pilot’s helmet and detect head movement, feeding that data to the display so that it mimics the changes to the pilot’s synthetic point of view. The problem has been that a helicopter cockpit vibrates quite a bit while in flight, throwing off those sensors, which can be distracting.”
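The vibration problem Craig describes is, at heart, high-frequency noise corrupting the helmet rate sensors. A common mitigation is to smooth the integrated head angle before it drives the display; the sketch below shows that idea with an exponential low-pass filter. The names, sample rate and filter constant are assumptions for illustration, not the NRC design.

```python
import math

# Hypothetical head-tracker smoothing sketch: cockpit vibration adds
# high-frequency noise to the helmet rate sensors, so the integrated
# head angle is low-pass filtered before it steers the rendered view.
# All names and constants here are illustrative, not the NRC design.

def smooth_head_angle(raw_rates_dps, dt=0.01, alpha=0.2):
    """Integrate noisy head-rate samples, low-pass filtering the result."""
    angle = 0.0       # integrated (noisy) head angle, degrees
    filtered = 0.0    # smoothed angle actually sent to the display
    out = []
    for rate in raw_rates_dps:
        angle += rate * dt                       # integrate rate to angle
        filtered += alpha * (angle - filtered)   # exponential smoothing
        out.append(filtered)
    return out

# One second of a steady 30 deg/s head turn, plus 25 Hz vibration jitter:
samples = [30.0 + 5.0 * math.sin(2 * math.pi * 25 * i * 0.01)
           for i in range(100)]
view_angles = smooth_head_angle(samples)
```

The trade-off is exactly the one Craig hints at: heavier filtering suppresses vibration but increases the lag between head motion and the displayed scene, which pilots also find distracting.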
The synthetic terrain displayed is based on predetermined digital terrain data stored in the computer. For the synthetic system to work, the aircraft must fly over terrain that has already been mapped in this way. Such mapping techniques have yet to record the undulations of the entire planet, so the areas over which a synthetic vision system can be used are still somewhat limited.
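Conceptually, drawing synthetic terrain from such stored data amounts to looking up elevations in a pre-surveyed grid and interpolating between the posts. The toy grid, 100 m post spacing and function below are invented for illustration; they are not the NRC database format.

```python
# Minimal sketch of reading synthetic terrain height from a stored
# digital elevation grid. Grid values and the 100 m post spacing are
# invented for illustration, not the NRC's actual terrain database.

GRID_SPACING_M = 100.0
# Tiny pre-surveyed elevation grid (metres above sea level); rows run north.
ELEVATION = [
    [120.0, 125.0, 130.0],
    [118.0, 122.0, 128.0],
    [115.0, 119.0, 124.0],
]

def terrain_height(x_m, y_m):
    """Bilinearly interpolate elevation at (x, y) inside the mapped area."""
    gx, gy = x_m / GRID_SPACING_M, y_m / GRID_SPACING_M
    x0, y0 = int(gx), int(gy)           # nearest grid post to the southwest
    fx, fy = gx - x0, gy - y0           # fractional position within the cell
    h00, h10 = ELEVATION[y0][x0], ELEVATION[y0][x0 + 1]
    h01, h11 = ELEVATION[y0 + 1][x0], ELEVATION[y0 + 1][x0 + 1]
    south = h00 + fx * (h10 - h00)
    north = h01 + fx * (h11 - h01)
    return south + fy * (north - south)

# Elevation halfway between the first four grid posts:
h = terrain_height(50.0, 50.0)
```

Outside the mapped grid there is simply no data to interpolate — which is the coverage limitation the article describes.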
Craig predicts a better, more capable system in anywhere from five to seven years, depending on R&D funding. “Our present helmet system is a somewhat crude prototype. So we have to build a better helmet and we have to run it with more computer power.”
Team leader Jennings agrees. “The eventual ESVS success will be based on four advanced technologies,” he said. “The navigation system will, like this early system, take its navigation data from differential GPS and an inertial navigation system. The helmet-mounted display will have a wider field of view and low distortion, and will be robust and lightweight. And finally, the image generation system will be driven by small computers that generate graphics very quickly.”
Timetable for such a system? “A second test version in five to seven years,” Craig predicted. “A working full-up version in 10 to 15 years.”