Digital avionics displays: from the cockpit to the helmet to the hologram
March 09, 2021
The digital cockpits of military aircraft today have increased in complexity and capability by leveraging commercial processing, graphics, and navigation in open architecture designs, bringing unprecedented awareness and advantages to military pilots.
When glass cockpits replaced the traditional dashboard of gauges and dials in older flight decks, pilots couldn’t stop gushing about the improved situational awareness the digitization of their instruments provided. Today’s advances, while more subtle, are delivering similar jumps in capability for flight and helmet displays via improved flight computer processing, high-resolution display graphics, and holographic near-eye displays. These solutions are also happening at a faster technology-insertion rate, thanks to open architecture designs and initiatives.
Enhancing cockpit displays
Cockpit displays – like many other defense electronics solutions and products – must meet reduced size, weight, power, and cost (SWaP-C) requirements in addition to providing improved capability, all while maintaining compatibility with legacy systems.
“We receive varying customer requirements,” says Luis Esparza, product manager for Abaco Systems (Huntsville, Alabama). “In general, display computers and processors rely on ensuring large central processing unit (CPU) processing in conjunction with strong graphics processing unit (GPU) processing. [We] ensure the power and interoperability are available out of the box [with] multicore, multi-gigahertz, multi-teraflops floating-point processing, and enough memory to make it all work.”
There’s also an expected level of bus connectivity, “whether multi-Gig, Ethernet, or legacy, such as ARINC and MIL-STD-1553 avionics,” he adds.
Some of these requirements also drove an upgrade to an existing side head-up display platform in AC-130J gunships, an upgrade that aims to enable operational visibility of the battlespace for the platform. The biggest challenge was that the customer required pin compatibility with the legacy system, says Rob Cox, regional sales manager for Abaco Systems, which supplied its MAGIC1A high-performance embedded computing system for the effort.
MAGIC1A provides features including increased storage space for mission and flight data, higher processing capacity, and cybersecurity capabilities.
“MAGIC1A delivers the latest in graphics and computer processing to a SWaP-C improvement on a legacy design to ensure seamless integration,” Cox says. “It will allow the customer to reduce the technology footprint on the platform via a [serial digital interface] SDI I/O upgrade on the system.”
Adding a removable 4-terabyte solid-state device (SSD) is “expected to enhance the operational security posture for tactical field operations,” Cox adds. Cybersecurity is enabled via an Intel trusted platform module (TPM).
Like cybersecurity, artificial intelligence (AI) capability is being enabled across multiple defense platforms and electronics solutions. For rugged display applications, Abaco leverages the NVIDIA Deep Learning SDK and Intel’s OpenVINO toolkit, which “enable our customers to easily create high-performance applications to leverage AI inference at the edge,” says Dave Tetley, principal software engineer for Abaco Systems. “AI-based sensor processing applications such as target recognition and tracking can be more effective than traditional techniques and are becoming widely adopted as more compute power is provided within a low size, weight, and power profile.”
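To make the inference-at-the-edge idea concrete, the toy sketch below shows the basic shape of such a pipeline: a sensor frame comes in, candidate target pixels are extracted, and they are fused into a single track point. This is purely illustrative pseudologic in plain Python; it does not use the NVIDIA Deep Learning SDK or Intel OpenVINO APIs, and all function names here are invented.

```python
# Toy stand-in for AI-based target detection/tracking at the edge.
# Illustrative only: frame in, detections out, detections fused to a track.

def detect_targets(frame, threshold=200):
    """Return (row, col) coordinates of pixels brighter than threshold."""
    hits = []
    for r, row in enumerate(frame):
        for c, value in enumerate(row):
            if value >= threshold:
                hits.append((r, c))
    return hits

def centroid(hits):
    """Fuse the detections into a single track point by averaging."""
    if not hits:
        return None
    rs = sum(r for r, _ in hits) / len(hits)
    cs = sum(c for _, c in hits) / len(hits)
    return (rs, cs)

# A 4x4 sensor frame with one bright 2x2 "target" near the corner.
frame = [
    [10, 12, 11, 9],
    [10, 250, 240, 9],
    [11, 245, 255, 8],
    [10, 12, 11, 9],
]
track = centroid(detect_targets(frame))
```

In a real deployment the `detect_targets` step would be a trained neural network accelerated on the GPU or CPU, which is where the multi-teraflops processing Esparza describes comes in.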
Technology refreshes that involve features like AI capabilities, processors, rugged displays, and cyberdefense are now faster and more efficient thanks to open architecture designs and initiatives like the Future Airborne Capability Environment (FACE).
Open systems architecture (OSA) is enabled via the use of standard interfaces in hardware and common application programming interfaces (APIs) in software. [Note: The latter is how the FACE Technical Standard enables commonality and reuse in avionics software. For more, please see our Industry Spotlight articles starting on page 30.]
“The idea of having reuse and being able to leverage a hardware abstraction layer is critical,” says Steve Motter, vice president of business development for IEE (Van Nuys, California). “We have been using the FACE Technical Standard as a design guide in our MFD implementation.
“We support OSA in our avionics displays via a communication interface and video processing capabilities,” he continues. “This includes everything from traditional avionics buses, to ARINC 818, to Ethernet-based video distribution architectures such as ARINC 661.”
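The reuse-through-abstraction idea Motter describes can be sketched as a hardware-abstraction layer: the display application codes against one interface, while transport-specific details (ARINC 818, Ethernet-based distribution) live behind it. The class and method names below are invented for illustration; real interfaces are defined by the FACE Technical Standard and the ARINC specifications, not by this sketch.

```python
# Hypothetical hardware-abstraction layer for display video transports,
# in the spirit of the OSA/FACE approach described above.
from abc import ABC, abstractmethod

class VideoTransport(ABC):
    """Common API the display application codes against."""
    @abstractmethod
    def send_frame(self, frame_id: int) -> str: ...

class Arinc818Transport(VideoTransport):
    """One concrete backend: ARINC 818 avionics digital video bus."""
    def send_frame(self, frame_id: int) -> str:
        return f"ARINC 818 container for frame {frame_id}"

class EthernetTransport(VideoTransport):
    """Another backend: Ethernet-based video distribution."""
    def send_frame(self, frame_id: int) -> str:
        return f"Ethernet stream for frame {frame_id}"

def render(transport: VideoTransport, frame_id: int) -> str:
    # Application logic is reused unchanged across transports.
    return transport.send_frame(frame_id)
```

Swapping `Arinc818Transport` for `EthernetTransport` changes nothing above the abstraction layer, which is what makes faster technology refreshes possible.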
An example is IEE’s 3.5-inch aircraft control display unit (CDU) designed for helicopter avionics. Applied beyond typical radio and communications applications, this CDU provides a central data entry and status display for the search-and-rescue rotorcraft platform’s Personal Locator System (PLS).
Helmet-mounted digital night-vision display
While modern digital flight displays have made pilots’ working days much easier, the technology in helmet-mounted displays enables pilots to see in all types of conditions. An enhanced visual acuity (EVA) system from Collins Aerospace (Charlotte, North Carolina) is helping the U.S. Navy and Marine Corps transition from analog to digital night-vision systems. The system will provide rotary-wing and tilt-rotor aircrews with advanced digital night-vision and display technology to enhance situational awareness for warfighters.
EVA integrates a helmet-mounted binocular display for wider, higher-resolution imagery and improved night-vision performance at very low light levels, which is when rotary-wing pilots need it most. (Figure 1.)
[Figure 1 | The EVA system from Collins Aerospace is a helmet-mounted digital night-vision display. Image courtesy of Collins Aerospace.]
“All of the digital night-vision processing for our EVA system is hosted on the helmet within the EVA electronics assembly, which makes use of the latest multiprocessor system-on-chip (MPSoC) technology to enable high-performance, low-power processing,” says Michael A. Ropers, principal systems engineer, Helmet Vision Systems, Avionics for Collins Aerospace.
EVA represents the next technology leap in aviator night-vision systems, according to Collins Aerospace, taking that next step by providing “the visual acuity of analog night-vision goggles with a larger field of view and full color binocular heads-up display symbology,” Ropers explains. “And it replaces dated monochrome monocular displays on the NAVAIR rotary-wing HMDS [helmet-mounted display system] with the latest binocular color displays.”
The system uses the ISIE-19 night-vision sensor for low-light performance, combined with the displays. EVA is noticeably lightweight and has both high contrast and a large field of view. “It’s paired with a full-color high-brightness microdisplay, and also offers a substantial improvement in visual acuity, higher brightness, and lower life cycle costs over previous rotary-wing helmet display systems,” Ropers notes.
One of the most surprising aspects of EVA is that, beyond its displays, “its lightweight, flexible night-vision solution is operational whether in-line-of-sight or stowed above the eye,” he continues. “A seamless transition from day to night operations for our pilots was key in the integrated design of the night-vision sensor and display, allowing for day/night operations without the need to install or remove components from the helmet. But the night-vision sensor assembly can be quickly removed with minimal effort if the pilot desires.”
Work under a developmental contract with the U.S. Navy and Marine Corps is underway at Collins Aerospace facilities in Iowa, California, and Massachusetts, with contract completion scheduled for March 2023.
Holographic near-eye displays
The next leap in military display technology may come from research in holographic near-eye displays via new software and hardware advances. A new technique to improve image quality and contrast for holographic displays was developed by researchers from NVIDIA (Santa Clara, California) and Stanford University; the technique may help improve near-eye displays for augmented- and virtual-reality applications.
Augmented- and virtual-reality systems are poised to have a transformative impact on our society by providing a seamless interface between a user and a digital world, according to Jonghyun Kim, a researcher at NVIDIA and Stanford University.
“Holographic displays could overcome some of the biggest remaining challenges for these systems by improving the user experience and enabling more compact devices,” Kim says.
The new holographic display technology is called “Michelson holography,” which the researchers reported in Optica, an open-access journal from the Optical Society of America. The technology combines an optical setup inspired by Michelson interferometry (used in spectroscopy and gravitational-wave detection) with a recent software development to generate the interference patterns that make digital holograms. (Figure 2.)
[Figure 2 | Michelson holography shows significant improvements in image quality, contrast, and speckle reduction compared with all other conventional methods, such as naïve SGD [stochastic gradient descent]. Photo credit: Jonghyun Kim, NVIDIA, Stanford University.]
“Although we’ve recently seen tremendous progress in machine-learning-driven computer-generated holography, these algorithms are fundamentally limited by the underlying hardware,” Kim says. “We codesigned a new hardware configuration and a new algorithm to overcome some of these limitations and demonstrate state-of-the-art results.”
Holographic displays show potential to outperform other 3D display technologies used for augmented and virtual reality: they enable more compact devices, improve a user’s ability to focus their eyes at different distances, and offer the ability to adjust for users who wear contact lenses. So far, however, the technology hasn’t achieved the image quality of more conventional approaches.
Image quality of holographic displays is limited by an optical component called a phase-only spatial light modulator (SLM). The phase-only SLMs typically used for holography have low diffraction efficiency, which degrades observed image quality, particularly image contrast.
It’s difficult to dramatically increase the diffraction efficiency of SLMs, so the researchers designed a completely new optical architecture to create holographic images. Michelson holography uses two phase-only SLMs rather than using a single phase-only SLM like other setups.
“The core idea of Michelson holography is to destructively interfere with the diffracted light of one SLM using the undiffracted light of the other,” Kim says. “This allows the undiffracted light to contribute to forming the image rather than creating speckles and other artifacts.”
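The underlying physics is ordinary wave interference: a field of equal amplitude and opposite phase cancels an unwanted field instead of letting it show up as speckle. The minimal numerical sketch below illustrates just that principle in plain Python; it is not the researchers’ propagation model, only the interference arithmetic behind the quote above.

```python
# Minimal illustration of destructive vs. constructive interference,
# the principle Michelson holography exploits with its second SLM.
import cmath

def field(amplitude, phase):
    """Complex amplitude of a monochromatic light field."""
    return amplitude * cmath.exp(1j * phase)

def intensity(e):
    """Observed brightness is the squared magnitude of the field."""
    return abs(e) ** 2

unwanted = field(1.0, 0.0)         # e.g., undiffracted light from one SLM
matched = field(1.0, cmath.pi)     # equal amplitude, opposite phase

dark = intensity(unwanted + matched)            # destructive: ~0
bright = intensity(unwanted + field(1.0, 0.0))  # constructive: 4x one beam
```

The cancellation only works if the second field’s amplitude and phase are matched precisely, which is why the calibration procedure described next matters so much.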
The researchers combined this new hardware arrangement with a camera-in-the-loop (CITL) optimization procedure modified for an optical setup, a computational approach to optimize a hologram directly or to train a computer model based on a neural network.
CITL allowed the researchers to use a camera to capture a series of displayed images. It also allowed for correction of small misalignments of the optical system without using any precise measuring devices.
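The camera-in-the-loop idea can be sketched as an optimization loop in which a camera measurement, rather than an analytic model, supplies the error signal, so unknown misalignments are corrected for free. The toy below tunes a single phase value by finite-difference gradient descent against a simulated “capture”; the actual procedure optimizes full SLM phase patterns with stochastic gradient descent, and every name here is invented for illustration.

```python
# Toy camera-in-the-loop (CITL) sketch: the "camera" measurement drives
# gradient steps on a hologram parameter, absorbing unknown misalignment.
import math

def captured_intensity(phase, misalignment=0.3):
    # Stand-in for a real camera capture: the unknown misalignment is
    # baked into the measurement, so the loop compensates for it
    # without any precise measuring device.
    return (1.0 + math.cos(phase + misalignment)) / 2.0

target = 1.0                   # want full constructive brightness
phase, step, eps = 2.0, 0.5, 1e-4
for _ in range(200):
    loss = (captured_intensity(phase) - target) ** 2
    # Finite-difference gradient estimated purely from "captures."
    grad = ((captured_intensity(phase + eps) - target) ** 2 - loss) / eps
    phase -= step * grad
```

After the loop, the displayed phase has drifted to cancel the misalignment, even though the optimizer never knew its value, which is the essence of why CITL calibration needs no precision metrology.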
Once the computer model is trained, “it can be used to precisely figure out what a captured image would look like without physically capturing it,” Kim points out. “This means the entire optical setup can be simulated in the cloud to perform real-time inference on computationally heavy problems with parallel computing. This could be useful, for example, to calculate a computer-generated hologram for a complicated 3D scene.”
The researchers put their new Michelson holography architecture to the test using a benchtop optical setup within their lab to display several 2D and 3D images, which were recorded via a conventional camera. This demonstration showed that the dual-SLM holographic display with CITL calibration provides significantly better image quality than existing computer-generated hologram approaches.
To make their new system practical, the researchers say they need to first translate the benchtop setup into a system small enough to incorporate into a wearable augmented- or virtual-reality system. And they note that their approach of codesigning the software and hardware may be useful for improving other applications of computational displays and computational imaging in general.
The researchers received funding from the Army Research Office, Okawa Foundation for Information and Telecommunications, Alfred P. Sloan Foundation, National Science Foundation, and Ford Foundation. [Link to article: https://www.osapublishing.org/optica/fulltext.cfm?uri=optica-8-2-143&id=446984.]