Military Embedded Systems

AI in safety-certification efforts evolving


March 18, 2019

Mariana Iriarte

Technology Editor

Military Embedded Systems

Artificial intelligence (AI) has the potential to change the way technology progresses, if created and developed properly to fit the needs of the industry. For avionics suppliers it’s a tricky field, with years of research and development (R&D) still ahead to produce AI solutions that ease the safety-certification process.

“I believe artificial intelligence will be like splitting the atom 70 years ago: shown to be both a powerful menace and an extraordinary new resource,” comments Vance Hilderman, chief technical officer, AFuzion Inc. (Los Angeles, California).

“I have read about the use of these [AI technologies] in cybersecurity and IoT [Internet of Things] and certain other systems that have a high level of connectivity, but not so much used in the avionics industry, where DO-178C defines the processes whereby a system is certified,” says Gary Gilliland, technical marketing manager, DDC-I, Inc. (Phoenix, Arizona). “In high design assurance systems, all resources including I/O are predefined before the system boots. For most of these systems, the idea of learning and adapting to change is not really an option.”
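To picture the design style Gilliland describes, consider the minimal C sketch below, in which every resource a component uses is fixed at build time and nothing is created, resized, or “learned” after boot; the names and sizes are hypothetical, not drawn from any particular product.

```c
/* Illustrative only: the static-allocation style common to high
 * design-assurance software. All buffers and indices exist before
 * main() runs; there is no malloc(), no free(), and no resizing. */
#include <stdint.h>

#define NUM_SENSORS  8U   /* fixed at design time, never grows      */
#define SAMPLE_DEPTH 64U  /* bounded history, no dynamic reallocation */

static int32_t  sample_buf[NUM_SENSORS][SAMPLE_DEPTH];
static uint32_t write_idx[NUM_SENSORS];

/* Store one sample; behavior is fully analyzable because the indices
 * are bounded and memory use is constant for the life of the program. */
void record_sample(uint32_t sensor, int32_t value)
{
    if (sensor < NUM_SENSORS) {
        sample_buf[sensor][write_idx[sensor]] = value;
        write_idx[sensor] = (write_idx[sensor] + 1U) % SAMPLE_DEPTH;
    }
}
```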

The aviation community has spent decades building standard practices to address any and all safety concerns with critical systems flying in national airspace. Today, “AI or machine learning is, for a safety-critical system, in its early stages and not playing a significant role in the commercial aviation safety-certification process,” points out George Graves, chief technologist, Mercury Mission Systems, a unit of Mercury Systems (Andover, Massachusetts).

“To date, AI is under informal consideration within the worldwide certification community, and we expect recommendations and advisories by 2020,” Hilderman comments. “Deep learning (true ‘AI’) won’t be certified initially; however, predictive AI, which uses bounded-limit frameworks and real-time external monitors, will be the starting point.”

To that point, “If we use the broader definition of AI or machine learning, there are examples of benefit provided by systems in the form of voice recognition for the cockpit,” Graves adds. “There are also examples of rule-based systems and inference engines being utilized, in the form of tools, to examine code for certification of specific software items.”

While there are not large-scale public efforts where AI is playing a significant role in the certification process, Graves says, the idea is that “as deep learning systems become better understood, they will inevitably be considered for inclusion in safety-critical systems. It is likely that some type of AI certification will become necessary in order to make the system complexity manageable, especially in regard to understanding and documenting the system. This is an area that is the focus of significant research.”

In addition to the extensive research and development needed, risk assessment is paramount in using an emerging technology for safety-critical systems.

“The key is in minimizing the dangers while leveraging the capability,” Hilderman says. “The large airframers have wisely formed and group-funded an industry consortium which is today working toward adoptable frameworks, with deep learning via neural networks seeming to hold the greatest promise, but also risk.”

For example, large airframers such as Boeing are making sizeable investments in artificial intelligence and partnering with companies like SparkCognition to develop an AI- and blockchain-powered airspace-management software platform.

Also in the works are Boeing’s new business units, Boeing HorizonX and Boeing NeXt: the first invests in start-ups and finds new partners, while Boeing NeXt focuses on developing new products, according to MIT Technology Review.

Understanding the risks associated with the technology also leads engineers to ask the ultimate question: Can artificial intelligence really predict the outcome of a safety-critical aviation system?

“Aviation safety is predicated upon verifying that anything that could happen in flight has been considered and tested,” Hilderman adds. “However, true AI via machine learning implies real-time software evolution which cannot be perfectly predicted and verified in advance; this greatly complicates certification. I believe the answer lies in specifying machine learning boundaries combined with real-time monitoring and validation of the machine’s new solution. We’ll soon see practical ground-based AI for scheduling and routing and within a few years an introduction of simpler airborne AI; then it gets very interesting.”

“So it’ll be limited AI at first,” he clarifies. “After we deploy that first on ground-based systems (which are typically less critical than airborne) and refine them, we’ll deploy it to airborne; I predict by 2022-2024. Then we’ll explore deep learning (true neural nets, et al.) post-2025. By that time, we should have more lessons learned and proven frameworks from industries such as personal transport (e.g., automobiles), which will have deployed more AI by then.”
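One way to visualize the boundary-plus-monitor approach Hilderman describes is the hypothetical C sketch below, in which an ML-derived command is accepted only inside a pre-verified envelope and anything else reverts to a conventionally certified fallback. The function names, limits, and stubs are illustrative assumptions, not any certified framework.

```c
/* Hypothetical sketch of "bounded ML plus external monitor": the ML
 * output is advisory, and the monitor has the final say every cycle. */
#include <math.h>

#define CMD_MIN  (-15.0)  /* output envelope verified by traditional means */
#define CMD_MAX  ( 15.0)
#define MAX_STEP (  2.0)  /* largest per-cycle change the monitor allows   */

/* Stubs standing in for real implementations (hypothetical). */
static double certified_baseline_cmd(void) { return 0.0; } /* proven path */
static double ml_advisory_cmd(void)        { return 3.7; } /* ML "advice" */

static double select_command(double prev_cmd)
{
    double ml = ml_advisory_cmd();

    int well_formed = !isnan(ml) && !isinf(ml);
    int in_envelope = (ml >= CMD_MIN) && (ml <= CMD_MAX);
    int rate_ok     = fabs(ml - prev_cmd) <= MAX_STEP;

    if (well_formed && in_envelope && rate_ok) {
        return ml;                   /* ML output stays within its bounds */
    }
    return certified_baseline_cmd(); /* otherwise revert to the baseline  */
}
```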

It will be a few years still before users experience simpler forms of AI. For the time being, “Various forms of AI, such as machine learning, can also have a role in making the certification process easier and less costly,” says Ray Petty, vice president, Aerospace & Defense, Wind River (Alameda, California). “Tool vendors implement AI and machine learning techniques to help identify risk areas at both the system/architectural level and the code/binary level. These techniques are generally assistive rather than generative, and so can help make the process more efficient and robust without the risk of introducing faults or errors.”
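The hypothetical C sketch below suggests what “assistive rather than generative” can mean in practice: a tool that only scores and ranks code units for human review, and therefore cannot inject faults into the software itself. The metrics, weights, and file names are invented for illustration.

```c
/* Hypothetical assistive triage: rank code units for review priority.
 * The tool never modifies code; it only suggests where humans look first. */
#include <stdio.h>

struct code_unit {
    const char *name;
    unsigned    cyclomatic;   /* branch complexity              */
    unsigned    churn;        /* recent changes to the unit     */
    unsigned    coverage_pct; /* structural coverage achieved   */
};

/* Higher score = review sooner. A learned model could replace these
 * hand-picked weights; the assistive property is unchanged either way. */
static double risk_score(const struct code_unit *u)
{
    return 1.5 * u->cyclomatic
         + 1.0 * u->churn
         + 0.5 * (100 - (int)u->coverage_pct);
}

int main(void)
{
    struct code_unit units[] = {
        { "nav_filter.c",  22, 14, 78 },
        { "display_fmt.c",  6,  2, 96 },
    };
    for (unsigned i = 0; i < 2; i++)
        printf("%-14s risk=%.1f\n", units[i].name, risk_score(&units[i]));
    return 0;
}
```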

There are viable options for putting the technology to work, but a fundamental obstacle stands in the way of using it to speed up the certification process. “AI, by its nature, does not currently produce code that is deterministic; in other words, the operations within the ‘AI engine’ cannot be analyzed using standard software safety processes that have been proven and used for traditional software applications,” Petty adds.

He also points out three practical ways to get AI up and running for the safety-certification process: “So, you have a few choices: Isolate the AI in a non-safety partition and have safety monitors that take the AI output and check to see if they compromise the system, change the safety process to allow for non-deterministic code, or change the AI to have deterministic behavior.

“All of these are research areas, but you could implement AI today in a non-safety partition on a safety hypervisor. For example, the Helix virtualization platform (Figure 1) has both flexible and fixed partitioning, allowing less critical Linux-based frameworks such as machine learning to run side by side with highly regulated safety-critical applications,” Petty states.
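A generic sketch of that first option might look like the following C fragment, which assumes a shared-memory mailbox between partitions; it is not the Helix API, and the structure names and limits are illustrative only. The safety partition treats the AI output as advisory and validates freshness and range before using it.

```c
/* Generic sketch (not the Helix API) of the partition pattern Petty
 * describes: an AI partition may only write an advisory record into a
 * fixed mailbox; the safety partition validates it before any use.   */
#include <stdint.h>

struct advisory_mailbox {
    volatile uint32_t seq;   /* bumped by the AI partition on each write */
    volatile double   value; /* the advisory output itself               */
};

#define ADVISORY_MIN 0.0
#define ADVISORY_MAX 100.0

/* Safety-partition side: accept the advisory only if it is fresh and
 * inside the certified range; otherwise ignore it entirely.          */
int read_advisory(const struct advisory_mailbox *mb,
                  uint32_t *last_seq, double *out)
{
    uint32_t seq = mb->seq;
    double   v   = mb->value;

    if (seq == *last_seq)                     return 0; /* stale     */
    if (v < ADVISORY_MIN || v > ADVISORY_MAX) return 0; /* off-range */

    *last_seq = seq;
    *out      = v;
    return 1;  /* accepted, but still only advisory, never a command */
}
```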


Figure 1: Helix virtualization platform. Photo courtesy of Wind River.

“In addition, the FAA’s Overarching Properties program is exploring methods of certification that may be appropriate for approval of AI and machine learning solutions, provided that they can satisfy the properties,” Petty continues. “This benefits from ongoing research efforts to create AI solutions that are deterministic.”