The avionics industry’s growing need for TLM
May 04, 2023
Avionics systems increasingly leverage FPGAs [field-programmable gate arrays] and SoC [system-on-chip] FPGAs with high-speed interfaces such as PCIe and Ethernet to deliver greater performance and reliable connectivity for military and civil aviation. However, if the underlying FPGA design needs to demonstrate development assurance based on DO-254/ED-80 (documents providing guidance for the development of airborne electronic hardware), verification becomes very challenging. In these cases, transaction-level modeling (TLM) may be the answer.
The ubiquity and standardization of PCIe, Ethernet, and other high-speed serial interfaces – plus their availability within FPGAs and SoC FPGAs as embedded hard IP – have made them very popular in military avionics and aerospace, including in support of safety-critical functionality. In addition, FPGA vendors provide excellent development tools for device configuration and integration.
All are of considerable benefit to design engineers. However, the increasing use of multiple high-speed serial interfaces in safety-critical applications comes at a price. For certification purposes it must be proved that the devices function as intended and with high reliability. Unfortunately, that is difficult to do for three reasons:
• Physical (in-hardware) test of the FPGA with the high-speed interfaces in the target circuit board can produce nondeterministic responses.
• There is a lack of FPGA input controllability and output visibility.
• The avionics industry struggles to adopt appropriate verification techniques and methodologies as quickly as the commercial sector does, which can lead to significant project delays and costs.
Simulation is an important activity in the verification process. However, the Design Assurance Guidance for Airborne Electronic Hardware (RTCA DO-254/ED-80) states that simulation performs only an analysis, since simulation uses models and the simulated environment is always ideal.
This fact regarding the “ideal environment” becomes obvious when something like an FPGA with an embedded PCIe block (Figure 1) is considered. Internally, the FPGA fabric (where the functions designed into the programmable logic reside) communicates with the PCIe block via an AXI bus.
[Figure 1 | Shown: A PCIe embedded block within an FPGA.]
In many situations, simplified BFMs [bus functional models] can be used for simulation purposes. Alternatively, the entire PCIe block is skipped and only the AXI interface is available during the simulation.
RTCA DO-254/ED-80 (section 6.3.1) guidance states that real hardware must be tested in its intended operational environment. The standard test approach is to conduct board-level testing in the laboratory with the use of specialized equipment such as test-vector generators, logic analyzers and oscilloscopes.
For today’s level of integration and complexity, board-level testing does not allow all FPGA-level requirements to be verified, a situation caused in part by limited access to I/O pins. This is also due to the physical characteristics of the high-speed interfaces: characteristics such as differential signaling, encoded information, and strict impedance matching.
For these reasons, RTCA DO-254/ED-80 guidance allows for augmentation of board-level testing with results obtained from tests on hardware items or components in isolation.
Hardware test equipment
Again, specialized test equipment is needed to ensure the DUT [device under test] is tested with the target frequencies (clocks). All test vectors must be applied at speed to the DUT and its responses must be captured and saved for further analysis or comparison against expected results.
As for where those expected results might come from, it makes sense for these to be the results obtained through simulation, which can be used to verify almost all of the DUT’s functional requirements.
In cases where test vectors change the I/O pins relatively slowly and the FPGA design is controlled by a single clock, analyzing the device response at the bit level is quite simple. However, when the FPGA design includes multiple asynchronous clock domains, supports several high-speed serial interfaces, and most likely contains an embedded processor core, variable delays in the real hardware (along with clock frequency and phase deviations) can produce nondeterministic responses.
The analysis of such nondeterministic results is very complicated. First, it is very difficult to differentiate device behavior that is still within spec from truly unexpected behavior. Second, it is impossible to automate the process of comparing verification results against expected results. (Figure 2.)
[Figure 2 | The thick blue lines among the traces indicate differences between in-hardware test results and those from the expected results.]
In most cases, the nondeterministic device responses are merely delayed or reordered and can be considered within spec. Accordingly, too much time can be spent proving and documenting valid discrepancies.
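The contrast can be illustrated with a small conceptual sketch in Python (not part of any DO-254 toolchain; the transaction data is invented for illustration): a cycle-accurate comparison flags a response that is merely delayed as a mismatch, while comparing the same traffic at the transaction level passes.

```python
# Each transaction is (begin_time, end_time, attributes). The observed
# responses carry the same payloads as the expected ones, just later.
expected = [(10, 14, "READ  addr=0x100 data=0xAB"),
            (20, 24, "WRITE addr=0x104 data=0xCD")]
observed = [(12, 16, "READ  addr=0x100 data=0xAB"),   # delayed 2 cycles
            (22, 26, "WRITE addr=0x104 data=0xCD")]   # delayed 2 cycles

# Cycle-accurate view: the timestamps differ, so a naive diff reports failure.
cycle_accurate_ok = expected == observed              # False

# Transaction-level view: compare only what was transferred, not when.
tlm_ok = [t[2] for t in expected] == [t[2] for t in observed]   # True

assert not cycle_accurate_ok and tlm_ok
```

The delayed responses that produce the thick blue difference markers in Figure 2 disappear entirely once timing attributes are excluded from the comparison.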
A solution to the problem is to verify at a higher level of abstraction using TLM [transaction-level modeling], a very popular standardized methodology in the commercial ASIC [application-specific integrated circuit] industry. Essentially, if an aspect of the design is to send a packet of data that should arrive intact and within a specified timeframe, then that is pretty much all that matters.
A transaction is a single conceptual transfer of high-level data or a control instruction, and is defined by a begin time, an end time, and attributes (relevant information associated with the transaction). Figures 3a and 3b show, respectively, the analysis of delayed and reordered transactions for a PCIe interface, using TLM.
[Figure 3a shows analysis of delayed transactions for a PCIe interface.]
[Figure 3b shows analysis of reordered transactions for a PCIe interface.]
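The transaction definition above – a begin time, an end time, and attributes – and the delayed/reordered analyses of Figures 3a and 3b can be sketched conceptually in Python (the `Transaction` class, field names, and sample traffic are illustrative, not any tool’s actual API):

```python
from dataclasses import dataclass, field

@dataclass
class Transaction:
    begin: int                                   # begin time (e.g., ns)
    end: int                                     # end time
    attrs: dict = field(default_factory=dict)    # payload, address, status...

def matches(expected, observed):
    """Accept delayed and reordered responses: every expected transaction
    must appear exactly once in the observed stream with identical
    attributes; begin/end times are deliberately ignored."""
    remaining = list(observed)
    for exp in expected:
        hit = next((o for o in remaining if o.attrs == exp.attrs), None)
        if hit is None:
            return False
        remaining.remove(hit)
    return not remaining    # and no unexpected extra transactions

exp = [Transaction(10, 14, {"kind": "MemRd", "addr": 0x100}),
       Transaction(20, 24, {"kind": "MemWr", "addr": 0x104})]
obs = [Transaction(25, 30, {"kind": "MemWr", "addr": 0x104}),   # reordered
       Transaction(31, 36, {"kind": "MemRd", "addr": 0x100})]   # delayed
assert matches(exp, obs)
```

Delayed and reordered transactions – the exact cases that defeat automated bit-level comparison – match cleanly, while any transaction with wrong attributes would still be caught.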
At the transaction level – and whether PCIe, Ethernet or even lower-speed serial interfaces are being considered – if multiple buses and asynchronous clocks are used, the implementation details can be hidden for verification purposes.
However, let’s not forget that safety-critical projects must be tested against invalid data and under out-of-range scenarios. In many cases, designers are unable to predict the design response. Accordingly, behavior is investigated during the verification phase to determine if it is acceptable or not. Again, TLM makes the analysis much easier.
Using TLM, the test bench works with messages but the design is still verified with bit-level signals. In the simulation world, the use of BFMs (mentioned earlier) for modeling interfaces is very popular, but they are not synthesizable and cannot be reused in the real hardware. We need a new element called a transactor that is synthesizable. A transactor connects transaction-level interfaces to pin-level interfaces and translates the high-level message into bit-level (pin) wiggles.
Another important aspect of TLM is the use of an untimed test bench, also known as a transactional test bench (Figure 4). It focuses on functionality (messages) rather than on implementation (signals), and the test scenarios are implemented by sending request messages and waiting for the responses.
[Figure 4 | The transactor concept is shown: The incoming message is translated into correct signaling for the communications protocol.]
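The translation a transactor performs can be modeled conceptually in Python for a hypothetical UART-like serial protocol (start bit, 8 data bits LSB-first, stop bit). In a real flow the transactor is synthesizable HDL driving physical pins; this sketch only illustrates the message-to-wiggles step:

```python
# Conceptual model of a transactor's transmit path: translate a high-level
# message into the per-bit-time pin values ("wiggles") of a simple serial
# protocol. The framing chosen here is illustrative only.

def transactor_send(message: bytes):
    """Serialize each byte as: start bit (0), 8 data bits LSB-first,
    stop bit (1)."""
    wiggles = []
    for byte in message:
        wiggles.append(0)                                # start bit
        wiggles += [(byte >> i) & 1 for i in range(8)]   # data, LSB first
        wiggles.append(1)                                # stop bit
    return wiggles

pins = transactor_send(b"\x41")   # ASCII 'A' = 0x41 = 0b01000001
assert pins == [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
```

The test bench only ever sees the message (`b"\x41"`); the bit-level framing is encapsulated entirely inside the transactor, which is what allows the same test scenarios to run in simulation and in hardware.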
A great advantage here is that a transactional test bench can consist of subprograms written in any HDL [hardware description language] or even a programming language like C. (Figure 5.)
[Figure 5 | Shown: A comparison between a timed test bench (on the left) and a transactional one.]
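A transactional test scenario can be sketched in Python as pure request/response messaging (the register-file DUT stand-in and its message fields are hypothetical, used only to show the shape of an untimed test):

```python
# Conceptual sketch of an untimed (transactional) test bench: the scenario
# is written entirely in terms of request and response messages; no clocks
# or pin-level signals appear. The DUT stand-in is a hypothetical register
# file behind a message interface.

def dut_model(request, regs={}):
    """Hypothetical DUT model: services read/write request messages."""
    if request["op"] == "write":
        regs[request["addr"]] = request["data"]
        return {"status": "ok"}
    return {"status": "ok", "data": regs.get(request["addr"], 0)}

# Test scenario: send a request, wait for the response, check the result.
resp = dut_model({"op": "write", "addr": 0x10, "data": 0xAB})
assert resp["status"] == "ok"
resp = dut_model({"op": "read", "addr": 0x10})
assert resp["data"] == 0xAB
```

Note that nothing in the scenario would change if the messages were routed through a synthesizable transactor to real hardware instead of a model – which is precisely what makes such test benches reusable from simulation through in-hardware testing.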
A transactional test bench is much easier to maintain and analyze, which makes it valuable from a DO-254 perspective. It also simplifies the verification of multiple high-speed serial interfaces (as well as low-speed ones), making the overall verification more robust.
It must be noted that with TLM the whole design can be verified using transactions. However, while BFMs are available for standard interfaces like SPI, I2C, ARINC 429, and PCIe, the DUT’s other pins must still be verified. To do this they should be organized into GPIO [general-purpose input/output] interfaces supporting user-defined messages.
User-defined transactions will also appear when verifying a device containing embedded blocks, as presented in Figure 1. In such cases, to reuse the simulation test bench in hardware testing, the AXI BFM and the PCIe transactor must support the same messages. (Figure 6.)
[Figure 6 | Shown: Reusing messages (transactions) from simulation for the in-hardware testing of a design containing embedded blocks.]
TLM in the industry
For complex designs that can exhibit nondeterministic behavior, TLM overcomes the limitations of bit-level verification, and is a best-practice methodology from the commercial ASIC industry. At the same time, by focusing more on the functionality than the implementation, the verification process is clearer, more robust, and easier to maintain. What’s not to like?
All of these approaches are in use within the avionics industry. Aldec’s solution for bit-level verification (which, as mentioned, is fine for less complex designs with single clocks) is called the DO-254/ED-80 CTS [compliance tool set]. Launched in 2008, the CTS features at-speed testing in the target device; reuses the simulation test bench for hardware testing; and integrates with third-party RTL simulator, synthesis, and place-and-route tools.
Most recently, the CTS has been (and continues to be) used by a Europe-based avionics company for transaction-based verification. This approach saves the company a great deal of time as, before switching to TLM, lots of time was spent investigating discrepancies between RTL simulations and in-hardware results – all because of the nondeterministic behavior of the device.
Janusz Kitel is DO-254 program manager at Aldec, with responsibility for verification solutions for aerospace and other industries in which safety-critical systems are employed. Janusz joined Aldec in 2006 as a member of the company’s software quality assurance team. Since 2013, his work on the development of Aldec’s aerospace solutions has focused on aviation regulations, requirements engineering, and design and verification methods for safety-critical applications. Janusz has an MSc in electronics and telecommunication, obtained from Silesian University of Technology in Gliwice, Poland.
Aldec * www.aldec.com