Military Embedded Systems

Software-defined radio: To infinity and beyond

October 17, 2016

Manuel Uhm

Ettus Research

It's hard to believe that the term "software-defined radio" (SDR) has been around for approximately 30 years. That's a long time in the technology world, but SDR is still a common topic of discussion and carries more than its share of misconceptions. The definition of SDR - per the Wireless Innovation Forum (formerly the SDR Forum) - is "a radio in which some or all of the physical-layer functions are software-defined." The term refers to the physical (PHY) layer processing of the waveform, not to the radio-frequency (RF) front end, which is a common misconception. Radios with wideband, tunable RF front ends capable of dynamic spectrum access are referred to as cognitive radios (CRs). A cognitive radio is defined as a radio in which the communication system is aware of its internal state and environment, such as its location and the utilization of the RF spectrum at that location.
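To make that PHY-layer definition concrete, the following minimal sketch implements one small physical-layer function, QPSK symbol mapping with simple rectangular pulse shaping, entirely in host software (Python/NumPy). It is illustrative only and is not tied to any particular waveform or radio; the function name, bit ordering, and samples-per-symbol value are arbitrary assumptions.

# Illustrative sketch only: a tiny PHY-layer fragment (QPSK mapping plus
# rectangular pulse shaping) implemented in software rather than fixed hardware.
import numpy as np

def qpsk_modulate(bits: np.ndarray, samples_per_symbol: int = 4) -> np.ndarray:
    """Map a bit stream to Gray-coded QPSK symbols and upsample them into a
    complex baseband sample stream ready for a DAC/RF front end."""
    pairs = bits.reshape(-1, 2)                    # two bits per QPSK symbol
    # Gray mapping: 00 -> +1+1j, 01 -> +1-1j, 11 -> -1-1j, 10 -> -1+1j
    i = 1 - 2 * pairs[:, 0]
    q = 1 - 2 * pairs[:, 1]
    symbols = (i + 1j * q) / np.sqrt(2)            # unit average power
    # Rectangular pulse shaping; a real waveform would use RRC filtering here
    return np.repeat(symbols, samples_per_symbol)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    samples = qpsk_modulate(rng.integers(0, 2, size=128))
    print(samples.shape, samples.dtype)            # (256,) complex128

Because the mapping, filtering, and any later coding or framing live in software, the same hardware can host a different waveform simply by loading different code - the essence of the Wireless Innovation Forum definition.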

After so many years, SDR is now such a dominant, industry-standard implementation for radios – from military tactical radios to cellular handsets – that it's almost a given that a radio is an SDR. Innovations in semiconductor and software technology will continue to drive higher development productivity and more cost-effective SDRs, so there really is no end in sight for SDR. In that sense, SDR is largely a solved problem, and radios are now becoming frequency-agile and evolving into CRs.

SDR evolves to become the de facto industry standard

Figure 1 demonstrates how SDR became a de facto industry standard. Closest to the center, the dark blue section represents the first set of markets to move from hardware radio architectures to SDR architectures, whether or not they used the term SDR. These markets include signals intelligence (SIGINT), electronic warfare, test and measurement, public-safety communications, spectrum monitoring, and military communications (MILCOM). Some of these markets were using hardwired application-specific integrated circuits (ASICs), while others were already using programmable digital signal processors (DSPs).

The technology drivers behind the move to SDR in these markets were the advent of RFICs from companies like Analog Devices and of cost-effective, DSP-intensive FPGAs from companies like Xilinx. These two drivers came together to meet a multibillion-dollar need in the military tactical radio market, creating something of a "market ripple," in which that market had a huge impact on the evolution of SDR technology far beyond MILCOM alone. The JTRS [Joint Tactical Radio System] program funded the development and productization of both SDR and CR technology for military radios, which created a strong ecosystem of vendors, including semiconductor, tools, and software companies. On the tools front, SDR required waveforms to be as portable as possible between different hardware platforms, which resulted in tools like the SCA [Software Communications Architecture] Core Framework, as well as better programming tools from electronic design automation (EDA) and semiconductor companies.

 

Figure 1: How successive generations of SDRs have come to dominate the radio industry and will continue to evolve.


The advancements in RFICs, field-programmable gate arrays (FPGAs), and EDA tools were significant factors in enabling the second generation of SDRs, driven by 4G LTE infrastructure. Virtually all LTE eNBs (eNodeBs, or basestations) were developed with RFICs and FPGAs. Some of the larger infrastructure vendors would eventually move to ASICs, but even then the baseband ASICs were largely programmable: they used processors coupled to hardened blocks called hardware accelerators for compute-intensive functions, such as turbo decoding, that would typically exceed the performance or power limits of the processors.

The next market ripple, shown in the third generation, occurred when 4G LTE handsets moved consistently to SDR architectures. This shift was enabled by low-power, high-performance DSP cores optimized for handsets from companies like Ceva, Tensilica, and Qualcomm. As with the baseband ASICs for infrastructure, these cores were integrated into application-specific standard products (ASSPs) or ASICs for much of the PHY processing, coupled with hardware accelerators. Once this changeover occurred, SDR grew by orders of magnitude in volume and reach, becoming the de facto industry standard for radios.

The next generation of SDRs

The obvious question: What's next for SDR and CR? As high as the volume of 4G handsets has propelled SDR, the prospects of 5G, the IoT (Internet of Things), and sensor networks promise to increase the volume of SDRs by another order of magnitude. What will be the technology driver lifting SDR to these lofty heights? Given that the previous drivers were innovations in analog and digital technology, it follows that the next driver will be the combination of analog and digital on a single monolithic chip in order to reduce cost and SWaP [size, weight, and power]. For infrastructure, this could be FPGAs with integrated analog-to-digital converters (ADCs) and digital-to-analog converters (DACs). For handsets and sensors, it could be application processors, also with integrated ADCs and DACs. And don't forget software and tools, which are the whole point of SDR, after all. Enabling the development of these chips, as well as the waveforms and application software running on them, will require better system-level tools that can design and debug across the analog and digital domains and program heterogeneous processors on a single chip, including general-purpose processors (GPPs), DSPs, graphics processing units (GPUs), and/or FPGA fabric.

Breathing new life into old technology

With all this talk about the evolution of SDRs, it's worth noting that increasing cost-effectiveness has been a major driver of SDR adoption, enabling SDR to reach previously inaccessible markets such as handsets. This trend is not expected to go away, as high-volume markets are generally very price-sensitive.

Ettus Research, a National Instruments company, offers a super-heterodyne, two-channel receiver daughtercard (Figure 2) called TwinRX. All previous Ettus Research RF daughtercards used direct-conversion architectures, which demodulate an RF carrier directly to baseband. The RFICs shown in Figure 1 that were a key technology driver for SDR also used direct conversion: by eliminating the IF (intermediate frequency) stage, direct-conversion receivers can be smaller and lower-cost. That benefit usually comes at a penalty in RF performance, however, including nonlinearity and poorer dynamic range. For this reason, super-heterodyne architectures remain common for SIGINT and direction finding (DF), where an increased ability to detect, monitor, and capture a signal of interest is critical.
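As a rough illustration of how a multichannel USRP receiver like this can be driven from software, the sketch below uses the UHD Python API (uhd.usrp.MultiUSRP) to tune two receive channels to the same frequency and capture a short burst from each. The device arguments, frequency, rate, and gain are placeholder assumptions, and the LO-sharing and timed-tuning steps needed for true phase alignment across TwinRX channels are not shown.

# Rough sketch, not a verified TwinRX recipe: capture a short two-channel burst
# from a USRP using the UHD Python API. Device args, frequency, rate, and gain
# are placeholder values; LO sharing and timed tuning for phase alignment are
# intentionally omitted.
import uhd

def capture_two_channels(num_samps=100000, center_freq=915e6, rate=10e6, gain=30):
    usrp = uhd.usrp.MultiUSRP("type=x300")         # assumes an X310-class device
    # Convenience helper: configures both channels and streams num_samps from each
    samps = usrp.recv_num_samps(num_samps, center_freq, rate, [0, 1], gain)
    return samps                                   # one row of complex samples per channel

if __name__ == "__main__":
    data = capture_two_channels()
    print(data.shape)

With two TwinRX daughtercards installed, the same pattern extends to the four receive channels shown in Figure 2, with additional synchronization steps required to achieve phase alignment.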

 

Figure 2: Two super-heterodyne TwinRX daughtercards inside an Ettus USRP X310 SDR for four phase-aligned RX channels.


Manuel Uhm is the director of marketing at Ettus Research, a National Instruments company. Manuel has business responsibility for the Ettus USRP, NI USRP, and BEEcube portfolios. Manuel is also the chair of the Board of Directors of the Wireless Innovation Forum (formerly the SDR Forum). He has served on the Board since 2003 in various technical, marketing, and financial roles. Manuel can be reached at [email protected].

Ettus Research, a National Instruments Company www.ettus.com

 
