Nanosats put AI-at-the-edge computing to the test in space
November 16, 2020
The U.S. military is harnessing and exploring algorithms and machine learning, not just on the ground but also 300-plus miles aloft for small-form-factor space applications.
Artificial intelligence (AI) is rapidly being explored or adopted by the U.S. military for many applications, and one of the most intriguing is tiny satellites, sometimes called nanosats. Machine learning (ML) is creating new opportunities for spacecraft avoidance, automated retasking of sensors based on detected and predicted environmental changes, and direct downlink of mission-significant products to end users.
One noteworthy small-satellite project currently underway is being run by the Space and Engineering Research Center at the University of Southern California’s Information Sciences Institute. The goal for its four La Jument nanosatellites is to enhance AI and ML space technologies. Lockheed Martin is building mission payloads for the nanosats, which will use the company’s SmartSat software-defined satellite architecture for both the payload and bus. SmartSat is designed to let satellite operators quickly change missions while in orbit with the simplicity of starting, stopping, or uploading new applications.
“Onboard machine learning in space has many benefits, including improving satellite autonomy and decreasing the time between collecting sensor data and distributing it,” says Adam Johnson, La Jument program director and software engineering director for Lockheed Martin Space (Denver, Colorado). “Today, most missions are planned hours to months ahead of time by analysts on Earth, with autonomy limited to only making critical decisions for navigation and health and status monitoring.”
The La Jument nanosats will enable AI/ML algorithms in orbit, thanks to advanced multicore processing and onboard graphics-processing units. An app being tested is an algorithm known as SuperRes, developed by Lockheed Martin, which can automatically enhance the quality of an image in the same way as a smartphone does. SuperRes enables exploitation and detection of imagery produced by lower-cost, lower-quality image sensors. (Figure 1.)
[Figure 1 | Pictured is an artist rendering of La Jument nanosatellites. Credit: University of Southern California.]
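Lockheed Martin has not published SuperRes's internals, but the core idea of image enhancement can be illustrated with the baseline that learned super-resolution methods improve upon: interpolating a low-resolution image up to a higher pixel grid. The sketch below is a plain bilinear upscaler in pure Python, purely illustrative and not Lockheed Martin's algorithm.

```python
# Minimal sketch of the idea behind onboard image super-resolution:
# upscale a low-resolution frame so downstream detection has more pixels
# to work with. Bilinear interpolation is the classical baseline that
# learned methods such as SuperRes aim to beat.

def bilinear_upscale(img, factor):
    """Upscale a 2-D grayscale image (list of lists) by an integer factor."""
    h, w = len(img), len(img[0])
    out_h, out_w = h * factor, w * factor
    out = [[0.0] * out_w for _ in range(out_h)]
    for y in range(out_h):
        for x in range(out_w):
            # Map the output pixel back to fractional source coordinates.
            sy = min(y / factor, h - 1)
            sx = min(x / factor, w - 1)
            y0, x0 = int(sy), int(sx)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            fy, fx = sy - y0, sx - x0
            # Blend the four neighboring source pixels.
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            out[y][x] = top * (1 - fy) + bot * fy
    return out

low_res = [[0.0, 1.0],
           [1.0, 0.0]]
high_res = bilinear_upscale(low_res, 2)  # 2x2 image becomes 4x4
```

A learned model replaces the fixed blending weights with weights trained on pairs of low- and high-quality imagery, which is what lets it recover detail that interpolation alone cannot.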
SmartSat also provides cyberthreat detection, while the software-defined payload houses advanced optical and infrared cameras used by Lockheed Martin’s Advanced Technology Center to qualify AI and ML technologies for space.
These systems are powered by the NVIDIA Jetson platform, which is built on the CUDA-X software stack and supported by the NVIDIA JetPack software development kit. This configuration delivers powerful AI-at-the-edge computing to unlock advanced image and digital-signal processing.
While there are significant benefits to using AI in nanosats, it also poses a few challenges.
One major challenge “is the orders of magnitude difference between the compute capacity available aboard a spacecraft vs. on the ground,” Johnson points out. “Today, cloud computing offers flexible storage and highly scalable compute options. In space, processors are several generations behind because they must be shielded against the sun’s radiation, which adds significant cost.”
Lockheed Martin Space is addressing this challenge in several ways, including partnering with universities to research optimizing algorithms for low-powered embedded devices and spacecraft with intermittent connectivity.
“We’re leveraging our university partnerships as well as scientists from our Advanced Technology Center to improve fault tolerance of nontraditional space compute devices while exploring techniques for injecting fault tolerance directly into machine-learning algorithms that execute on devices susceptible to radiation effects,” Johnson adds.
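One classic way to inject fault tolerance into computation running on radiation-susceptible devices is redundancy with voting, such as triple modular redundancy (TMR): run the same inference three times and accept the majority answer. The sketch below is a generic illustration of that technique, not a description of Lockheed Martin's specific methods; `flaky_classifier` is a hypothetical stand-in for a model whose output can be corrupted by a transient upset.

```python
# Triple modular redundancy (TMR): execute a computation three times and
# majority-vote the results, masking a single transient fault such as a
# radiation-induced bit flip. Illustrative sketch only.

from collections import Counter

def tmr(compute, x):
    """Run `compute` three times on x and return the majority result.

    If all three results disagree (a double fault), raise an error so the
    caller can retry or fall back to a safe mode.
    """
    results = [compute(x) for _ in range(3)]
    winner, votes = Counter(results).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority: uncorrectable fault detected")
    return winner

# Simulate a classifier whose second invocation suffers a transient upset.
calls = {"n": 0}
def flaky_classifier(x):
    calls["n"] += 1
    return 99 if calls["n"] == 2 else x % 10  # corrupted result on call 2

label = tmr(flaky_classifier, 42)  # the single bad result is outvoted
```

The cost is threefold compute, which is why research like that described above also looks at cheaper alternatives, such as hardening only the most fault-sensitive parts of an ML model.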
Another major challenge currently being addressed in the AI for nanosats arena is the substantial difference between space and terrestrial environments. “Many AI/ML engineers are accustomed to using high-powered discrete graphics processing units (GPUs) for machine-learning tasks,” Johnson says, “whereas deployments to spacecraft might require targeting a field-programmable gate array (FPGA) or low-powered embedded GPU on a system-on-a-chip.”
AI on orbit
SmartSat software-defined satellite architecture enables artificial intelligence on-orbit that wasn’t previously possible.
“Today, remote-sensing satellites collect terabytes of data that must be downlinked to a ground station where it’s processed and reviewed,” Johnson says. “But SmartSat-enabled satellites could carry mission applications onboard the satellite – including AI – that will conduct processing on the satellite. Doing so means the satellite would only transmit the most relevant data, saving on downlink costs and letting ground analysts focus on the data that matters most.” (Figure 2.)
[Figure 2 | SmartSat is a software-defined satellite architecture created by Lockheed Martin. Credit: Lockheed Martin.]
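The downlink-savings idea Johnson describes can be sketched simply: score each frame onboard and transmit only the most mission-relevant ones. In the sketch below, the relevance score is a hypothetical stand-in (pixel variance as a crude proxy for scene content); a real mission application would use a trained model.

```python
# Sketch of onboard triage for downlink: rank frames by an onboard
# relevance score and send only the top few within the downlink budget.
# Variance is a deliberately crude, illustrative scoring function.

def variance(frame):
    mean = sum(frame) / len(frame)
    return sum((p - mean) ** 2 for p in frame) / len(frame)

def select_for_downlink(frames, budget):
    """Return indices of the `budget` highest-scoring frames, in order."""
    ranked = sorted(range(len(frames)), key=lambda i: variance(frames[i]),
                    reverse=True)
    return sorted(ranked[:budget])

frames = [
    [5, 5, 5, 5],   # flat: likely featureless (ocean, cloud deck)
    [0, 9, 1, 8],   # high contrast: potentially interesting
    [4, 5, 4, 5],   # nearly flat
]
keep = select_for_downlink(frames, budget=1)  # only one frame fits the pass
```

Everything not selected stays onboard (or is discarded), which is the mechanism behind the claimed savings on downlink cost and analyst attention.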
CubeSats are providing an ideal, low-cost proving ground for Lockheed Martin Space’s software and hardware technologies. “Programs like La Jument are helping to advance technology development and to gather meaningful flight data we can use to improve and refine our products,” Johnson asserts.
Lockheed Martin develops single-board computers (SBCs) as well as dedicated processing cards containing FPGAs and GPUs, determining appropriate processing capacity required based on customers’ mission needs and spacecraft size, weight, and power constraints.
“From a software architecture perspective, we use SmartSat open architecture as our application hosting platform across ground and space assets,” Johnson says. “We leverage various open source and vendor-provided AI/ML frameworks and libraries, including PyTorch, ONNX, and TensorFlow. And we also maintain a significant set of internally developed AI/ML-focused software ranging from space-mission management and command and control to specific mission algorithms.”
AI and autonomy are quickly being adopted by the commercial sector within environments that are predictable and where technology can operate from existing data. The end-user situation is a little different when it comes to government and military systems.
“To integrate AI and autonomy into government and military systems that operate within extreme, highly variable environments requires both technological expertise and deep experience working with defense systems,” Johnson says.
Cloud computing and storage are also opening the door for more widespread AI development on the ground. In space, “on-orbit processing like SmartSat and cloud-computing structures like Lockheed Martin’s SpaceCloud are opening new doors for AI in space,” he adds. “On-orbit processing ultimately saves time and money because the satellite is no longer tied to its downlink window to send data. The onboard computer can analyze and process data, gaining new insights about data that was simply dumped in the past.”
One of the biggest hurdles for AI so far is trust: “Trusting the behavior and outcomes of our systems is critical to our collective success,” Johnson notes. “The challenge we have as a society is where we place that human within the loop. AI will never replace human intelligence, but it will augment and enrich it.”
Trust is such a critical aspect of AI that “we must be just as strategic about trust as we are about our missions,” he adds. “In space, our systems are thousands of miles away. It’s not easy or even possible to send a repair crew to fix something. Likewise, our astronauts on the International Space Station or the first ones to land on Mars will rely on systems that can predict, self-diagnose problems, and fix themselves while continuing to perform without failing. Human lives depend on it.”
La Jument launches
The first La Jument satellite is a student-designed and -built 1.5U CubeSat that will launch before the end of 2020 with a SmartSat payload. It will test the complete system from ground to space, including ground-station communications links and commanding SmartSat infrastructure while in orbit.
The second to launch is a 3U nanosat, roughly the size of three small milk cartons stacked atop each other, with optical payloads connected to SmartSat to allow AI/ML in-orbit testing. This 3U nanosat is scheduled to launch in February 2021.
The final launch in the La Jument sequence will be a pair of 6U CubeSats, designed jointly by Lockheed Martin Space and a team at the University of Southern California (USC – Los Angeles, California). These will launch in mid-2022 and are slated to host future research, including new SmartSat apps, sensors, and software bus technologies.
U.S. Army embraces algorithms for situational awareness
Researchers are creating a way to get information updates to warfighters faster via new machine-learning (ML) techniques.
A new method to train classical ML algorithms to operate within constrained environments – especially coalition environments where the algorithms must run on the various devices soldiers carry – has been created by a team of researchers from the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory (Aberdeen Proving Ground, Maryland), the Defense Science and Technology Laboratory, IBM Thomas J. Watson Research Center (Yorktown Heights, New York), and Pennsylvania State University (State College, Pennsylvania).
Tactical networks tend to suffer from intermittent and low-bandwidth connections within hostile operation environments. Even though artificial intelligence (AI) techniques can potentially improve the situational awareness of soldiers to keep them updated about fast-changing situations, “machine-learning models need to be retrained using updated data, which is often distributed across data sources with unreliable or poor connections,” says Ting He, an associate professor at Penn State.
This challenge demands new generations of model-training techniques, the researchers say, to strike a desirable tradeoff between the quality of the obtained models and the amount of data transfer needed.
To strike that balance, they created the “coreset,” a lossy data-compression technique designed for ML applications. It filters out redundant data, reducing the volume of data that must be transferred.
A coreset is “a smaller version of the original dataset that can be used to train machine-learning models with guaranteed approximation to the models trained on the original dataset,” He explains. “However, existing coreset construction algorithms are each tailor-made to a targeted machine-learning model. Multiple coresets need to be generated from the same dataset and transferred to a central location to train multiple models, offsetting the benefit of using coresets for data reduction.”
So the team set out to explore different coreset construction algorithms with respect to the ML models they are used to train, with the goal of developing a coreset construction algorithm whose output can simultaneously support the training of multiple ML models with guaranteed quality.
“Our study revealed that a clustering-based algorithm has outstanding robustness compared to the other algorithms in supporting both unsupervised and supervised learning,” He says.
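A clustering-based coreset can be sketched in a few lines: cluster the raw data, then keep one weighted representative per cluster, so downstream training sees a faithful but much smaller summary. The code below is an illustrative simplification (1-D data, plain Lloyd's k-means), not the team's published algorithm.

```python
# Minimal sketch of a clustering-based coreset: run k-means, then summarize
# each cluster by its center and its size (the weight). The weighted pairs
# are what would be transferred instead of the raw dataset.

def kmeans_1d(points, k, iters=20):
    # Seed centers spread across the sorted data, then iterate Lloyd's steps.
    pts = sorted(points)
    centers = [pts[i * len(pts) // k] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: abs(p - centers[c]))
            clusters[j].append(p)
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers, clusters

def coreset(points, k):
    """Return (representative, weight) pairs summarizing the dataset."""
    centers, clusters = kmeans_1d(points, k)
    return [(c, len(members)) for c, members in zip(centers, clusters)]

data = [1.0, 1.1, 0.9, 10.0, 10.2, 9.8, 10.1]
summary = coreset(data, k=2)  # seven points collapse to two weighted points
```

Because the representatives preserve the data's cluster structure, the same summary can serve both unsupervised tasks (clustering itself) and, with labels attached, supervised ones, which is the robustness property He highlights.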
The team also developed a distributed version of the algorithm with a very low communication overhead. “Compared to training a neural network on the raw data, training it on a coreset generated by our proposed algorithm can reduce the data transfer by more than 99% at only an 8% loss of accuracy,” He notes.
This result means that the coreset can enhance the performance of machine-learning algorithms, especially within those tactical environments where bandwidth is scarce.
“Given advanced techniques to increase the rate at which analytics can be updated, soldiers will have access to updated and accurate analytics,” says Kevin Chan, an electronics engineer at the Army lab. “This research is crucial to Army networking priorities in support of machine learning that enables multidomain operations, with direct applicability to the Army’s network modernization priority.”
The new algorithm is straightforward to use with various data-capturing devices – including high-volume, low-entropy devices such as surveillance cameras – to significantly reduce the amount of collected data while ensuring guaranteed near-optimal performance for a broad set of ML applications, according to He.
As a result, soldiers will receive faster updates and smoother transitions as situations change, while maintaining competitive accuracy.
Beyond applications within the military domain, coresets and distributed ML in general “are also widely applicable within the commercial setting, where multiple organizations would like to jointly learn a model but cannot share all their data,” says Shiqiang Wang, an IBM Research staff member and a collaborator on this work.
Going forward, the team will be exploiting various ways of combining coreset construction with other data-reduction techniques to achieve more aggressive data compression at a controllable loss of accuracy.
“We’re exploring how to optimally allocate bits between coreset construction (generating more samples) and quantization (having a more accurate representation per sample),” He says. “We’re also exploring how to optimally combine two approaches: reducing the number of data records using coreset and reducing the number of features per data record using dimensionality-reduction techniques.”
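The bit-allocation tradeoff He describes can be sketched with a generic uniform quantizer: under a fixed bit budget, sending more coreset samples means fewer bits per sample, and vice versa. The function below is a textbook uniform quantizer, shown only to make the tradeoff concrete; it is not the team's method.

```python
# Uniform quantization: map each value in [lo, hi] to one of 2**bits levels,
# transmit the level index, and reconstruct on the receiving side. Fewer
# bits per sample frees budget for more samples, at the cost of per-sample
# precision.

def quantize(values, bits, lo, hi):
    """Quantize values in [lo, hi] to 2**bits levels and reconstruct them."""
    levels = 2 ** bits - 1
    step = (hi - lo) / levels
    codes = [round((v - lo) / step) for v in values]   # what gets transmitted
    return [lo + c * step for c in codes]              # receiver's view

samples = [0.12, 0.57, 0.93]
coarse = quantize(samples, bits=2, lo=0.0, hi=1.0)  # 4 levels: large error
fine = quantize(samples, bits=8, lo=0.0, hi=1.0)    # 256 levels: small error
```

The open question the researchers pose is how to split a fixed budget between the two knobs (sample count vs. bits per sample) so that total model accuracy loss is minimized.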
AI and ML “are promising techniques to revolutionize how we operate our networked systems and satisfy users’ information needs,” He notes.