Military Embedded Systems

Is open delaying the future of cognitive computing?

September 17, 2014

Dr. Ian Dunn

Mercury Systems

Open systems have the consequence, intended or not, of delaying innovation in computing. When industry players collaborate to standardize and build the complex systems that eventually become useful products, the process can take longer than it would for a single organization working alone.

On the other hand, open systems drive the economics behind Moore's Law; without them, Moore's Law would probably not be very valuable. Open systems are what unleash it.

By roughly 2040, assuming Moore’s Law continues unabated, it will be possible to purchase the computing capacity of a human brain for about $1,000. That’s an astounding concept. Will it shed tears? Will it be happy? The 2013 movie “Her” was essentially based on this premise. As is typically the case, science fiction may be predicting the not-too-distant future. Because of this movie, it is more than likely that there are kids somewhere thinking about the future in these terms. The reality is that we don’t have very far to go; 2040 is not all that far off. There has been an enormous amount of innovation since Moore’s Law was introduced in 1965 – a remarkably short period of time in the context of human and technological development.
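
For readers who want to see where a date like 2040 comes from, the back-of-the-envelope arithmetic is short. The Python sketch below works through it; every constant in it – brain-scale compute of roughly 10^16 operations per second (a common Kurzweil-style estimate), about 10^12 operations per second per $1,000 of 2014 hardware, and a two-year doubling period – is an illustrative assumption, not a figure from this article.

```python
import math

# Back-of-the-envelope version of the "brain for $1,000 by ~2040" projection.
# All constants are illustrative assumptions, not figures from the article.
BRAIN_OPS_PER_SEC = 1e16         # rough Kurzweil-style estimate of brain compute
OPS_PER_KILODOLLAR_2014 = 1e12   # assumed ops/sec that $1,000 bought in 2014
DOUBLING_PERIOD_YEARS = 2.0      # assumed Moore's Law doubling period

shortfall = BRAIN_OPS_PER_SEC / OPS_PER_KILODOLLAR_2014  # improvement still needed
doublings = math.log2(shortfall)                         # doublings to close the gap
year = 2014 + doublings * DOUBLING_PERIOD_YEARS

print(f"Need a {shortfall:.0e}x improvement ({doublings:.1f} doublings)")
print(f"$1,000 buys brain-scale compute around {year:.0f}")  # -> about 2041
```

Under those assumptions the arithmetic lands around 2041, consistent with the “roughly 2040” figure above; different but still-plausible constants shift the date by only a few years in either direction.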

In my opinion, the demonstrable evolution of Moore’s Law can be divided into three distinct stages of computing – processing, exploitation, and open – as shown in Figure 1. The shaded areas in the diagram represent the commoditization of those stages, the point where they have economic value.

Figure 1: The demonstrable evolution of Moore’s Law can be divided into three distinct stages of computing: processing, exploitation, and open. The shaded areas of the diagram represent the point at which these three stages have an economic value.

Early in the first stage, there was no economic value because processing was still advanced research, carried out primarily in the world's laboratories. By about 1990, however, with the advent of desktop computing, basic processing was commoditized, spawning a wide range of applications such as email, spreadsheets, Gantt charts, and other tools. We are now seeing the commoditization of the second stage – exploitation. Exploitation, in this case, means the business of researching, collecting, evaluating, and analyzing information and selling it to governments and businesses. This information, commonly known as Big Data, is used for a wide range of applications, from fraud prevention and medical research to political canvassing and marketing. One very public example of this type of exploitation was exposed by Edward Snowden's leak of classified documents in 2013.

The advances in processing and exploitation have led to two distinct phases of application development – adaptive and cognitive. The commoditization of processing in the embedded and non-embedded markets resulted in a wide range of adaptive systems, which can adjust in real time to changing parameters. A representative example is traffic lights: Historically, traffic lights operated on fixed timers programmed by officials who used traffic-pattern studies to determine optimal timing. As traffic patterns shifted over time, however, bottlenecks sprang up. In an adaptive system, a computer tracks traffic in real time and adjusts the timing of the lights to keep traffic moving optimally, as in the sketch below. Other adaptive systems include doors that open when someone approaches and robots that adapt to changes in their environment. What we are going to see next, where exploitation and cognitive application development meet, is the commoditization of cognitive systems. An early example of this was IBM’s “Watson” question-answering computing system that competed on the quiz show “Jeopardy!”
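
As a concrete illustration of the adaptive idea, here is a minimal, hypothetical controller in Python. Nothing in it comes from a real deployment; the function names, timing constants, and queue figures are all invented for the example, which simply scales green time to the traffic the controller actually observes rather than to a fixed schedule.

```python
# A minimal, hypothetical adaptive traffic-light controller.
# All names and numbers are illustrative; a real deployment is far more involved.

def green_time(queue_len: int, base_s: float = 10.0,
               per_car_s: float = 2.0, max_s: float = 60.0) -> float:
    """Scale the green phase to the observed queue, up to a cap."""
    return min(base_s + per_car_s * queue_len, max_s)

def next_phase(ns_queue: int, ew_queue: int) -> tuple:
    """Serve the heavier axis first, for as long as its queue warrants."""
    if ns_queue >= ew_queue:
        return "north-south", green_time(ns_queue)
    return "east-west", green_time(ew_queue)

# Example: a rush-hour reading from hypothetical roadway sensors.
phase, seconds = next_phase(ns_queue=14, ew_queue=3)
print(f"Give {phase} a {seconds:.0f}-second green")  # north-south, 38 seconds
```

The contrast with a fixed timer is the point: the same intersection gets a 38-second green under heavy load and the 10-second minimum when the road is empty, with no official reprogramming anything.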

The interesting thing about open systems, the third stage of computing, is that over the last 20 years in the embedded market they have consistently lagged the frontier of innovation by as much as five years or more. Watson competed against the other game-show participants in 2011 and won. Will it take five or 10 years before the same thing is done on an Android phone with an open architecture? Five years is a realistic time frame, given what is known about the way open systems are created.

Again: Open is slower when it comes to innovation. From an economic perspective, what would happen to this curve if open didn’t exist? Would the curve flatten, be nonexistent, or have an entirely different price characteristic than what is represented?

In order to fully capitalize on open systems, the embedded industry must focus on two areas. The first is the tradeoff between market expansion on one hand and control over the time and cost of the solution on the other. When a company opts to embrace open systems, it is either pioneering an open architecture, participating in its development, or simply using it. Regardless of how it is involved, the company relinquishes control over the timing and cost of the return on its investment in the open architecture. In exchange, at least in theory, it benefits from market expansion and performance efficiencies.

As a community, our industry must do a better job of organizing to manage this tradeoff. That is particularly important for cognitive computing, which is not only the next frontier of complexity for our industry but also essential to much of what it hopes to accomplish. So what is considered essential? It is not the ability to predict consumers’ shopping habits and trends. Rather, it is helping to find a cure for cancer, Alzheimer’s, and other diseases. It is improving the human factor in war to minimize casualties. These are the kinds of essential capabilities that cognitive systems may provide in the future.

There has been some improvement in time management. Unlike in the past, today we are driving standards development rather than simply allowing a standard to unfold as an engineering activity. To deliver essential capabilities in a reasonable time frame, however, our industry must focus on speed of adoption and devise a better way of managing the existing tradeoff.

Every day there are groundbreaking technologies – as predicted by Moore’s Law – that create tremendous innovation. Unfortunately, there are not enough adopters. Again, we experience the dichotomy between open systems and the speed of adoption of the power Moore’s Law predicts. There is a real concern that the current lag between innovation and open systems may widen. Our nation’s security industry provides a real example of this possibility: The F-35 Joint Strike Fighter requires complex open systems, which are increasingly difficult to assemble and, in many cases, cost more money.

The second area of focus is open innovation. While “open innovation” is a widely used and diluted term, there is a narrower interpretation that makes sense. Open innovation is not really about innovation; it is about access to talent that a company may not otherwise be able to afford to employ directly. In essence, any professional in our industry is qualified to work for any company, yet the economics of the industry don’t make that practical or affordable. Vertical integration on that scale simply can’t exist from a financial perspective.

The only viable alternative to vertical integration – collaboration among the best and brightest minds of the industry – lies in standards communities like VITA and others. The challenge, unfortunately, is that the industry has devolved to the point of having one or two proprietary pioneers, as seen in the case of the Watson computer. Years after a technology is introduced, an open community may be established to focus on and solve a particular problem related to the proprietary technology. This time lag, together with the shadow that a demonstrated proprietary system casts over any potential open alternative, makes innovating difficult at best.

What is needed is innovation in the way the industry manages supply chains and collaboration. The standards community must take the lead and pioneer changes and improvements in this area. If our industry focuses on developing better ways to manage supply chains and to create virtual communities where innovation thrives, we can deliver the Moore’s Law capabilities that customers want, maintain more control, and improve the overall return on investment.

At present, cognitive systems are the most important target zone for this community. We have more computing power right now than our customers and our marketplace can adopt. We have more speed and more switching capability than we can use. Some may argue that there is not always enough I/O. Even when I/O is charted against Moore’s Law, however, the industry is vastly outstripping the capabilities of physical systems in many ways.

The question remains: What is going to happen with all that computing power? It’s going to be put into cognitive capability. Systems will become much more adaptive than they are today. Hopefully that will help extend human life, reduce or eliminate disease, improve human factors in wars, increase productivity levels, and provide opportunities for better socialization and interaction for all of us.

Dr. Ian Dunn is currently the VP and General Manager of the Embedded Products Group, part of Mercury Systems’ Commercial Electronics business unit. He is responsible for embedded product development across the entire sensor-processing chain. Previously, Dr. Dunn was VP and General Manager of Mercury’s Microwave and Digital Solutions Group and prior to that was the company’s CTO responsible for technology strategy and R&D projects. Dr. Dunn joined Mercury Computer Systems in 2000 as a systems engineer upon completing a Ph.D. at Johns Hopkins University in Electrical Engineering. As a doctoral student at Johns Hopkins, Dr. Dunn consulted for Disney Imagineering and Northrop Grumman on distributed automation and various high-performance computing projects. He has 20 years of experience designing and programming parallel computers for real-time signal processing applications and has authored numerous papers and a book on designing signal-processing applications for high-performance computer architectures. Please direct any questions to [email protected].

Mercury Systems 866-627-6951 www.mrcy.com
