Getting the most out of multicore processors
July 14, 2010
With the emergence of multicore SoCs, system integrators can now combine multiple applications on a single device and optimize how the work is spread across cores.
During the past 10 years, the embedded market has undergone an SBC evolution, migrating from a single processor to two processors, then to dual-core processor chips. More recently, the industry has seen the emergence of multicore System-on-Chip (SoC) devices like the P4080 from Freescale, which is available with eight processing cores. With all of these cores at their disposal, system integrators now face the challenge of how best to use them in embedded computing applications.
Saving system slots with multicore processors
One way to exploit these new microprocessor designs is to save board slots by taking applications that today require two or more SBCs and moving them onto a single SBC. For example, with a dual-core processor, a user can choose to treat the device as two independent single-core processors that share I/O and memory. Each core then executes what would otherwise have been handled by a separate SBC. This approach can often be realized with minimal software effort.
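As a rough illustration of this slot-saving approach (a minimal sketch only, assuming a Linux-based board with POSIX APIs rather than any particular vendor's BSP), each application can pin itself to its own core at startup, so the two programs coexist on one SBC much as they once did on separate boards:

```c
/* Minimal sketch (assumes a Linux-based SBC): pin the calling process to
 * one core so that two applications can each "own" a core of a dual-core
 * device, much as they would have owned separate SBCs. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

static void pin_to_core(int core)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);            /* allow this process to run only on 'core' */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        exit(EXIT_FAILURE);
    }
}

int main(void)
{
    pin_to_core(0);                 /* application A pins to core 0;          */
                                    /* application B would pin to core 1      */
    /* ... application code runs here, confined to its own core ... */
    printf("running on core %d\n", sched_getcpu());
    return 0;
}
```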
The benefits of SMP
Another option for taking advantage of multiple cores is to collapse the applications that would otherwise have run on two independent SBCs and run them on the two Freescale 8640 cores in Symmetric Multiprocessing (SMP) mode. This method offers the benefit of increased application efficiency, since either application can run on either core at any time. It can, however, require considerably more software work than the first approach, because the system designer must ensure that the applications are safe to run concurrently and no longer assume a single processing core.
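By way of contrast, the sketch below shows the SMP style using POSIX threads (the workload and thread count are illustrative assumptions, not details from the article). Because the OS is free to run either worker on either core at any moment, shared data must now be protected explicitly:

```c
/* Sketch of the SMP approach (POSIX threads assumed): the two workers can
 * be scheduled on either core at any time, so shared state must be
 * protected explicitly. */
#include <pthread.h>
#include <stdio.h>

static long shared_count = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);   /* required: both cores touch this data */
        shared_count++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("count = %ld\n", shared_count);
    return 0;
}
```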
Interest in SMP is likely to grow as the number of cores increases, because SMP lets the system designer concentrate on the application itself rather than on how to split the application(s) across the available cores. SMP frees the designer from having to program each core individually. But, not surprisingly, there is no free lunch. In an SMP configuration, one of the cores must serve as the gatekeeper, and as the number of cores (and tasks) increases, so does the work the gatekeeper is required to do. The danger is that the demands on the gatekeeper become a bottleneck and overall performance decreases. In this case, “more” does not necessarily mean “better.”
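One rough way to quantify this effect (an illustration added here, not taken from the original article) is Amdahl's law: if a fraction s of the total work must pass serially through the gatekeeper, the best possible speedup on N cores is S(N) = 1 / (s + (1 - s)/N). With s = 0.2, for example, eight cores deliver at most about a 3.3x speedup rather than 8x, and each additional core yields rapidly diminishing returns.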
Implementing multicore devices
Most embedded system designers will likely end up using a combination of these two methods to address their application requirements. Figure 1, depicting four segments (a through d), presents examples of different configurations and potential methods for implementing multicore devices. The examples assume an eight-core device.
Figure 1: Examples of different configurations and potential methods for implementing multicore devices.
Figure 1a illustrates the configuration described earlier in which all the cores run under a single OS. This SBC configuration can be very effective if the applications do not require a great deal of real-time processing.
With a large number of cores available, one design approach is to split the cores and run two OS instances, either the same OS or different ones, each across a designated group of cores. Figure 1b depicts this split, with four cores assigned to each OS. This approach addresses real-time issues better than the first example, and it offers the added flexibility of running two different OSs and application sets. Depending on the OSs used, however, there might be restrictions on which data each of them can access.
Figure 1c illustrates a configuration that allocates a separate core to specific applications. This approach is well suited to applications that are real-time sensitive. In this type of configuration, for example, the first four cores can be assigned the general-purpose applications, while Core 5 and Core 6 are dedicated to network stacks, Core 7 handles a secure gateway (encryption, decryption, and security protocols), and Core 8 is responsible for high-speed serial communications.
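A sketch of how this kind of static partitioning might look on a Linux-based SBC using POSIX threads is shown below; the core numbers, thread names, and affinity calls are assumptions for illustration, not details from the article:

```c
/* Illustrative sketch (assumes Linux + GNU pthreads): dedicate individual
 * cores to specific functions by creating each thread with an affinity
 * mask restricted to its assigned core. Core indices are zero-based. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

static void *network_stack(void *arg)  { (void)arg; /* packet processing here */ return NULL; }
static void *secure_gateway(void *arg) { (void)arg; /* encrypt/decrypt here    */ return NULL; }

/* Start 'fn' in its own thread, confined to a single core. */
static void spawn_on_core(pthread_t *t, void *(*fn)(void *), int core)
{
    cpu_set_t set;
    pthread_attr_t attr;

    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_attr_init(&attr);
    pthread_attr_setaffinity_np(&attr, sizeof(set), &set);
    pthread_create(t, &attr, fn, NULL);
    pthread_attr_destroy(&attr);
}

int main(void)
{
    pthread_t net, gw;

    spawn_on_core(&net, network_stack, 4);   /* "Core 5" in the article's numbering  */
    spawn_on_core(&gw,  secure_gateway, 6);  /* "Core 7" handling the secure gateway */
    pthread_join(net, NULL);
    pthread_join(gw, NULL);
    return 0;
}
```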
Figure 1d shows yet another possible configuration. In this case, the first two cores run in SMP mode and handle all of the housekeeping functions, such as I/O. The remaining six cores serve as processing engines, perhaps running a DSP algorithm or some other parallel-processing scheme, using the multiple cores to increase processing power and reduce processing time.
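A minimal sketch of this split, again assuming Linux and POSIX threads (the buffer size, worker count, and trivial "DSP" kernel are placeholders): cores 0 and 1 are left to the OS and housekeeping, while six worker threads pinned to cores 2 through 7 each process one slice of a signal buffer in parallel:

```c
/* Illustration of the Figure 1d split (assumes Linux, GNU pthreads, and
 * zero-based core numbering): cores 0-1 stay with the OS and housekeeping,
 * while six workers pinned to cores 2-7 each process one slice of a buffer. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

#define WORKERS 6
#define SAMPLES 6000

static float buf[SAMPLES];

struct slice { int first; int count; };

static void *dsp_worker(void *arg)
{
    struct slice *s = arg;
    for (int i = s->first; i < s->first + s->count; i++)
        buf[i] *= 0.5f;                  /* stand-in for a real DSP kernel */
    return NULL;
}

int main(void)
{
    pthread_t tid[WORKERS];
    struct slice sl[WORKERS];

    for (int w = 0; w < WORKERS; w++) {
        sl[w].first = w * (SAMPLES / WORKERS);
        sl[w].count = SAMPLES / WORKERS;

        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(w + 2, &set);            /* workers occupy cores 2..7 */
        pthread_create(&tid[w], NULL, dsp_worker, &sl[w]);
        pthread_setaffinity_np(tid[w], sizeof(set), &set);  /* pin right after creation */
    }
    for (int w = 0; w < WORKERS; w++)
        pthread_join(tid[w], NULL);

    puts("all slices processed");
    return 0;
}
```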
Meeting the multicore challenge
System integrators, given access to more and faster cores on a single processor, are only now working out how best to harness all of the newly available processing power. Multiple cores can offer great benefits and deliver impressive results when used properly, but the right way to leverage them in a given application might not be immediately obvious, and some trial and error might be needed before the proper configuration is found.
To learn more, e-mail Steve at [email protected].