
Benefits and Advantages of Virtualization for Embedded Products

Posted on December 3, 2019 by DornerWorks Ltd.

DornerWorks Virtuosity®, based on open source Xen, is a quick, easy, and most importantly free way for developers to add virtualization to their Xilinx Zynq UltraScale+ MPSoC or NXP i.MX8 SoC-based platforms.

We’ve been interested in using type-1 hypervisors like Xen for many years, and even won a Small Business Innovation Research (SBIR) award in 2011 to explore some of our ideas. Virtualization extensions added to ARMv7 processors started the trend, but the improved capabilities of the ARMv8 architecture, coupled with advanced fabrication techniques that make it practical to produce a complex System on Chip (SoC) with multiple cores, have really sparked both interest in and the need for virtualization in the embedded space.

That interest is driven by the number of benefits that virtualization can bring to an embedded solution. Just like the server market twenty years ago, embedded processors have crossed an inflection point where the value of being able to easily manage the processing power provided by the silicon surpasses the value of the processing power lost to the overhead incurred by the technology enabling it. For the embedded space, it helps that the ARMv8’s virtualization features allow Xen to run with very little overhead and that Xen has had over fifteen years to mature in the server market.

We have identified three main classes of reasons to use virtualization technologies like Xen on your embedded project: 1) to reduce cost/schedule, 2) to enable new or improved features, and 3) to reduce project and product risk. These are powerful benefits that can be enjoyed by your project without a recurring license fee. You can even try it out with no up-front costs to see how Xen-provided virtualization can meet your needs. Best of all, DornerWorks is ready and able to provide expert advice and support if you decide you need customizations, additional features, or Xen consultation.

Virtualization helps you reduce cost and schedule

The primary benefit of using virtualization is that it can reduce the production cost of your product, both in nonrecurring engineering and unit cost, while also helping to reduce schedule. The main way that virtualization helps you accomplish this is by allowing you to combine and consolidate different software components while still maintaining isolation between them. This feature of virtualization enables many use cases.

One such use case is reducing size, weight, power, and cost (SWaP-C) by reducing part count. Thanks to Moore’s Law, modern multi-core processors like the Zynq UltraScale+ MPSoC are processing powerhouses, often providing more computation power than needed for a single function. The ability to consolidate while maintaining isolation allows you to combine software that otherwise might have been deployed on multiple hardware systems or processors onto a single MPSoC chip. A single hardware platform is also easier to manage than a multi-platform system.
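As a rough illustration of what consolidation can look like on a Xen system, each function brought onto the chip typically gets its own guest configuration in dom0. The sketch below shows two minimal xl guest configs; the guest names, kernel image paths, and disk device are hypothetical placeholders, and the exact options will depend on your board support package.

    # /etc/xen/motor-control.cfg -- hypothetical bare-metal/RTOS guest
    name   = "motor-control"
    kernel = "/boot/guests/motor-control.bin"
    memory = 128            # MiB of RAM reserved for this guest
    vcpus  = 1
    cpus   = "2"            # pin the guest's vCPU to physical core 2

    # /etc/xen/hmi.cfg -- hypothetical Linux guest for the user interface
    name   = "hmi"
    kernel = "/boot/guests/Image"
    extra  = "console=hvc0 root=/dev/xvda"
    memory = 1024
    vcpus  = 2
    disk   = ['phy:/dev/vg0/hmi-rootfs,xvda,w']

Each guest is then started independently from dom0 (for example, xl create /etc/xen/hmi.cfg), so the two functions share the MPSoC without sharing an OS instance.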


Consolidation with isolation can also be used to enforce greater decoupling of the software components. Coupling between software components leads to all kinds of problems with development, integration, maintenance, and future migrations. This is because coupling leads to dependencies, sometimes implicit or unknown, between the components such that a change or addition to one software unit often has a wide-reaching and unexpected ripple effect. Running different software functions in their own VMs leads to very strong decoupling where any dependencies between software functions are made explicit through configuration or I/O calls, making it easier to understand and eliminate unintended or unexpected interactions. Strong decoupling also allows greater freedom to develop the software functions in parallel with a higher confidence that the different pieces won’t interfere with one another during integration.

As an aside, this level of decoupling is critical in applications needing security or safety certification, as it is a requirement to show certification authorities that there are no unintended interactions. By restricting and reducing the amount of intended interaction with strict design decoupling and VM isolation, you can also reduce re-certification costs by being able to show how changes and additions are bounded to the context of a particular VM.

Even outside the realm of safety and security considerations, isolation gives you the ability to replace a software function with a compatible one, and the confidence that adding software in a new VM won’t cause existing software functions to fail. It also becomes easier to reuse software components developed for one project on another: simply take the VM in which a component runs and deploy it as a guest in a different system, allowing a mix-and-match approach with your existing software IP.


Virtualization helps you enable new and improved features

The capabilities provided by Xen virtualization can also be used to enable new features and improve old ones. The isolation can be used to sandbox software functions so that a breach or failure in one VM is limited to that VM alone. Not even a security vulnerability in a VM’s OS would result in the compromise of functions in another VM, providing defense in depth.

The capability to consolidate disparate software functions enables the implementation of a centralized monitoring and management function that operates externally to the software functions being monitored. This monitoring and management function could be used to detect and dynamically respond to breaches and faults; for example, restarting faulted VMs or terminating compromised VMs before an attacker can exploit them. A centralized monitoring function could also prove useful in embedded applications that place a greater emphasis on up-time. The monitoring function could detect or predict when a VM is faulting, or about to fault, and ready a backup VM to take over with minimal loss of service.
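As a rough sketch of what such a monitor might look like with Xen’s xl toolstack, the dom0 script below simply polls the domain list and recreates a guest that has disappeared (with the default on_crash setting, a crashed guest is destroyed and drops out of the list). The guest name, config path, and polling interval are placeholders, and a production monitor would add application-level health checks rather than relying on the domain list alone.

    #!/usr/bin/env python3
    # Minimal dom0 watchdog sketch: restart a guest that has crashed or been destroyed.
    # Assumes the xl toolstack is on PATH; the guest name and config path are hypothetical.
    import subprocess
    import time

    GUEST = "payload"                  # hypothetical guest to watch
    CONFIG = "/etc/xen/payload.cfg"    # hypothetical xl config used to recreate it
    POLL_SECONDS = 5

    def running_guests():
        """Return the set of domain names currently reported by 'xl list'."""
        out = subprocess.run(["xl", "list"], capture_output=True,
                             text=True, check=True).stdout
        # Skip the header row; the first column of each remaining row is the domain name.
        return {line.split()[0] for line in out.splitlines()[1:] if line.strip()}

    while True:
        if GUEST not in running_guests():
            # The guest is gone (crashed, destroyed, or never started): bring it back.
            subprocess.run(["xl", "create", CONFIG], check=False)
        time.sleep(POLL_SECONDS)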

There are other cases, common in the server world, where VMs are managed algorithmically by other programs, being created, copied, migrated, or destroyed in response to predefined stimuli. Virtualization enables guest migration, where the entire software stack, or part of it, can be moved from one VM to another, potentially on another platform entirely. This could be an important enabler for self-healing systems. Migration can also help with live system upgrades: the system operator could patch the OS or a critical library in a backup copy of the VM and test the patched VM before switching over to it, again with minimal loss of service. Another use case seen in the server market is load balancing, either by dynamically controlling the number of VMs running to meet current demand or by migrating VMs to a computing resource closer to where the processing is actually needed, reducing traffic on the network.

 

Virtualization helps you reduce program and product risk

Virtualization can be used to reduce program risk by providing a means to reconcile contradictory requirements. The most obvious example is the case where two pre-existing applications are needed for a product, but each was developed to run on a different RTOS. In this case, the contradictory requirements concern which OS to use. Other examples include different safety or security levels, where isolation allows you to avoid having to develop all your software to the highest level, or software functions with different license agreements.

Long-lived programs can also benefit from the ability to add new VMs to the system at a later date, creating a path for future updates. Likewise, in a system using VMs, it becomes easier to migrate to newer hardware, especially if the hardware supports backward compatibility, as ARMv8 does for ARMv7. Even if it doesn’t, thanks to Moore’s Law, newer processors will have even greater processing capabilities, and emulation can be used in a VM to provide the environment necessary to run legacy software.

Virtualization can also be used to reduce the risk of system failure during runtime. Dynamic load balancing, mentioned previously, is one way to reduce the risk of failure, but virtualization can also be used to easily provide redundancy for key functionality by running a second copy of the same VM. With the aforementioned centralized monitoring, the redundant VM can even be kept in a standby state and only brought to an active state if data indicates a critical function is experiencing issues or is otherwise about to fail.
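Under the same assumptions as the earlier watchdog sketch, a hot spare could be created in a paused state (xl create -p) so that it consumes no CPU time until needed, and then unpaused when the monitor decides the primary has failed. The guest names below are hypothetical and the failure test is deliberately simplistic.

    #!/usr/bin/env python3
    # Sketch of promoting a paused standby guest when the primary disappears.
    # Assumes the standby was started paused, e.g.: xl create -p /etc/xen/ctrl-standby.cfg
    import subprocess

    PRIMARY = "ctrl-primary"    # hypothetical active guest
    STANDBY = "ctrl-standby"    # hypothetical hot spare, created paused

    def domains():
        """Return the set of domain names currently reported by 'xl list'."""
        out = subprocess.run(["xl", "list"], capture_output=True,
                             text=True, check=True).stdout
        return {line.split()[0] for line in out.splitlines()[1:] if line.strip()}

    names = domains()
    if PRIMARY not in names and STANDBY in names:
        # The primary is gone: unpause the pre-built standby so it takes over immediately.
        subprocess.run(["xl", "unpause", STANDBY], check=False)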
