
Streamlining Automotive Software Testing

Rigorous testing is an integral part of any software development effort. In the automotive world, it is imperative to accurately test how any new or updated software will perform on vehicle hardware and how it might interact with other hardware and software.

As global teams develop software within new software architectures, it becomes extremely challenging to test for a wide range of conditions on hardware test benches tied to specific locations — all while trying to speed time to market.

Test benches have to be configured manually, which can lead to errors. If someone forgets to ask a technician on-site to update the bench configuration, the test has to be restarted. Downloading the gigabytes of software images necessary to run tests on a hardware test bench is time-consuming, especially if the test site lacks reliable network bandwidth.

In addition, test benches may be scarce, particularly in the early stages of software development. Some projects may have to accommodate dozens of configurations, each of which has to be tested. There may be multiple testing environments, running on different operating systems, with different test suites and different hardware. All of these factors hinder continuous testing.

Orchestration needed

To meet these challenges, developers can leverage a cloud-based tool to manage, configure, reserve and schedule testing assets, wherever they may physically reside. Wind River Studio Virtual Lab provides that capability, enabling developers to test their code on hardware-in-the-loop test benches distributed across locations connected to the cloud — making those test benches available to widely distributed software development teams and more fully utilizing the investment in the hardware.

The capability can be extended to software-in-the-loop simulations running in the cloud. If the software is containerized and uses middleware with a common interface, the software can be fully abstracted from the hardware elements, allowing for fully automated software testing without a dependency on the availability of hardware. A full-system software simulator can mimic how the software would behave on a specific system-on-a-chip — and the elastic compute capability of the cloud means that simulations can run faster and that more of them can run in parallel.
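The abstraction described above amounts to writing test logic against a common interface, so the same test body can target either a hardware bench or a cloud-hosted simulator. The sketch below assumes a simple signal-based interface; the names are illustrative, not a Wind River API.

```python
"""Sketch of decoupling the software under test from its execution target
via a common middleware interface (illustrative names only)."""
from abc import ABC, abstractmethod


class ExecutionTarget(ABC):
    """The test logic never knows whether it is talking to a hardware
    test bench or a full-system simulator."""

    @abstractmethod
    def send(self, signal: str, value: float) -> None: ...

    @abstractmethod
    def read(self, signal: str) -> float: ...


class SocSimulator(ExecutionTarget):
    """Stand-in for a cloud-hosted simulator mimicking a specific SoC."""

    def __init__(self) -> None:
        self._signals: dict[str, float] = {}

    def send(self, signal: str, value: float) -> None:
        self._signals[signal] = value

    def read(self, signal: str) -> float:
        return self._signals.get(signal, 0.0)


def brake_request_test(target: ExecutionTarget) -> bool:
    """The same test body runs unchanged on HIL or SIL targets."""
    target.send("brake_request", 0.8)
    return target.read("brake_request") == 0.8


print(brake_request_test(SocSimulator()))  # True
```

A hardware-backed implementation of `ExecutionTarget` could be swapped in later without touching any test code, which is what removes the dependency on hardware availability.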

Whether a software-only or hardware-based test approach is appropriate depends on several factors. One is the functionality being tested; for example, power consumption results can be difficult to obtain in a software-only environment, and vehicle system diagnostics can be impossible to mimic. Another factor is whether hardware test benches are available at the time, since they are often in demand and limited to select locations. The stage of software development also matters: Early in the process, a quick, software-only check may be sufficient, whereas later in development, a fuller evaluation involving hardware or even the full vehicle itself (known as vehicle-in-the-loop testing) could be appropriate.
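Those decision factors can be encoded as a small helper, as a sketch only; the argument names and fallback rules here are assumptions for illustration, not product behavior.

```python
"""Illustrative helper encoding the environment-choice factors above
(assumed rules, not a real tool's logic)."""


def choose_environment(needs_power_measurement: bool,
                       needs_vehicle_diagnostics: bool,
                       bench_available: bool,
                       stage: str) -> str:
    # Power consumption and vehicle system diagnostics are difficult or
    # impossible to reproduce in a software-only environment.
    if needs_power_measurement or needs_vehicle_diagnostics:
        return "hardware-in-the-loop" if bench_available else "queue for bench"
    # Early in development, a quick software-only check is usually enough.
    if stage == "early":
        return "software-in-the-loop"
    # Later, prefer hardware when a bench is free; otherwise simulate.
    return "hardware-in-the-loop" if bench_available else "software-in-the-loop"


print(choose_environment(False, False, False, "early"))  # software-in-the-loop
```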

Connecting those tests to the actual software under development is Wind River Studio Test Automation. This tool sits at a higher level, helping developers choose which tests need to be run against a certain piece of code based on its functionality, dependencies and other software it might affect. It can automate software configuration to ensure that all relevant cases are accounted for. It then hands off requests to Studio Virtual Lab.
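The core idea of picking tests by functionality and dependencies can be sketched as a transitive walk over a dependency map: a change to one component selects the test suites of every component it might affect. The map and suite names below are made up for illustration; they are not how Studio Test Automation is implemented.

```python
"""Sketch of dependency-based test selection (illustrative data, not a
real tool's model)."""

# Map each software component to the components that depend on it.
DEPENDENTS = {
    "can_driver": {"diagnostics", "gateway"},
    "gateway": {"telematics"},
}

# Map each component to its test suite.
TESTS = {
    "can_driver": ["test_can_framing"],
    "diagnostics": ["test_dtc_readout"],
    "gateway": ["test_routing"],
    "telematics": ["test_uplink"],
}


def affected_components(changed: str) -> set[str]:
    """Transitively collect every component the change might affect."""
    seen, stack = {changed}, [changed]
    while stack:
        for dep in DEPENDENTS.get(stack.pop(), set()):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen


def select_tests(changed: str) -> list[str]:
    return sorted(t for c in affected_components(changed)
                  for t in TESTS.get(c, []))


print(select_tests("can_driver"))
# ['test_can_framing', 'test_dtc_readout', 'test_routing', 'test_uplink']
```

A change to a leaf component such as `telematics` would select only its own suite, which is what keeps the overall test load proportional to the blast radius of a change.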

Benefits of this approach

With access to both hardware test benches and software-based simulation resources, development can scale up or down as needed. In the past, there would be a queue for each hardware-based test. After each test, the system would be reset, then the next would start. Now, many tests can be run in parallel in the cloud, speeding time to market.
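The serial-queue versus parallel-cloud contrast can be demonstrated with a toy workload; the `run_test` stub below simply sleeps in place of a real simulation job, and the worker count is an arbitrary assumption.

```python
"""Toy demonstration of serial (one bench, one queue) vs. parallel
(elastic cloud) test execution. run_test is a stand-in for a real job."""
import time
from concurrent.futures import ThreadPoolExecutor


def run_test(case: str) -> str:
    time.sleep(0.1)  # stand-in for one simulation run
    return f"{case}: pass"


cases = [f"case-{i}" for i in range(8)]

# Serial: total time is roughly the sum of all runs.
start = time.perf_counter()
serial = [run_test(c) for c in cases]
serial_t = time.perf_counter() - start

# Parallel: the same cases run concurrently on elastic compute.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    parallel = list(pool.map(run_test, cases))
parallel_t = time.perf_counter() - start

assert serial == parallel  # identical results, much less wall-clock time
print(f"serial {serial_t:.2f}s vs parallel {parallel_t:.2f}s")
```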

Because organizations can run more tests, and run them earlier in the development process (so-called shift-left testing), they can raise the quality of all software components sooner, building a foundation for a fully automated development toolchain of continuous integration and continuous deployment.

In the future, AI could be used to determine which tests should be run and which are not relevant for a particular piece of code. With that, there is the potential to further automate the process, hone the software quality and deliver software improvements to vehicles sooner.