Grid Platform for Speeding Up Car Crash Simulations

Background

The client is one of the world’s biggest automotive suppliers. To develop airbag ECUs, automotive suppliers must fine-tune their crash detection algorithms to be accurate down to the millisecond, as that precision can be the difference between life and death.

Challenges

Fine-tuning these algorithms demands not only a great deal of work from the development team but also many crash simulations to observe how the algorithms behave. These simulations take a long time to execute, often running overnight and, in some cases, over the weekend. Because this slowed down the development process, the client decided to find and test new simulation algorithms and explore methods to reduce the time car crash simulations take.

Solution

The client partnered with our team to create metrics for evaluating these new simulation algorithms and methods, and to test them accordingly. During the testing phase, our team observed that even though the simulations took a long time, not all developers were running them at all times; in fact, most were programming or fine-tuning the crash detection algorithms, leaving a great deal of computing power unused.

With this in mind, our team proposed that the client take advantage of these vast untapped computing resources by running the crash simulations in parallel on multiple PCs. By drawing only on each machine’s spare capacity, the system could run simulations as quickly as possible without disturbing the developers. Seeing the huge potential to speed up development and reduce costs, the client quickly allocated the time and budget for our team to begin.

The main goals were:

  • Split car crash simulations into small work chunks that could be distributed efficiently to as many computers as possible across the network.
  • Run the work chunks in parallel on networked computers with free computing resources, without impacting the developers using those machines.
  • Ensure that users starting a simulation saw no visible changes: the system would parallelize as much as possible, or fall back to running the simulation locally if no network resources were available.
  • Build a robust system so that if a work chunk failed on one PC, whether because the developer needed the resources or because of a system error, the work would be transparently moved and restarted on another PC (see the sketch after this list).
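
Taken together, these goals amount to a small grid scheduler: track which chunk runs on which machine, and requeue a chunk whenever its machine drops out. The C++ sketch below illustrates only that requeue-on-failure idea; all names (WorkChunk, GridScheduler, the pc-* worker IDs) are hypothetical stand-ins, not the client’s actual code.

    // Minimal sketch of fault-tolerant chunk scheduling; illustrative only.
    #include <deque>
    #include <iostream>
    #include <map>
    #include <optional>
    #include <string>

    struct WorkChunk {
        int id;
        std::string payload;   // serialized slice of the simulation input
    };

    class GridScheduler {
    public:
        void submit(WorkChunk chunk) { pending_.push_back(std::move(chunk)); }

        // A worker with free resources asks for work.
        std::optional<WorkChunk> dispatch(const std::string& worker) {
            if (pending_.empty()) return std::nullopt;
            WorkChunk chunk = pending_.front();
            pending_.pop_front();
            inFlight_[worker] = chunk;
            return chunk;
        }

        // Called when a worker crashes or its developer reclaims the machine:
        // the chunk is transparently requeued for another PC.
        void onWorkerLost(const std::string& worker) {
            auto it = inFlight_.find(worker);
            if (it != inFlight_.end()) {
                pending_.push_front(it->second);  // retry with priority
                inFlight_.erase(it);
            }
        }

        void onChunkDone(const std::string& worker) { inFlight_.erase(worker); }

    private:
        std::deque<WorkChunk> pending_;                 // not yet dispatched
        std::map<std::string, WorkChunk> inFlight_;     // worker -> its chunk
    };

    int main() {
        GridScheduler scheduler;
        for (int i = 0; i < 4; ++i)
            scheduler.submit({i, "crash-sim-slice-" + std::to_string(i)});

        if (auto chunk = scheduler.dispatch("pc-17"))
            std::cout << "chunk " << chunk->id << " dispatched to pc-17\n";
        scheduler.onWorkerLost("pc-17");   // developer reclaims the machine
        if (auto chunk = scheduler.dispatch("pc-42"))
            std::cout << "chunk " << chunk->id << " restarted on pc-42\n";
    }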

To achieve these goals, our team had to significantly change the inner workings of the program. Simulations needed to be split into small work chunks that could be distributed across the network. The program then had to wait until the chunks were processed and the results were back, and reassemble them as if the entire simulation had run locally.
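
The paragraph above describes a classic scatter/gather flow. A minimal C++ sketch of that flow follows, with std::async standing in for dispatching chunks to idle PCs on the network; ChunkResult and runChunk are illustrative placeholders for the real simulation output and remote execution.

    // Illustrative split/scatter/gather flow, assuming chunks are
    // independent slices of the crash simulation.
    #include <algorithm>
    #include <future>
    #include <iostream>
    #include <vector>

    struct ChunkResult {
        int index;
        double value;   // stand-in for a block of simulation output
    };

    // Stand-in for running one work chunk on a remote (or local) machine.
    ChunkResult runChunk(int index) {
        return {index, index * 0.5};
    }

    int main() {
        const int numChunks = 8;

        // Scatter: launch chunks in parallel; std::async stands in for
        // sending each chunk to an idle PC on the network.
        std::vector<std::future<ChunkResult>> inFlight;
        for (int i = 0; i < numChunks; ++i)
            inFlight.push_back(std::async(std::launch::async, runChunk, i));

        // Gather: wait for every chunk, then reorder so the caller sees
        // the same result as a purely local run.
        std::vector<ChunkResult> results;
        for (auto& f : inFlight) results.push_back(f.get());
        std::sort(results.begin(), results.end(),
                  [](const ChunkResult& a, const ChunkResult& b) {
                      return a.index < b.index;
                  });

        for (const auto& r : results)
            std::cout << "chunk " << r.index << " -> " << r.value << '\n';
    }

Reordering the gathered results by chunk index is what lets the caller see exactly the same output as a purely local run.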

As the simulation tool was originally written in native C++ using MFC and Qt, the new system was developed around the same technologies to ensure maximum performance.
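
One detail a worker in such a system needs is a cheap check that its host PC really is idle before accepting a chunk, so that developers are never slowed down. The sketch below shows one way to measure CPU load with the Win API (listed among the project’s technologies); the 25% threshold and 500 ms sampling window are illustrative choices, not the client’s actual values.

    // Sketch: decide whether this PC is idle enough to accept a chunk.
    #include <windows.h>
    #include <iostream>

    static ULONGLONG toUll(const FILETIME& ft) {
        return (static_cast<ULONGLONG>(ft.dwHighDateTime) << 32) | ft.dwLowDateTime;
    }

    // Returns CPU load in percent over a short sampling window.
    double cpuLoadPercent(DWORD sampleMs = 500) {
        FILETIME idle1, kernel1, user1, idle2, kernel2, user2;
        GetSystemTimes(&idle1, &kernel1, &user1);
        Sleep(sampleMs);
        GetSystemTimes(&idle2, &kernel2, &user2);

        ULONGLONG idle  = toUll(idle2) - toUll(idle1);
        // Kernel time includes idle time, so total = kernel + user.
        ULONGLONG total = (toUll(kernel2) - toUll(kernel1)) +
                          (toUll(user2)  - toUll(user1));
        return total ? 100.0 * (total - idle) / total : 0.0;
    }

    int main() {
        bool machineIsFree = cpuLoadPercent() < 25.0;  // illustrative threshold
        std::cout << (machineIsFree ? "accept chunk\n" : "decline chunk\n");
    }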

Results

This project was a huge success for the client: our team reduced the duration of large simulations by up to 50 times.

Key features:

  • Reduced car crash simulation duration by up to 50 times
  • Parallelization of the simulation system
  • Robust, high-performance grid system

Industry: Grid Computing, Automotive

Technologies: Visual C++, C#, MFC, Qt, Win API

Tools: Visual Studio, SVN

Start your project today!