Continental sets up its own supercomputer

By Niranjan Mudholkar

Added 29 July 2020

Supercomputer for vehicle AI system training now in operation


In order to develop innovative technologies even more efficiently and quickly, Continental has invested in setting up its own supercomputer for Artificial Intelligence (AI), powered by NVIDIA InfiniBand-connected DGX systems. It has been operating from a datacenter in Frankfurt am Main, Germany, since the beginning of 2020 and offers computing power as well as storage to developers at locations worldwide. AI enhances advanced driver assistance systems, makes mobility smarter and safer, and accelerates the development of systems for autonomous driving.
"The supercomputer is an investment in our future," says Christian Schumacher, head of Program Management Systems in Continental's Advanced Driver Assistance Systems business unit. "The state-of-the-art system reduces the time to train neural networks, as it allows for at least 14 times more experiments to be run at the same time."
Continental's supercomputer is built with more than 50 NVIDIA DGX systems, connected with the NVIDIA Mellanox InfiniBand network. According to the publicly available TOP500 list of supercomputers, it is the top-ranked system in the automotive industry. A hybrid approach has been chosen so that capacity and storage can be extended through cloud solutions if needed. "The supercomputer is a masterpiece of IT infrastructure engineering," says Schumacher. "Every detail has been planned precisely by the team in order to ensure full performance and functionality today, with scalability for future extensions."
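The article does not describe the software stack that runs on these DGX nodes. Purely as an illustration, the minimal sketch below shows how a training job could be spread across InfiniBand-connected GPU nodes using PyTorch's DistributedDataParallel with the NCCL backend; the toy model, data and launch details are assumptions for this sketch, not Continental's actual setup.

```python
# Hypothetical sketch: multi-node data-parallel training on a GPU cluster.
# Assumes launch via `torchrun --nnodes=<N> --nproc_per_node=<GPUs per node> train.py`,
# which sets RANK, LOCAL_RANK and WORLD_SIZE; NCCL then uses InfiniBand where available.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")              # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    device = f"cuda:{local_rank}"

    model = torch.nn.Linear(512, 10).to(device)          # stand-in for a real network
    model = DDP(model, device_ids=[local_rank])          # gradient sync over NCCL/InfiniBand
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for step in range(100):                               # dummy data in place of sensor batches
        x = torch.randn(64, 512, device=device)
        y = torch.randint(0, 10, (64,), device=device)
        loss = torch.nn.functional.cross_entropy(model(x), y)
        optimizer.zero_grad()
        loss.backward()                                    # all-reduce of gradients across nodes
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```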
Advanced driver assistance systems use AI to make decisions, assist the driver and ultimately operate autonomously. Environmental sensors such as radar and cameras deliver raw data. This raw data is processed in real time by intelligent systems to create a comprehensive model of the vehicle's surroundings and to devise a strategy for interacting with the environment. Finally, the vehicle must be controlled so that it behaves as planned. But as these systems become more and more complex, traditional software development and machine learning methods have reached their limits. Deep Learning and simulation have become fundamental methods in the development of AI-based solutions.
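The article describes this processing chain only at a high level. The toy sketch below, with invented sensor values and thresholds, illustrates the general sense, model, plan and control loop the paragraph refers to; it is not Continental's software.

```python
# Illustrative only: a toy sense -> model -> plan -> act loop with made-up numbers.
from dataclasses import dataclass

@dataclass
class EnvironmentModel:
    obstacle_distance_m: float   # fused from camera/radar in a real system

def fuse_sensors(radar_m: float, camera_m: float) -> EnvironmentModel:
    """Combine raw sensor readings into a single world model (here: simple minimum)."""
    return EnvironmentModel(obstacle_distance_m=min(radar_m, camera_m))

def plan(model: EnvironmentModel, current_speed_mps: float) -> float:
    """Decide a target speed: come to a stop if an obstacle is close."""
    return 0.0 if model.obstacle_distance_m < 10.0 else current_speed_mps

def control(current_speed_mps: float, target_speed_mps: float) -> float:
    """Return an acceleration command that moves the vehicle toward the plan."""
    return max(min(target_speed_mps - current_speed_mps, 2.0), -5.0)

if __name__ == "__main__":
    world = fuse_sensors(radar_m=8.5, camera_m=12.0)
    target = plan(world, current_speed_mps=13.9)
    print("acceleration command (m/s^2):", control(13.9, target))
```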
With Deep Learning, an artificial neural network enables the machine to learn from experience and to connect new information with existing knowledge, essentially imitating the learning process of the human brain. But while a child can recognize a car after being shown a few dozen pictures of different car types, training a neural network that will later assist a driver or even operate a vehicle autonomously requires several thousand hours of training with millions of images, and therefore enormous amounts of data. The NVIDIA DGX POD not only reduces the time needed for this complex process, it also reduces the time to market for new technologies.
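As a rough illustration of this kind of supervised training (not Continental's pipeline), the sketch below runs a standard PyTorch image-classification loop; the synthetic FakeData dataset and the ResNet-18 backbone are placeholders standing in for the millions of labelled driving images and the real networks mentioned above.

```python
# Minimal sketch of supervised image training, assuming PyTorch/torchvision are installed.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

# FakeData generates random labelled images, replacing a real driving dataset.
dataset = datasets.FakeData(size=512, image_size=(3, 224, 224), num_classes=10,
                            transform=transforms.ToTensor())
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(num_classes=10)            # placeholder backbone
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(2):                             # real training runs many more passes
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")
```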
"Overall, we are estimating the time needed to fully train a neural network to be reduced from weeks to hours," says Balázs Lóránd, head of Continental's AI Competence Center in Budapest, Hungary, who also works on the development of infrastructure for AI-based innovations together with his groups in Continental. "Our development team has been growing in numbers and experience over the past years. With the supercomputer, we are now able to scale computing power even better according to our needs and leverage the full potential of our developers."
To date, the data used for training these neural networks comes mainly from Continental's test vehicle fleet. Currently, these vehicles drive around 15,000 test kilometers each day, collecting around 100 terabytes of data, the equivalent of 50,000 hours of movies. The recorded data can already be used to train new systems by replaying it, thereby simulating physical test drives. With the supercomputer, data can now also be generated synthetically, a highly compute-intensive use case that allows systems to learn by travelling virtually through a simulated environment.
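A quick back-of-envelope calculation, using only the figures quoted above, puts the recording density at roughly 6 to 7 gigabytes per test kilometer and about 2 gigabytes per "movie hour":

```python
# Back-of-envelope check of the fleet figures quoted above (illustrative arithmetic only).
daily_km = 15_000           # test kilometres driven per day
daily_tb = 100              # terabytes recorded per day
movie_hours = 50_000        # stated movie-hour equivalent of that data

gb_per_km = daily_tb * 1_000 / daily_km
gb_per_movie_hour = daily_tb * 1_000 / movie_hours

print(f"~{gb_per_km:.1f} GB recorded per test kilometre")     # ~6.7 GB/km
print(f"~{gb_per_movie_hour:.1f} GB assumed per movie hour")  # ~2.0 GB/hour
```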
This has several potential advantages for the development process. Firstly, over the long run it might make recording, storing and mining the data generated by the physical fleet unnecessary, as the required training scenarios can be created directly on the system itself. Secondly, it increases speed, as virtual vehicles can cover in a few hours the same number of test kilometers that would take a real car several weeks. Thirdly, synthetically generated data makes it possible to train systems to process and react to changing and unpredictable situations. Ultimately, this will allow vehicles to navigate safely through changing and extreme weather conditions or make reliable forecasts of pedestrian movements, thus paving the way to higher levels of automation.
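The article does not say how such synthetic scenarios are produced. Purely for illustration, the sketch below generates simple randomized scene descriptions of the kind a simulator could render; all parameters and ranges are invented for this example.

```python
# Illustrative only: generating simple synthetic driving scenarios instead of
# replaying recorded drives. Parameters and ranges are invented for this sketch.
import random

WEATHER = ["clear", "rain", "fog", "snow"]

def synthetic_scenario(seed: int) -> dict:
    """Produce one randomized scene description that a simulator could render."""
    rng = random.Random(seed)
    return {
        "weather": rng.choice(WEATHER),
        "ego_speed_kmh": rng.uniform(0, 130),
        "pedestrians": [
            {"distance_m": rng.uniform(2, 80), "crossing": rng.random() < 0.3}
            for _ in range(rng.randint(0, 5))
        ],
    }

if __name__ == "__main__":
    # A cluster job could fan millions of such seeds out across nodes.
    for seed in range(3):
        print(synthetic_scenario(seed))
```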
The ability to scale was one of the main drivers behind the conception of the NVIDIA DGX POD. Through this technology, machines can learn faster, better and more comprehensively than through any human-controlled method, with potential performance growing with every evolutionary step.
The supercomputer is located in a datacenter in Frankfurt, which was chosen for its proximity to cloud providers and, more importantly, its AI-ready environment, which fulfills specific requirements regarding cooling systems, connectivity and power supply. The computer is powered with certified green energy, and GPU clusters are by design much more energy-efficient than CPU clusters.