Teaching a machine to see the world


Probably no other term has received as much attention and hype over the past few years as “artificial intelligence,” or AI for short. AI – or machine learning, an essential subset of the technology – seems to be everywhere: the facial recognition software unlocking our phones, the personalization of our social media feeds, the recommendation algorithms on platforms like Netflix or Spotify. But the technology is still far from perfect and usually only works for particular tasks. A specific algorithm might beat humans at poker, for example – but that same smart algorithm won’t be able to tie the laces of a pair of shoes or tell a dog from a butterfly.

Smaller and smaller

While other AI systems often have the luxury of abundant resources like processing power or memory space, the team at ESR Labs has to be much more frugal. “We’ve been adamant from the very beginning that when we do AI, we do it the embedded way,” says Klaus, one of the engineers on the AI team. “The AI system currently runs on one of the bigger platforms within the car, maybe comparable to a good laptop,” he explains. “But for us, embedded AI means reducing and compressing further and further, so that in the end we’ll be able to put the algorithms on even the smallest embedded controllers that don’t even have their own GPU (Graphics Processing Unit).”

One method to do that is to make the AI unlearn unnecessary things it might have picked up along the way. “An AI is in many regards like a child,” says Wangxin, one of the AI engineers. “It picks up on everything it notices – the good things and the bad. And to make our embedded AI leaner and take up fewer resources, we have to make sure we remove the bad or unnecessary information.” At the same time, the developers have to give the AI enough training data so that it can cope with even the most unforeseen circumstances. “We have to be certain that the car will recognize another car or a cyclist,” explains Klaus. “But it also has to recognize rarer things, like an unusually formed tractor or a car with a trailer – or even something that has fallen off a trailer and is blocking the road.”
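The “unlearning” Wangxin describes is commonly implemented as network pruning: removing the weights that contribute least to the output so the model takes up fewer resources. The article doesn’t say which method ESR uses, so the following is only an illustrative sketch of simple magnitude-based pruning with NumPy; the function name and the 75% sparsity figure are assumptions for the example.

```python
import numpy as np

def prune_by_magnitude(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights of a layer.

    sparsity = 0.75 removes the smallest 75% of the weights; stored in a
    sparse format, the pruned layer needs far less memory on an
    embedded controller.
    """
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) >= threshold
    return weights * mask

# Toy layer: a 4x4 weight matrix with random values
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
w_pruned = prune_by_magnitude(w, sparsity=0.75)
print(np.count_nonzero(w_pruned))  # 4 of the 16 weights survive
```

Real pruning pipelines typically alternate pruning with a few epochs of fine-tuning, so the network can recover the accuracy lost when weights are removed.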

“We’ve been adamant from the very beginning that when we do AI, we do it the embedded way.” Klaus, software engineer and computer scientist

“We develop our own solutions and come up with new ideas before someone asks for them.” Martin, software engineer and computer scientist

“An AI is in many regards like a child, it picks up on everything it notices.” Wangxin, AI engineer

The fear of black boxes

For recognition tasks like this, it is common to use so-called semantic segmentation networks. These detect specific objects on a pixel level and can pass on information about what they believe these objects to be. Which pixels of the camera image belong to the road and which ones don’t? Are these two overlapping bicycles or a tandem? And is this a normal-sized car far away or a toy car very close to the camera? “Our current challenge is to do as much of this as well as possible using only cameras, without radar or lidar,” says Martin. “And we’re making very good progress.” This doesn’t mean, however, that ESR doesn’t know how to make use of other sensors like lidar, techniques like SLAM (Simultaneous Localization and Mapping), camera-lidar fusion, or many other things mere mortals can hardly pronounce and even less understand. Usually, the credo is: Make use of whichever technology brings the best results using the least resources.
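The last step of such a segmentation network is easy to picture: for every pixel, the network outputs a score per class, the highest score wins, and the winning class id is mapped to a display color (as in the colored overlay described in the captions below). A minimal sketch of that final step in NumPy; the class list and palette here are made-up placeholders, not ESR’s actual label set.

```python
import numpy as np

# Hypothetical palette: class id -> RGB color
# (0 = road, 1 = car, 2 = cyclist, 3 = background)
PALETTE = np.array([
    [128,  64, 128],   # road
    [  0,   0, 142],   # car
    [119,  11,  32],   # cyclist
    [  0,   0,   0],   # background
], dtype=np.uint8)

def colorize_segmentation(scores: np.ndarray) -> np.ndarray:
    """Turn per-pixel class scores (H, W, num_classes) into an RGB image.

    Each pixel is assigned the class with the highest score, then
    mapped to that class's display color.
    """
    class_map = scores.argmax(axis=-1)   # (H, W) array of class ids
    return PALETTE[class_map]            # (H, W, 3) RGB image

# Toy example: a 2x2 "image" with 4 class scores per pixel
scores = np.zeros((2, 2, 4))
scores[0, 0, 0] = 1.0   # top-left pixel scores highest as road
scores[1, 1, 1] = 1.0   # bottom-right pixel scores highest as car
rgb = colorize_segmentation(scores)
```

The interesting work, of course, happens in the network that produces the scores; this sketch only shows how a per-pixel classification becomes the colored image a human can inspect.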

One of the big recurring discussions in the field of AI is the fear of “black boxes”: algorithms that come to conclusions without the developers being able to understand what the conclusions are based on. Some hardliners claim the fastest and most significant leaps in the development of AI can only be achieved if developers no longer bother to understand everything that’s going on. But for Martin and his team, this doesn’t seem like a responsible approach in a field like autonomous driving where there’s so much at stake.

This setup – the NVIDIA Drive AGX Pegasus and an embedded microcontroller board (red) – processes data from the car’s sensors to map its surroundings.

Different ways of computer vision: Two members of the AI team watch a live visualization of the camera-perception demo.

The video image of one of the test vehicle’s two front-facing cameras is overlaid with 3D “bounding boxes” indicating different objects.

The result of the semantic segmentation: Each pixel is colored according to the object class determined by the neural network.

The result of the stereo algorithm: The brightness of a pixel corresponds to the physical distance to the corresponding point (lighter pixels are closer to the camera).
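The relationship in the last caption (lighter pixels are closer) is characteristic of a disparity map: the horizontal offset of a point between the two front cameras is inversely proportional to its distance. Converting disparity to metric depth is the standard pinhole-stereo formula depth = f · B / d; the sketch below illustrates it with made-up camera parameters (focal length and baseline are assumptions, not the test vehicle’s actual values).

```python
import numpy as np

def disparity_to_depth(disparity: np.ndarray,
                       focal_length_px: float,
                       baseline_m: float) -> np.ndarray:
    """Convert a stereo disparity map (in pixels) to depth (in meters).

    depth = f * B / d: a large disparity (a bright pixel) means the
    point is close to the camera; a small disparity means it is far away.
    Zero disparity (no measurable offset) maps to infinite depth.
    """
    depth = np.full(disparity.shape, np.inf)
    valid = disparity > 0
    depth[valid] = focal_length_px * baseline_m / disparity[valid]
    return depth

# Made-up stereo rig: 800 px focal length, 30 cm baseline
d = np.array([[60.0, 6.0],
              [ 0.0, 12.0]])
depth = disparity_to_depth(d, focal_length_px=800.0, baseline_m=0.3)
# 60 px disparity -> 4 m; 6 px -> 40 m; 0 px -> no depth estimate
```

In practice the hard part is computing the disparity map itself, i.e. reliably matching each pixel in the left image to its counterpart in the right image; the conversion to depth afterwards is just this one division.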

Between science fiction and real-life problems

Another thing the AI team prides itself on is its focus on innovation and marching ahead. Martin, Klaus, Wangxin, and the other AI experts at ESR work on projects that go far beyond the assignments of car manufacturers and also outside the world of autonomous driving. “We don’t wait for our customers to hire us for certain things,” says Martin. “Instead, we innovate on our own and do lots of independent research. That way, we often develop our own solutions and come up with new ideas before someone asks for them.” Computer vision, machine learning, and AI could also be used for machines sorting and picking things in a factory, for example. Or for the growing field of predictive maintenance: automatically detecting an upcoming defect before it occurs, just by having an AI algorithm constantly monitor a machine. This can be especially helpful for big, hard-to-reach machinery like offshore wind turbines, where human inspections and maintenance are expensive, but defects due to a lack of care are even worse.

It is precisely this combination that makes ESR’s AI unit special: being ahead of the curve while not losing touch. Coming up with new ideas and solutions on the one hand, while solving real-world problems on the other. “It is important that our ideas are not completely wacky or flying off into the wild blue yonder,” explains Martin. “We have real, relevant problems in mind and develop solutions for these problems. And how cars can safely drive themselves is just one of the problems we’re working on.”
