
AI in Automotive, Episode II: Operationalizing ML in Autonomous Vehicles

In the previous installment of this series, The World of Autonomous Vehicles, we surveyed the levels of autonomy, the sensors driving autonomous vehicle applications, and the current market. In this installment, we dive deeper into the machine learning algorithms used to deploy autonomous vehicles and into how data is transformed and fused to feed the on-board perception systems.

Information is the fuel that drives autonomous vehicle (AV) applications. With multiple sensors supplying a continuous feed to the on-board computer, data can quickly engulf memory stores and bog down decision-making processes. According to former Intel CEO Brian Krzanich, self-driving vehicles will generate and consume about 40 terabytes of data for every eight hours they drive. AVs are consuming data at unprecedented rates, but with so much information, how can an AV quickly turn it all into accurate measurements of its environment to feed the machine learning on board? By fusing the masses of data generated by the sensors, the system can supply a representative environmental model to the object detection, object identification, and decision-making algorithms that actuate the mechanics of the AV.

Within an AV application, data is not scarce. With the lidar contributing roughly 45 gigabytes of data per hour, the stereo camera generating 2 terabytes per hour, and a single radar sensor generating 360 megabytes every hour, a single AV is hungrier than ever. Additionally, not all sensors are created equal; each has different strengths and weaknesses.

(Image courtesy of Towards Data Science's article, Sensor Fusion)

While the radar may track an incoming object at 12 m/s, the camera may track the same object at 10 m/s. To create a reliable and robust environmental model, AVs rely on sensor fusion to reduce the sensor noise in the data and build an accurate model of the environment around them. The most common sensor fusion algorithm is the Kalman filter, which works in three distinct steps (a minimal sketch follows the list):

  1. Prediction: Using kinematic equations and prior positioning measurements, the system predicts where the vehicle will be, and how fast it will be moving, in the next unit of time. Because these predictions reduce to matrix operations, they can be computed efficiently.
  2. Measurement: In this step, the filter is actually employed. When the sensors obtain real-time measurements of the vehicle, the filter compares those measurements against the most recent predictions. Both sources are modeled as Gaussians, and the filter weighs them by their uncertainty when deciding the final state of the vehicle.
  3. Update: Lastly, the vehicle state is updated with the values computed in the previous step. The measurements do not completely replace the predicted state; instead, the two are blended by the Kalman gain, so the estimate leans toward whichever source carries the least uncertainty.
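
To make the predict/measure/update cycle concrete, here is a minimal sketch of a one-dimensional, constant-velocity Kalman filter in Python. The time step, noise covariances, and sample readings are illustrative assumptions, not values from any production AV stack:

```python
# Minimal 1-D constant-velocity Kalman filter (illustrative values only)
import numpy as np

dt = 0.1                                  # time step between updates (s)
F = np.array([[1, dt], [0, 1]])           # state transition (kinematics)
H = np.array([[1, 0]])                    # we only measure position
Q = np.eye(2) * 1e-3                      # process noise covariance
R = np.array([[0.5]])                     # measurement noise covariance

x = np.array([[0.0], [10.0]])             # state: [position, velocity]
P = np.eye(2)                             # state uncertainty

def kalman_step(x, P, z):
    # 1. Predict: project the state and its uncertainty forward in time
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # 2. Measure: compare the incoming sensor reading to the prediction
    y = z - H @ x_pred                    # innovation (residual)
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain: weighs the
                                          # lower-uncertainty source more
    # 3. Update: blend prediction and measurement into the new state
    x_new = x_pred + K @ y
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# Fuse a stream of noisy position readings (e.g. radar range samples)
for z in [1.1, 2.05, 2.9, 4.2]:
    x, P = kalman_step(x, P, np.array([[z]]))
    print(f"position={x[0, 0]:.2f}  velocity={x[1, 0]:.2f}")
```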

With the filtered data, the decision-making algorithms are supplied with a better representation of their environment. 

Machine learning (ML) is fundamental to any AV application. By using a predefined set of learning rules, the algorithms become far more resilient to scaling. Additionally, relying on training data prepares the algorithms to adapt to new environments without assuming a deterministic world. In the true sense of autonomy, ML can evolve without any human intervention. Granted, the ML in AV applications has a very daunting task. From recognizing a stop sign to making split-second decisions that protect the passengers inside, replicating and automating driving tasks conventionally reserved for humans is no easy mission. At the ground level, AVs must be able to detect and classify the objects in their path with very high accuracy. If the ML interprets a stop sign as a green signal light, the consequences may be dire. Most AV applications utilize Advanced Driver Assistance Systems (ADAS) to classify the objects in their environment.

ADAS leans heavily on image data, ingesting and processing snapshots of the environment to detect objects, road signs, pedestrians, lanes, and potential collisions. Conventionally, computer vision in AVs uses Gaussian blurs and Canny filters to extract the edges from an image (see the example below). With the edges clearly outlined, the computer can differentiate the boundaries of a three-dimensional object within a two-dimensional input, and the noise that arises from textures in the environment is suppressed. From these boundaries, the computer vision system can fit line segments around the edges and better classify the object. Lastly, the line segments are analyzed, and algorithms such as support vector machines (SVM), histograms of oriented gradients (HOG), principal component analysis (PCA), K-Nearest Neighbors (KNN), and Bayes decision rule are used to recognize patterns in the fitted boundaries (a code sketch of the edge-extraction pipeline follows the images below).

Original Grayscale Image to Omit Color Channels 

Canny Filtered Image

(Original image and Canny image courtesy of jeffirion's GitHub, Udacity Self-Driving Car Nanodegree — Project 1)
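
Here is a minimal sketch of that pipeline using OpenCV: grayscale conversion, Gaussian blur, Canny edge detection, and line-segment fitting. The input file name and the threshold values are illustrative assumptions:

```python
# Sketch of the conventional edge-extraction pipeline described above
import cv2
import numpy as np

img = cv2.imread("road.jpg")                      # hypothetical input frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)      # omit color channels
blurred = cv2.GaussianBlur(gray, (5, 5), 0)       # suppress texture noise
edges = cv2.Canny(blurred, 50, 150)               # extract edge boundaries

# Fit line segments to the detected edges (e.g. lane boundaries)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                        threshold=50, minLineLength=40, maxLineGap=20)

cv2.imwrite("edges.jpg", edges)                   # save the Canny output
```

The fitted segments from the final step are what downstream classifiers such as SVM or KNN consume as features.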

Additionally, the high-speed demands of traffic may skew the data obtained from the sensors, so it is important that, even with low-quality images, the ML can still detect and locate objects. AVs make efficient use of clustering algorithms to recover from spotty data. Much like ADAS, these clustering algorithms work to classify the objects in the AV's environment. Given the inconsistencies in the data, algorithms such as K-Means and multi-class neural networks employ centroid-based and hierarchical learning methods to uncover structure that noisy data obscures. As previously mentioned, classification is one of the most crucial steps in automating driving, and these algorithms act as a redundancy supporting the computer vision.
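
As a minimal sketch of the centroid-based approach, the following groups noisy 2-D detection points into clusters with K-Means. The data here is synthetic, standing in for something like scattered lidar returns:

```python
# Centroid-based clustering over noisy 2-D points (synthetic data)
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two synthetic "objects": noisy point clouds around two centers
obj_a = rng.normal(loc=[2.0, 1.0], scale=0.3, size=(50, 2))
obj_b = rng.normal(loc=[8.0, 5.0], scale=0.3, size=(50, 2))
points = np.vstack([obj_a, obj_b])

# K-Means recovers the two centroids despite the scatter in each cloud
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print("cluster centers:", kmeans.cluster_centers_)
print("labels for first five points:", kmeans.labels_[:5])
```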

Moreover, the ML needs to make decisions. Given the predictions from the aforementioned algorithms, the ML must decide when and how much to turn, and when to accelerate and decelerate. These algorithms are crucial to piloting the AV: by surveying the confidence levels of the earlier predictions, the AV can assign a directional confidence to its environment and examine the relationships between predictions. Using gradient boosting (GBM) and AdaBoost, the AV combines multiple decision-making models to make predictions with a low error rate. Generally, boosting algorithms combine multiple low-accuracy models into a single strong learner. At each weak-model generation step, the boosting algorithm weighs the current models and identifies false predictions so the next weak learner can focus on them. With each epoch, the combined model is updated, and over time it becomes more precise.
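
A minimal sketch of boosting with scikit-learn follows; the dataset is synthetic and stands in for whatever features the upstream perception algorithms supply:

```python
# Boosting: many weak learners combined into one strong classifier
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for features produced by the perception stack
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# AdaBoost's default weak learner is a depth-1 decision stump; each
# boosting round up-weights the samples the ensemble still misclassifies
clf = AdaBoostClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```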

(Boosting visualization courtesy of Edureka's article, A Comprehensive Guide To Boosting Machine Learning Algorithms)

Jaguar Land Rover predicts that Level 5 AV applications will require roughly one billion lines of code, and as ML algorithms continuously improve, AVs will become more of a reality. While most AV applications are still in their developmental stages, a generational reliance on AI and ML will drive them forward.

As sensors progress to better feed the on-board systems and supply the actuating algorithms, the quality and quantity of the data generated will only increase. However, with a single AV generating terabytes of data every hour, companies face the challenge of collecting and storing the data from every single AV. Creating a fleet of connected AVs vastly scales the amount of data to be managed, and with AVs' coming market penetration, the expected quantity of data is intimidating. But by consolidating all AV data into a single domain, companies can extend the range of their road analysis and validate performance across diverse conditions.

At a larger scale, AV applications require robust middleware capable of handling immense storage and processing demands with fast throughput. Able to handle millions of pub/sub topics with zero message loss and high throughput, Pandio can efficiently create an architecture that meets the ML demands of any AV application. Additionally, by separating the storage and compute domains, Pandio can accelerate the rate at which data is turned into informative metrics. To learn more about Pandio as an AI data management solution, please check out our product page!
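
To make the pub/sub pattern concrete, here is a minimal sketch of publishing sensor frames to a topic using the Apache Pulsar Python client, the kind of distributed messaging layer this architecture describes. The broker URL, topic name, and payloads are placeholders, not a depiction of Pandio's actual deployment:

```python
# Publishing sensor frames to a pub/sub topic (Pulsar client sketch)
import pulsar

client = pulsar.Client("pulsar://localhost:6650")  # placeholder broker URL
producer = client.create_producer(
    "persistent://public/default/lidar-frames"     # placeholder topic
)

# Each sensor frame becomes a message; consumers downstream can fuse,
# store, or analyze the stream independently of the producer
for frame_id in range(3):
    producer.send(f"lidar frame {frame_id}".encode("utf-8"))

client.close()
```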

In the final installment of this series, we will survey conventional data management architectures and the need for distributed messaging platforms in the AV space as a solution for creating a fleet of connected vehicles.
