
Framework for AI Self-Driving Driverless Cars: The Big Picture

By Dr. Lance B. Eliot, the AI Trends Insider

When I give presentations about self-driving cars and teach classes on the topic, I have found it helpful to provide a framework around which the various key elements of self-driving cars can be understood and organized. The framework needs to be simple enough to convey the overarching elements, but at the same time not so simple that it belies the true complexity of self-driving cars. As such, I am going to describe the framework here and try to offer in a thousand words (or more!) what the framework diagram itself intends to portray.

The core elements on the diagram are numbered for ease of reference. The numbering does not suggest any kind of prioritization of the elements. Each element is crucial; each serves a purpose, or it would not be included in the framework. For some self-driving cars, a particular element might be more important or more distinguished than it is for others. You could even use the framework to rate a particular self-driving car, gauging how well it performs in each element of the framework.

I will describe each of the elements, one at a time. After doing so, I’ll discuss aspects that illustrate how the elements interact and perform during the overall effort of a self-driving car.

At the Cybernetic Self-Driving Car Institute, we use the framework to keep track of what we are working on, and how we are developing software that fills in what is needed to achieve Level 5 self-driving cars.

D-01: Sensor Capture

Let’s start with the one element that often gets the most attention in the press about self-driving cars, namely, the sensory devices for a self-driving car.

On the framework, the box labeled D-01 indicates “Sensor Capture” and refers to the processes of the self-driving car that involve collecting data from the myriad of sensors on the vehicle. The types of devices typically involved are listed, such as mono cameras, stereo cameras, LIDAR devices, radar systems, ultrasonic devices, GPS, an IMU, and so on.

These devices are tasked with obtaining data about the status of the self-driving car and the world around it. Some of the devices continually provide updates, while others wait for the self-driving car to signal that they should collect data. The data might first be transformed in some fashion by the device itself, or it might instead be fed directly into the sensor capture as raw data. In that case, it is up to the sensor capture processes to do transformations on the data. This all varies depending upon the nature of the devices being used and how they were designed and developed.
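
To make the raw-versus-transformed distinction concrete, here is a minimal Python sketch of a sensor capture cycle. The Device and SensorReading structures and the device names are hypothetical illustrations, not any particular vendor's API:

```python
from dataclasses import dataclass
from time import time
from typing import Any, Callable

@dataclass
class Device:
    name: str
    read: Callable[[], Any]                        # device-specific acquisition call
    emits_raw: bool = False                        # True if the device hands back raw data
    transform: Callable[[Any], Any] = lambda x: x  # host-side transformation, if any

@dataclass
class SensorReading:
    device: str        # which device produced this reading
    timestamp: float   # host capture time in seconds
    payload: Any       # image frame, point cloud, GPS fix, etc.

def capture_cycle(devices):
    """Poll each device once, transforming raw output on the host where needed."""
    readings = []
    for d in devices:
        data = d.read()
        if d.emits_raw:                # some devices pre-process onboard;
            data = d.transform(data)   # others leave it to the capture process
        readings.append(SensorReading(d.name, time(), data))
    return readings

# Hypothetical usage: a GPS that reports finished fixes, a camera with raw frames.
gps = Device("gps", read=lambda: (33.81, -117.92))
camera = Device("front_camera", read=lambda: [[0, 255], [255, 0]],
                emits_raw=True,
                transform=lambda img: [[px / 255 for px in row] for row in img])
for r in capture_cycle([gps, camera]):
    print(r.device, r.payload)
```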

D-02: Sensor Fusion

Imagine that your eyeballs receive visual images, your nose receives odors, your ears receive sounds, and in essence each of your distinct sensory devices is getting some form of input. The input befits the nature of the device. Likewise, for a self-driving car, the cameras provide visual images, the radar returns radar reflections, and so on. Each device provides the data as befits what the device does.

At some point, using the analogy to humans, you need to merge together what your eyes see, what your nose smells, what your ears hear, and piece it all together into a larger sense of what the world is all about and what is happening around you. Sensor fusion is the action of taking the singular aspects from each of the devices and putting them together into a larger puzzle.

Sensor fusion is a tough task. Some devices might not be working at the time of the sensor capture. Or, there might be some devices that are unable to report well what they have detected. Again, using a human analogy, suppose you are in a dark room and your eyes cannot see much. At that point, you might need to rely more on your ears and what you hear. The same is true for a self-driving car. If the cameras are obscured by snow and sleet, it might be that the radar can provide a better indication of the external conditions.

In the case of a self-driving car, there can be a plethora of such sensory devices. Each is reporting what it can. Each might have its difficulties. Each might have its limitations, such as how far ahead it can detect an object. All of these limitations need to be considered during the sensor fusion task.
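
As a rough illustration of weighing those limitations, here is a small Python sketch that fuses distance estimates from several sensors, skipping any that are offline or out of range. The tuple layout and confidence figures are assumptions for the example; production systems use far more sophisticated fusers (e.g., Kalman-filter variants):

```python
def fuse_distance_estimates(reports):
    """Confidence-weighted average of object-distance estimates.

    Each report is (distance_m, confidence, max_range_m). A sensor that is
    offline (distance None) or reading beyond its rated range is skipped,
    mirroring the obscured-camera example above.
    """
    usable = [(d, c) for d, c, max_range in reports
              if d is not None and d <= max_range and c > 0]
    if not usable:
        return None  # no sensor could contribute; the caller must handle this
    total_conf = sum(c for _, c in usable)
    return sum(d * c for d, c in usable) / total_conf

# Camera obscured by sleet (None), radar and LIDAR still reporting.
print(fuse_distance_estimates([(None, 0.9, 80), (41.0, 0.6, 150), (39.5, 0.8, 100)]))
```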

D-03: Virtual World Model

For humans, we presumably keep in our minds a model of the world around us when we are driving a car. In your mind, you know that the car is going at, say, 60 miles per hour and that you are on a freeway. You have a model in your mind that your car is surrounded by other cars, and that the freeway has lanes. Your model is based not only on what you can see, hear, and so on, but also on what you know about the nature of the world. You know that at any moment the car ahead of you could slam on its brakes, the car behind you could ram into your car, or the truck in the next lane might swerve into your lane.

The AI of the self-driving car needs to have a virtual world model, which it then keeps updated with whatever it is receiving from the sensor fusion, which received its input from the sensor capture and the sensory devices.
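
Here is a minimal sketch of what such a virtual world model might look like as a data structure, refreshed each cycle from the sensor fusion output; the field names and dictionary layout are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class TrackedObject:
    kind: str          # "car", "truck", "pedestrian", ...
    lane_offset: int   # lanes relative to us: -1 left, 0 same lane, +1 right
    distance_m: float  # longitudinal gap, positive means ahead of us

@dataclass
class WorldModel:
    speed_mph: float = 0.0
    lane: int = 0
    objects: list = field(default_factory=list)

    def update(self, fused):
        """Refresh state from one sensor-fusion cycle (hypothetical dict layout)."""
        self.speed_mph = fused.get("speed_mph", self.speed_mph)
        self.lane = fused.get("lane", self.lane)
        self.objects = [TrackedObject(**o) for o in fused.get("objects", [])]

world = WorldModel()
world.update({"speed_mph": 60, "lane": 2,
              "objects": [{"kind": "car", "lane_offset": 0, "distance_m": 35.0}]})
print(world)
```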

D-04: System Action Plan

By having a virtual world model, the AI of the self-driving car is able to keep track of where the car is and what is happening around the car. In addition, the AI needs to determine what to do next. Should the self-driving car hit its brakes? Should the self-driving car stay in its lane or swerve into the lane to the left? Should the self-driving car accelerate or slow down?

A system action plan needs to be prepared by the AI of the self-driving car. The action plan specifies what actions should be taken. The actions need to pertain to the status of the virtual world model. Plus, the actions need to be realizable.

This realizability means that the AI cannot just assert that the self-driving car should suddenly sprout wings and fly. Instead, the AI must be bound by whatever the self-driving car can actually do, such as coming to a halt in a distance of X feet at a speed of Y miles per hour, rather than perhaps asserting that the self-driving car come to a halt in 0 feet as though it could instantaneously come to a stop while it is in motion.
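
The stopping-distance point can be made concrete with a little physics: minimum stopping distance is roughly the reaction roll-out plus v²/(2a). The deceleration and latency figures below are illustrative assumptions, not measured vehicle parameters:

```python
def min_stopping_distance_ft(speed_mph, decel_ft_s2=23.0, reaction_s=0.25):
    """Rough minimum stopping distance: reaction roll-out plus v^2 / (2a).

    A deceleration of ~23 ft/s^2 (about 0.7 g) and a 0.25 s system latency
    are assumed here purely for illustration.
    """
    v = speed_mph * 5280 / 3600            # mph -> ft/s
    return v * reaction_s + v * v / (2 * decel_ft_s2)

def is_realizable_stop(requested_ft, speed_mph):
    """An action plan may only request stops that the physics permits."""
    return requested_ft >= min_stopping_distance_ft(speed_mph)

print(round(min_stopping_distance_ft(60), 1))  # roughly 190 ft at 60 mph
print(is_realizable_stop(0, 60))               # False: no instantaneous halt
```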

D-05: Controls Activation

The system action plan is implemented by activating the controls of the car to act according to what the plan stipulates. This might mean that the accelerator control is commanded to increase the speed of the car. Or, the steering control is commanded to turn the steering wheel 30 degrees to the left or right.

One question that arises is whether the controls respond as they are commanded to do. In other words, suppose the AI has commanded the accelerator to increase speed, but for some reason it does not do so. Or maybe it tries to do so, but the speed of the car does not increase. The controls activation feeds back into the virtual world model, and simultaneously the virtual world model is being updated from the sensors, the sensor capture, and the sensor fusion. This allows the AI to ascertain what has taken place as a result of the controls being commanded to take some kind of action.
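
Here is a small sketch of that command-and-verify feedback loop. The FakeControls and FakeWorld classes are hypothetical stand-ins so the example runs on its own; in a real vehicle the world model would be refreshed by the sensors and sensor fusion:

```python
import time
from dataclasses import dataclass

@dataclass
class FakeWorld:                         # hypothetical stand-in world model
    speed_mph: float = 55.0
    commanded_mph: float = 55.0
    def refresh(self):                   # drift toward the commanded speed
        self.speed_mph += 0.3 * (self.commanded_mph - self.speed_mph)

@dataclass
class FakeControls:                      # hypothetical stand-in actuator interface
    world: FakeWorld
    def set_target_speed(self, mph):
        self.world.commanded_mph = mph

def command_and_verify(controls, world, target_mph, tolerance=2.0, timeout_s=3.0):
    """Issue a speed command, then confirm its effect through the world model."""
    controls.set_target_speed(target_mph)
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        world.refresh()                  # in reality, updated by sensor fusion
        if abs(world.speed_mph - target_mph) <= tolerance:
            return True                  # the control responded as commanded
        time.sleep(0.05)
    return False                         # command issued, but no observed effect

world = FakeWorld()
print(command_and_verify(FakeControls(world), world, target_mph=65.0))
```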

By the way, please keep in mind that though the diagram seems to have a linear progression to it, the reality is that these are all aspects of the self-driving car that are happening in parallel and simultaneously. The sensors are capturing data, meanwhile the sensor fusion is taking place, meanwhile the virtual model is being updated, meanwhile the system action plan is being formulated and reformulated, meanwhile the controls are being activated.

This is the same as a human being who is driving a car. They are eyeballing the road; meanwhile they are fusing in their mind the sights, sounds, and so on; meanwhile their mind is updating its model of the world around them; meanwhile they are formulating an action plan of what to do; and meanwhile they are pushing their foot onto the pedals and steering the car. In the normal course of driving a car, you are doing all of these at once. I mention this so that when you look at the diagram, you will think of the boxes as processes that are all happening at the same time, and not as though one happens and then the next.

They are shown diagrammatically in a simplistic manner to aid comprehension of what is taking place. You should realize, though, that they work in parallel and simultaneously with each other. This is a tough aspect, in that the inter-element communications involve latency and other factors that must be taken into account. There can be delays in one element updating and then sharing its latest status with other elements.

D-06: Automobile & CAN

Contemporary cars use various automotive electronics and a Controller Area Network (CAN) as the components that underlie the driving aspects of a car. There are Electronic Control Units (ECUs) which control subsystems of the car, such as the engine, the brakes, the doors, the windows, and so on.

The elements D-01 through D-05 are layered on top of D-06, and must be aware of what D-06 is and is not able to do.
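
To give a flavor of the message abstraction that ECUs exchange, here is a simplified sketch of packing a classic CAN 2.0A frame: an 11-bit identifier (which also serves as the bus priority) plus up to 8 data bytes. Real buses add framing, CRC, and arbitration in hardware, and the brake-command ID below is invented for illustration:

```python
import struct

def encode_can_frame(arbitration_id, payload):
    """Pack a classic CAN 2.0A frame's identifier and payload for transport.

    The identifier is 11 bits and doubles as the bus priority (lower wins
    arbitration); classic CAN carries at most 8 data bytes.
    """
    if not 0 <= arbitration_id < 0x800:
        raise ValueError("standard CAN IDs are 11 bits")
    if len(payload) > 8:
        raise ValueError("classic CAN carries at most 8 data bytes")
    return struct.pack(">HB", arbitration_id, len(payload)) + bytes(payload)

# Hypothetical brake-ECU command with two data bytes (the ID is invented).
print(encode_can_frame(0x0F1, [0x10, 0xFF]).hex())
```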

D-07: In-Car Commands

Humans are going to be occupants in self-driving cars. In a Level 5 self-driving car, there must be some form of communication that takes place between the humans and the self-driving car. For example, suppose I get into a self-driving car and tell it that I want to be driven over to Disneyland, and along the way I want to stop at In-N-Out Burger. The self-driving car parses what I’ve said and then tries to establish a means to carry out my wishes.

In-car commands can happen at any time during a driving journey. Though my example was about an in-car command given when I first got into my self-driving car, it could be that while the self-driving car is carrying out the journey, I change my mind. Perhaps after getting stuck in traffic, I tell the self-driving car to forget about getting the burgers and just head straight over to the theme park. The self-driving car needs to be alert to in-car commands throughout the journey.
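
Here is a toy sketch of how such commands might update an itinerary mid-journey. The keyword matching is a mere placeholder for real natural-language understanding, and the stops come from the example above:

```python
def apply_command(itinerary, command):
    """Tiny keyword-based handler, standing in for real language understanding."""
    text = command.lower()
    if "forget" in text or "skip" in text:
        return itinerary[-1:]            # drop intermediate stops, keep destination
    if "stop at" in text:
        stop = text.split("stop at", 1)[1].strip().title()
        return itinerary[:-1] + [stop] + itinerary[-1:]
    return itinerary

trip = ["In-N-Out Burger", "Disneyland"]
trip = apply_command(trip, "Forget the burgers, head straight to the theme park")
print(trip)  # ['Disneyland']
```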

D-08: V2X Communications

We will ultimately have self-driving cars communicating with each other, doing so via V2V (Vehicle-to-Vehicle) communications. We will also have self-driving cars that communicate with the roadways and other aspects of the transportation infrastructure, doing so via V2I (Vehicle-to-Infrastructure).

The variety of ways in which a self-driving car will be communicating with other cars and infrastructure is being called V2X, whereby the letter X means whatever else we identify as something that a car should or would want to communicate with. The V2X communications will be taking place simultaneously with everything else on the diagram, and those other elements will need to incorporate whatever the self-driving car gleans from those V2X communications.
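
A small sketch of folding incoming V2X messages into the virtual world model follows. The message dictionary layout here is invented for illustration; actual deployments use standardized message sets such as the SAE J2735 Basic Safety Message:

```python
def handle_v2x_message(world, message):
    """Fold one incoming V2X message into the virtual world model."""
    source = message.get("source")
    if source == "V2V":      # another vehicle announcing its position and speed
        world.setdefault("nearby_vehicles", []).append(message["state"])
    elif source == "V2I":    # infrastructure, e.g. a traffic-signal phase update
        world.setdefault("signals", {})[message["intersection"]] = message["phase"]
    # Unknown sources are ignored rather than trusted blindly.

world = {}
handle_v2x_message(world, {"source": "V2V", "state": {"speed_mph": 58, "gap_m": 30}})
handle_v2x_message(world, {"source": "V2I", "intersection": "Main & 1st", "phase": "red"})
print(world)
```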

D-09: Deep Learning

The use of Deep Learning permeates all other aspects of the self-driving car. The AI of the self-driving car will be using deep learning to do a better job at the system action plan, at the controls activation, at the sensor fusion, and so on.

Currently, the use of artificial neural networks is the most prevalent form of deep learning. Based on large swaths of data, the neural networks attempt to “learn” from the data and therefore direct the efforts of the self-driving car accordingly.
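
To illustrate the basic “learn from data” idea, here is a tiny two-layer neural network trained by plain gradient descent on a synthetic brake/don’t-brake task. It is purely pedagogical; real perception and planning networks are vastly larger and trained on enormous datasets:

```python
import numpy as np

# Synthetic task: given (gap ahead, own speed), both scaled to [0, 1],
# learn to output 1 ("brake") when the gap is small relative to speed.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (200, 2))
y = (X[:, 0] < X[:, 1]).astype(float)[:, None]

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)   # hidden layer weights
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)   # output layer weights

for _ in range(2000):                            # plain gradient descent
    h = np.tanh(X @ W1 + b1)
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))         # sigmoid brake probability
    grad_out = (p - y) / len(X)                  # cross-entropy gradient
    grad_h = (grad_out @ W2.T) * (1 - h ** 2)    # backprop through tanh
    W2 -= 0.5 * (h.T @ grad_out); b2 -= 0.5 * grad_out.sum(0)
    W1 -= 0.5 * (X.T @ grad_h);   b1 -= 0.5 * grad_h.sum(0)

h = np.tanh(X @ W1 + b1)
p = 1 / (1 + np.exp(-(h @ W2 + b2)))
print(f"training accuracy: {((p > 0.5) == y.astype(bool)).mean():.2f}")
```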

D-10: Tactical AI

Tactical AI is the element dealing with the moment-to-moment driving of the self-driving car. Is the self-driving car staying in its lane on the freeway? Is the car responding appropriately to the controls commands? Are the sensory devices working?
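
A minimal sketch of such a tactical watchdog appears below; the world and sensor field names are hypothetical, and the thresholds are illustrative rather than tuned values:

```python
def tactical_checks(world, sensors):
    """Moment-to-moment health checks; returns issues to hand to the Strategic AI."""
    issues = []
    if abs(world["lane_center_offset_m"]) > 0.5:
        issues.append("drifting out of lane")
    if abs(world["speed_mph"] - world["commanded_mph"]) > 5:
        issues.append("speed not tracking command")
    issues.extend(f"sensor offline: {name}"
                  for name, healthy in sensors.items() if not healthy)
    return issues

print(tactical_checks(
    {"lane_center_offset_m": 0.7, "speed_mph": 52, "commanded_mph": 60},
    {"front_camera": True, "lidar": False},
))
```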

For human drivers, the tactical equivalent can be seen when you watch a novice driver, such as a teenager who is first learning to drive. They are focused on the mechanics of the driving task, keeping their eyes on the road while also trying to properly control the car.

D-11: Strategic AI

The Strategic AI aspects of a self-driving car are dealing with the larger picture of what the self-driving car is trying to do. If I had asked that the self-driving car take me to Disneyland, there is an overall journey map that needs to be kept and maintained.

There is an interaction between the Strategic AI and the Tactical AI. The Strategic AI wants to keep the driving on mission, while the Tactical AI is focused on the particulars of the driving effort underway. If the Tactical AI seems to wander away from the overarching mission, the Strategic AI wants to see why and get things back on track. If the Tactical AI realizes that there is something amiss with the self-driving car, it needs to alert the Strategic AI accordingly so that the overarching mission underway can be adjusted.
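
Here is a toy sketch of that interaction, in which tactical alerts cause the Strategic AI to adjust the mission. The adjustment rules are invented purely to illustrate the flow:

```python
def strategic_adjust(mission, tactical_issues):
    """Adjust the overarching mission in light of tactical alerts."""
    if any("sensor offline" in issue for issue in tactical_issues):
        # Degraded sensing: re-plan cautiously rather than press on at speed.
        return {**mission, "max_speed_mph": 35, "route": "avoid_freeways"}
    if any("drifting" in issue for issue in tactical_issues):
        # Lane drift alone: ask the Tactical AI to correct course.
        return {**mission, "request": "recenter_in_lane"}
    return mission

mission = {"destination": "Disneyland", "max_speed_mph": 65}
print(strategic_adjust(mission, ["sensor offline: lidar"]))
```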

D-12: Self-Aware AI

Very few of the self-driving cars being developed are including a Self-Aware AI element, which we at the Cybernetic Self-Driving Car Institute believe is crucial to Level 5 self-driving cars.

The Self-Aware AI element is intended to watch over the AI itself, in the sense of making sure that the AI is working as intended. Suppose you had a human driving a car, and they were starting to drive erratically. Hopefully, their own self-awareness would make them realize they are driving poorly, such as perhaps starting to fall asleep after having been driving for hours on end. If you had a passenger in the car, they might be able to alert the driver when the driver starts to do something amiss. This is exactly what the Self-Aware AI element tries to do: it becomes the overseer of the AI, tries to detect when the AI has become faulty or confused, and then finds ways to overcome the issue.
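
As a minimal sketch of such an overseer, the class below watches the AI’s own steering commands for rapid reversals, analogous to a passenger noticing the driver weaving. The window size and threshold are illustrative assumptions:

```python
from collections import deque

class Overseer:
    """Watch the driving AI's own steering output for signs it has gone erratic."""
    def __init__(self, window=10, max_reversals=4):
        self.recent = deque(maxlen=window)   # sliding window of steering commands
        self.max_reversals = max_reversals

    def observe(self, steering_deg):
        self.recent.append(steering_deg)
        vals = list(self.recent)
        signs = [1 if b > a else -1 for a, b in zip(vals, vals[1:]) if b != a]
        reversals = sum(1 for s, t in zip(signs, signs[1:]) if s != t)
        return reversals <= self.max_reversals   # False: AI may be faulty/confused

overseer = Overseer()
for angle in [0, 8, -7, 9, -8, 7, -9, 8]:        # oscillating steering commands
    ok = overseer.observe(angle)
print("AI behaving normally?", ok)               # False: weaving detected
```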

D-13: Economic

The economic aspects of a self-driving car are not per se a technology aspect, but the economics do indeed impact the nature of a self-driving car. For example, the cost of outfitting a self-driving car with every kind of possible sensory device is prohibitive, and so choices need to be made about which devices are used. And, for those sensory devices chosen, whether they will have a full set of features or a more limited one.

We are going to have self-driving cars at the low end of a consumer price point, and others at the high end. You cannot expect that a self-driving car at the low end is going to be as robust as one at the high end. I realize that many of the self-driving car pundits are acting as though all self-driving cars will be the same, but they won’t be. Just like anything else, we are going to have self-driving cars with a range of capabilities. Some will be better than others. Some will be safer than others. This is the way of the real world, and so we need to be thinking about the economic aspects when considering the nature of self-driving cars.

D-14: Societal

The societal aspects also impact the technology of a self-driving car. For example, the famous Trolley Problem involves the choices a self-driving car should make when faced with life-and-death matters. If the self-driving car is about to either hit a child standing in the roadway, or instead ram into a tree at the side of the road and possibly kill the humans inside, which choice should be made?

We need to keep in mind that the societal aspects will underlie the AI of the self-driving car. Whether we are explicitly aware of it or not, the AI will have various societal assumptions embedded into it.

D-15: Innovation

I included the notion of innovation in the framework because we can anticipate that whatever a self-driving car consists of, it will continue to be innovated upon over time. The self-driving cars coming out in the next several years will undoubtedly be different and less advanced than the versions that come out ten years hence, and so on.

Framework Overall

For those of you who want to learn about self-driving cars, you can potentially pick a particular element and become specialized in that aspect. Some engineers focus on the sensory devices. Some engineers focus on the controls activation. And so on. There are specialties in each of the elements.

Researchers are likewise specializing in various aspects. For example, there are researchers who are using Deep Learning to see how best it can be applied to sensor fusion. There are other researchers who are using Deep Learning to derive good System Action Plans. Some are studying how to develop AI for the Strategic aspects of the driving task, while others are focused on the Tactical aspects.

A well-prepared all-around software developer that is involved in self-driving cars should be familiar with all of the elements, at least to the degree that they know what each element does. This is important since whatever piece of the pie that the software developer works on, they need to be knowledgeable about what the other elements are doing.

This content is originally posted on AI Trends.
