Autonomous Cars: What’s needed and are Lidar & Maps Fools-Gold

One common image of a driverless car is a Google car with a very prominent LIDAR device on the roof.

This is a look at the technologies and the needs of self-drive systems, and where things stood in 2017.

There is logic to the suggestion by Elon Musk that LIDAR is not needed, but there is also logic to suggesting that the steps forward from where we are to truly autonomous vehicles may be a lot more challenging than they appear.

The path to autonomous driving.

A good starting point is a look at the technology in widespread use today, and just how it falls short of autonomous driving.

The next step is to consider what we want to do that we can’t do now, and why.

That creates the picture of what we have, and where we want to go, leaving the task of mapping out a route to get there.

Driver Assist Technologies.

Sensors Behind the technologies: AI, Cameras, Sonar, Radar and Lidar.

The limitations of Radar, Sonar and Cameras.

Most modern vehicles, as of 2017, have Cameras, Sonar and Radar, but neither Lidar, nor AI beyond simple lane detecting software.

Cameras generate an image that is just pixels. While comparing frames could make it possible to track movement, doing so requires AI, and is thus covered under AI below. Without AI, all that can be recognised from an image is patterns, such as lane markers or traffic signs. Vehicles vary too much in colour and contrast with the background, while lane markers and traffic signs have very clear and simple contrasts.

Sonar only works over short distances, but can measure these short distances accurately, with much greater precision than Radar. All that comes from a sonar sensor is how close the closest object is within its field of view. One distance, no ‘image’ of the environment.
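As an illustration of the time-of-flight arithmetic a sonar (ultrasonic) sensor relies on, here is a minimal sketch; the timing value in the example is made up, but the calculation is the standard approach.

```python
# Sketch: distance from an ultrasonic (sonar) sensor via time of flight.
# The echo travels to the object and back, so the one-way distance is half.

SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 °C

def sonar_distance_m(echo_delay_s: float) -> float:
    """Return the distance to the closest object given the echo delay."""
    return SPEED_OF_SOUND_M_S * echo_delay_s / 2.0

# Example: an echo received 0.01 s after the pulse -> roughly 1.7 m away.
print(sonar_distance_m(0.01))
```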

Radar is very good at detecting motion, and the speed of that motion. Think of a really pixelated image of the car in front. So pixelated that, if you did not know it was moving, it would be hard to decide if it was a car or not. So radar sensors tell the car ‘I can see something, and I know the exact distance, and how fast it is moving’. When close enough, often within 3 metres, some systems can determine the image appears to be a vehicle even if it was stationary when first detected. But even then, on the basis of radar alone, that would be a guess.

Radar sensors send out and receive back radio waves: like shining a torch to see, but a radio torch. Radio waves have a much longer wavelength than light, which results in images of much lower resolution than what we can see with our eyes, so pixelated that it would be impossible to be sure something is a car from the image alone. But radar is far better than eyesight for measuring distance. Since radar ‘turns the torch on’ only in pulses, it can use the time since the ‘torch was turned on’ to measure distance, and from the Doppler shift of the returning waves it can tell how fast parts of the highly pixelated image are moving. Thus radar works on ‘I am guessing it is a car because it is moving, and I know quite accurately how fast’.
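To make the ‘radio torch’ idea concrete, here is a minimal sketch of the two measurements a pulsed Doppler radar provides: distance from the pulse round-trip time, and closing speed from the Doppler shift. The carrier frequency and example numbers are purely illustrative.

```python
# Sketch: the two quantities a pulsed Doppler radar reports for each return.

C = 299_792_458.0  # speed of light, m/s

def radar_range_m(round_trip_s: float) -> float:
    """Distance from the time between sending a pulse and receiving its echo."""
    return C * round_trip_s / 2.0

def radar_closing_speed_m_s(doppler_shift_hz: float, carrier_hz: float) -> float:
    """Relative (closing) speed from the Doppler shift of the reflected wave."""
    return doppler_shift_hz * C / (2.0 * carrier_hz)

# Example: a 77 GHz automotive radar seeing a 5.13 kHz Doppler shift
# corresponds to a closing speed of roughly 10 m/s (about 36 km/h).
print(radar_closing_speed_m_s(5_130.0, 77e9))
```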

Lidar.

Lidar is like radar, but with the resolution of a camera. Lidar on cars sends out ‘pulses’ of light; neither visible light nor ultraviolet is a good choice as they would be bad for humans, so infrared is the most common. Like night vision goggles, the images are normally monochrome: not as clear as with visible light, but that can be corrected if combined with camera images by AI. What lidar adds to the image is depth, making object recognition much easier.
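Because every lidar return carries a distance, separating nearby objects from the background can be as simple as grouping points that sit close together in 3D, something a flat camera image cannot do directly. Below is a heavily simplified sketch using a single distance threshold; real systems use far more sophisticated clustering.

```python
# Sketch: grouping lidar returns into objects by 3D proximity.
# Each point is (x, y, z) in metres relative to the sensor.
import math

def cluster_points(points, max_gap_m=1.0):
    """Greedy clustering: a point joins a cluster if it is within
    max_gap_m of any point already in that cluster."""
    clusters = []
    for p in points:
        for cluster in clusters:
            if any(math.dist(p, q) <= max_gap_m for q in cluster):
                cluster.append(p)
                break
        else:
            clusters.append([p])
    return clusters

# Two tight groups of returns, ~10 m and ~30 m ahead -> two 'objects'.
scan = [(0.2, 10.1, 0.5), (0.4, 10.3, 0.6), (-3.0, 30.2, 0.4), (-3.1, 30.5, 0.5)]
print(len(cluster_points(scan)))  # 2
```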

AI: Vision Systems.

We often say we see with our eyes, but if that was all we did, all we would see is pixels. Instead, we see objects. For humans, the ‘intelligence’ of turning the pixels into objects comes from a whole lifetime of experience of what objects are, seeing them from every angle. Turning images from radar, lidar and cameras into a 3D map of moving objects is quite a challenge, one we perform all day every day, making it seem simple to us.
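As a rough illustration of what that ‘intelligence’ step adds, the sketch below runs a pretrained object detector over a camera frame and gets back labelled boxes rather than raw pixels. It assumes a recent torchvision install; the model choice and score threshold are arbitrary, and this is only a sketch of the idea, not how any particular car maker does it.

```python
# Sketch: turning raw camera pixels into labelled objects with a
# pretrained detector (assumes torch + torchvision are installed).
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# A dummy 'camera frame': 3 channels, pixel values in [0, 1].
frame = torch.rand(3, 480, 640)

with torch.no_grad():
    detections = model([frame])[0]  # dict with 'boxes', 'labels', 'scores'

for box, label, score in zip(detections["boxes"],
                             detections["labels"],
                             detections["scores"]):
    if score > 0.8:  # keep only confident detections
        print(label.item(), box.tolist(), round(score.item(), 2))
```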

ACC: Adaptive Cruise Control

The concept is well explained elsewhere, but there are points on the limitations of using Adaptive Cruise Control.

The most common implementation uses Doppler radar to measure the speed of objects relative to the vehicle, then subtracts the vehicle’s own speed to calculate the speed of objects in the environment.

Most ACC systems are based on RADAR, which provides a very low resolution picture of the world but can detect the speed at which things are moving. For these systems, all objects that have not been detected to move are considered ‘ignored objects’ and just part of the background. Such objects are quite often described as ‘stationary objects’ in descriptions of system limitations, but for most systems, the ‘ignored objects’ are only those that have never been detected to move. If the vehicle ahead has been detected as moving, the system will not then ignore that vehicle when it stops.

This means that if there is a vehicle ahead within range of the sensors, and that ‘vehicle ahead’ stops at traffic lights, the ACC system will correctly bring the car to a halt behind it. However, if the ‘vehicle ahead’ changes lanes, making the new potential ‘vehicle ahead’ one that is already stopped, many ACC systems will not detect that new vehicle and adjust, leaving emergency braking as the only potential safeguard, and emergency braking would only respond at the last possible moment, creating an emergency. In practice, when the vehicle ahead has not been detected while it was moving, for most ACC systems the driver will need to intervene and apply the brakes.

A tree that has fallen across the road would also be an ‘ignored object’.
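A minimal sketch of this ‘ignored object’ behaviour: the radar reports each return’s range and closing speed, the ego speed is used to estimate the object’s own speed, and only objects that have at some point been seen moving are treated as vehicles to follow. All names and thresholds here are hypothetical.

```python
# Sketch: why radar-only ACC ignores objects never detected to move.
# Each radar return: (object_id, range_m, closing_speed_m_s), where closing
# speed is relative to the ego vehicle.

seen_moving = set()  # ids of objects that have ever been detected moving

def update_tracks(returns, ego_speed_m_s, moving_threshold_m_s=0.5):
    """Return the ranges of objects the ACC is willing to follow."""
    follow_ranges = []
    for obj_id, range_m, closing_speed in returns:
        object_speed = ego_speed_m_s - closing_speed  # object's own speed
        if abs(object_speed) > moving_threshold_m_s:
            seen_moving.add(obj_id)          # once seen moving, always tracked
        if obj_id in seen_moving:
            follow_ranges.append(range_m)    # followed, even if now stopped
        # objects never seen moving are treated as background and ignored
    return follow_ranges

# A car that was moving and has now stopped is still followed;
# a fallen tree first seen while stationary is ignored.
print(update_tracks([("car_a", 40.0, 5.0)], ego_speed_m_s=20.0))
print(update_tracks([("car_a", 25.0, 20.0), ("tree", 60.0, 20.0)], ego_speed_m_s=20.0))
```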

Overcoming this ‘never detected to move’ problem requires a system using not just radar, but higher resolution cameras or LIDAR systems, and AI capable of recognising objects. These systems can detect that an object is a vehicle, even if it has never been seen to move.

Lane Departure Warning.

This uses forward cameras to detect the lane. Lanes are not objects, but simple patterns, with lane markings designed to have good contrast, and knowing the distance to the pattern is not required. Detecting lanes in an image requires far less image processing than detecting 3D objects. Even so, most systems only recognise ‘the lane’ under limited circumstances, and for a significant fraction of the time these systems cannot recognise where the lane is located.
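Lane detection really is a much simpler image-processing problem: a classical edge detector plus a line transform gets surprisingly far on clear markings. The sketch below assumes OpenCV is available; the thresholds and crude region of interest are placeholders.

```python
# Sketch: classical lane-marking detection from a single camera frame.
# Assumes OpenCV (cv2) and numpy; all thresholds are illustrative only.
import cv2
import numpy as np

def detect_lane_lines(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)        # keep high-contrast edges
    h, w = edges.shape
    edges[: h // 2, :] = 0                  # crude region of interest: lower half only
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=40, minLineLength=40, maxLineGap=20)
    return [] if lines is None else [l[0] for l in lines]  # (x1, y1, x2, y2) segments

# Usage: frame = cv2.imread("road.jpg"); print(detect_lane_lines(frame))
```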

The departure warning system simply provides a warning when it believes the vehicle is going to be steered out of the lane boundaries by the driver.

AEB: Automatic Emergency Braking.

Even without being able to detect lanes, there comes a point when it is clear the vehicle is not going to be steered around an object detected by the forward facing radar. Unfortunately, without lane detection, it is never clear until it is, literally, an emergency. Even with lane detection, bringing the car to a halt every time there is something in the lane would stop the car even when the driver is in full control and has a lane change planned. Collision warnings in such situations are annoying; false emergency braking can be dangerous.

Remember that without sophisticated AI, systems have only a camera image that is just pixels with no distance information, and a radar image that is very low resolution. The choice is between applying the emergency brakes far too often, or not often or soon enough.
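That trade-off can be expressed as a time-to-collision check: with only a range and a closing speed, the system must pick a threshold, and any single threshold either triggers too often or too late. A minimal, illustrative sketch:

```python
# Sketch: the time-to-collision (TTC) test at the heart of simple AEB logic.

def time_to_collision_s(range_m, closing_speed_m_s):
    """Seconds until impact if neither vehicle changes speed."""
    if closing_speed_m_s <= 0:           # not closing on the object
        return float("inf")
    return range_m / closing_speed_m_s

def should_emergency_brake(range_m, closing_speed_m_s, ttc_threshold_s=1.5):
    # A low threshold brakes too late; a high one brakes when the driver
    # was already planning to steer around the object.
    return time_to_collision_s(range_m, closing_speed_m_s) < ttc_threshold_s

print(should_emergency_brake(30.0, 10.0))   # TTC = 3.0 s -> False
print(should_emergency_brake(12.0, 10.0))   # TTC = 1.2 s -> True
```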

Lane Keeping Assist.

Using the same lane detection as lane departure warning, this adds steering input to stay within the detected lane, when there is a detected lane. Since most freeways and multi-lane roads have quite clear lane markings, this can allow a car to steer itself on such roads.
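In essence, lane keeping assist adds a steering correction proportional to how far the car has drifted from the detected lane centre. A toy proportional controller, with made-up gains, illustrates the idea:

```python
# Sketch: lane keeping as a proportional steering correction.

def lane_keep_steering(lateral_offset_m, lane_detected, gain=0.1, max_cmd=0.3):
    """lateral_offset_m: distance from the lane centre (+ = car is right of centre).
    Returns a steering command (+ = steer left), or None if no lane is detected."""
    if not lane_detected:
        return None                      # hand control back to the driver
    cmd = gain * lateral_offset_m        # steer back toward the lane centre
    return max(-max_cmd, min(max_cmd, cmd))

print(lane_keep_steering(0.5, True))     # drifted 0.5 m right -> 0.05 (steer left)
print(lane_keep_steering(0.5, False))    # no lane detected -> None
```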

Blind Spot Monitoring.

This normally uses radar. If there is something in the relevant area of the field of view, it does not matter what that object is, so radar, with its good speed detection, does the job well even though its image is very fuzzy.
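Blind spot monitoring only needs to answer ‘is anything inside this box of space beside the car?’, which even low resolution radar answers easily. A minimal sketch, with an arbitrary detection zone:

```python
# Sketch: blind spot monitoring as a zone check on radar returns.
# Each return is (x_m, y_m): x is sideways (+ = right), y is forwards (+ = ahead).

BLIND_SPOT_ZONE = {"x_min": 1.0, "x_max": 4.0, "y_min": -5.0, "y_max": 1.0}

def blind_spot_warning(radar_returns, zone=BLIND_SPOT_ZONE):
    """True if any return lies in the zone, regardless of what the object is."""
    return any(zone["x_min"] <= x <= zone["x_max"] and
               zone["y_min"] <= y <= zone["y_max"]
               for x, y in radar_returns)

print(blind_spot_warning([(2.0, -2.0)]))   # car alongside on the right -> True
print(blind_spot_warning([(0.0, 30.0)]))   # car far ahead in own lane -> False
```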

Driver Monitoring.

More to be added, but these include:

  • steering wheel sensors to detect the driver has their hands on the wheel
  • analysis of lane centring consistency
  • cameras on the driver to detect attention to the road

Park Assist Sensors: Ultrasonic.

These use the sonar sensors described above to give a warning of the distance from each sensor to the nearest object.

Just what is autonomous driving, and why bother?

What Problems Can Be Solved? What are the Goals?

Why are we bothering to have automated vehicles?  Answer: Because there are two problems with humans driving cars:

  • If cars can drive themselves, costs will be reduced and time spent driving can be saved.
  • We may be able to eliminate human driver errors that result in accidents, which can result in injuries and even fatalities.

Saving time and costs.

This objective is simply economic: can we produce the technology at a viable price? Some may be concerned that replacing commercial drivers will increase unemployment, or point out that non-commercial driving can actually be pleasurable. These are complex moral and subjective arguments, and outside the discussion of this page. From the point of view of solving the ‘problem’, the concept is easy: save a human being from needing to do the driving.

Reduce Vehicle Accidents through human error.

Road accidents constitute a serious issue. Some statistics from the Association for Safe International Road Travel (more on their site):

  • Nearly 1.3 million people die in road crashes each year, on average 3,287 deaths a day.
  • An additional 20-50 million are injured or disabled annually.
  • Road crashes are the leading cause of death among young people ages 15-29, and the second leading cause of death worldwide among young people ages 5-14.
  • Each year nearly 400,000 people under 25 die on the world’s roads, on average over 1,000 a day.
  • Road crashes cost USD $518 billion globally, costing individual countries from 1-2% of their annual GDP.

Ok, the problems are serious, but what is actually the cause?  Are humans just not equipped for driving?

Answer: Humans are fully capable of being good drivers; they just do not reliably apply themselves to the task, and are not trained for all situations.

Humans are capable of driving, the causes of accidents are humans not fully utilising their capability through:

  • inattention
  • driver impairment (tired, alcohol, drugs etc.)
  • risk taking

Some road statistics can be found here. While searching for stats I found lots of opinions rather than stats, and most of these are distorted, but each category always breaks down into one of the three above.  (I will post more detailed analysis of this another time).

An alternative answer of ‘it was beyond my ability to be driving safely’ can be true for conditions with snow and ice…. but has to be combined with ‘and I was unable to determine that it was unsafe’ otherwise this can also be reduced to ‘risk taking’.

Imagine a leading racing car driver paying full attention to the task of driving you around all day – not racing, but driving with a margin of safety and the goal of avoiding the risk of accidents! I suggest that if every car could be driven to that level all of the time, with the driver never distracted or impaired, the goals would be realised.

The problem is not that humans lack the capability to drive safely, it is that they do not always use their abilities to drive safely.

What is the path to autonomous driving, that’s safer than now?

Matching the capabilities of a human driver.

The Requirements: Matching a fully attentive human.

Humans use:

  • Eyesight: visual sensors and image processing.
  • Hearing: audio sensors and auditory processing.
  • Intelligence: Object recognition and behaviour prediction.

The main sensors for human driving are the eyes, which can (with the aid of mirrors) detect an image from almost any direction around the vehicle.

Auditory sensing and processing are used to detect events not able to be seen at the time, such as an approaching emergency vehicle or an unusual event such as an accident.

Intelligence uses the information from the sensors to build a mental model of everything in the surrounding environment:

  • each vehicle and the expected behaviour of that vehicle.
  • the road, and road surface and any obstacles.
  • traffic signals, intersections, and hazards.

Potential Improvements: Reliability, Focus and Multitasking.

All statistics support that humans are perfectly (or almost perfectly) capable of driving a car, within the currently defined limits, when fully focused on the task and not impaired in some way, or taking risks outside prescribed boundaries.

The problem is reliability, not ability. If a car could self-drive as well as a well trained and attentive driver who is not impaired or distracted, the car could avoid almost all accidents.

So how do humans drive?

The key ability is the intelligence to determine what is in the image, how objects in the image are moving, and how they will move in the near future.

To replicate unimpaired, non risk taking human drivers, we need simple sensors combined with advanced AI.

The current limitation is that we have enhanced sensors combined with extremely primitive AI.

Autonomous driving requires the same level of ability as a human to drive safely, but always with full attention, without impairment or risk taking.

Lidar/Radar and Maps are not substitutes for Intelligence.

Radar or Lidar.

I have never seen any accident analysis that concluded: ‘If the driver only had radar in addition to his eyesight, the accident never would have happened.’ Maybe it really has, but I suggest it would be rare.

The main difference between human eyes, LIDAR and RADAR is that the visual information from human eyes requires far more processing to extract the necessary information. To accurately determine distance, data from human eyes must be fully mapped into a complete picture of the environment. RADAR and LIDAR both provide distance information without the need to form a complete picture of the environment. The resulting trap is that working without that complete map of the environment means the ‘driverless car’ will normally be working with a less complete picture of the environment than cars with human drivers. An accurate vision system can determine all that is needed without the addition of RADAR or LIDAR, although the addition of RADAR can be very helpful to deal with poor visibility such as fog.

Using LIDAR:

There is an object of approximately XZY shape moving at precisely speed A and direction B.

Human Eyesight (processed by human brain):

There is a guy in a badly maintained 2012 Chevrolet in the next lane and he is looking for an opportunity to change into my lane.

Which sensor is detecting the most useful information?

Certainly the LIDAR sensor requires far less intelligence to produce somewhat useful information than is required to process the two visual images reaching the human eyes, but the reward for deep image processing is significant, and is exactly what is often lacking in current autonomous mode systems.

Detailed Mapping is not the holy grail.

One suggestion is that some levels of autonomous driving will be able to operate only within specific pre-mapped environments. The concept is that using exact vehicle GPS position data, the vehicle will be able to construct a complete picture of the environment from the data already on file. Of course, this also requires that all dynamic objects such as vehicles are reporting their position live to the system, so that they appear on the ‘map’.

Now just imagine what happens when there is an accident that leaves a damaged vehicle on the road that is too damaged to report its position? Or a fallen tree that does not report its position?

Level 4 autonomy provides for operation within a geofenced area. However, in practice, this can only work if the car is able to detect any mismatch between the conditions that allowed the area to meet the geofence criteria and current conditions. For example, a road could be included in the geofenced area because the lane markings all meet the criteria for driverless cars to be able to detect the markings. However, a car self-driving under such conditions needs to be able to detect the absence of lane markings, and require driver intervention until such markings are again present.
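In other words, a geofence is only safe when paired with live checks that the conditions which justified the geofence still hold. A simplified sketch of that gating logic, with hypothetical road names and flags:

```python
# Sketch: Level-4-style gating -- self-driving is allowed only while inside
# the geofenced area AND the live sensors still confirm the mapped conditions.

GEOFENCED_ROADS = {"highway_12", "ring_road_east"}   # pre-approved, pre-mapped roads

def may_self_drive(current_road, lane_markings_detected, map_matches_sensors):
    if current_road not in GEOFENCED_ROADS:
        return False                     # outside the approved area
    if not lane_markings_detected:
        return False                     # mapped markings missing or obscured
    if not map_matches_sensors:
        return False                     # e.g. fallen tree or crashed car not on the map
    return True

print(may_self_drive("highway_12", True, True))    # True: conditions match the map
print(may_self_drive("highway_12", False, True))   # False: hand back to the driver
```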

Managing the interim steps.

The interim challenge.

Roads with only autonomous vehicles are one thing, but they have to mix with conventional cars. If all other cars are autonomous and will avoid accidents, could I just always cut in, and even drive through red lights, knowing they would all just stop?

Could pedestrians just cross at any time and bring traffic to a halt? Part of driving is ‘etiquette’. Just following the rules is not enough, there is some pressure to drive in a manner acceptable to others. But what if the ‘others’ are robots?

Better Sensors Work Well as Assistance, but not for Autonomous Driving on roads shared with human drivers.

RADAR and LIDAR are narrowly focused sensors, as they accurately determine very specific information about the environment. Human eyesight is a far more general purpose sense: determining even object size and distance from vision requires complex computation of many factors of the environment, combined with a reference database of previous calculations and learned object pattern recognition, but the result is a far wider spectrum of data.

Humans make use of that far wider spectrum of data. While RADAR can track the vehicle in front with perfect accuracy and is never distracted from that task, RADAR alone is poor data for predicting that the vehicle in front is about to turn off, and thus is not braking for an obstacle ahead of it. Once the car in front has made the exit, the driverless system then has to deal with whatever is revealed, without having planned in advance, and as a result the driverless car could be on a collision course with whatever was previously in the RADAR ‘shadow’ of the vehicle that was until recently in front.

I have a car equipped with RADAR cruise. The system detects other vehicles not because it recognises the RADAR pattern of a vehicle, but because it detects movement of an object consistent with the movement of a vehicle. The system tends to track very few vehicles at any one time (it has the appearance of only tracking one vehicle), has very little data on what is being tracked, and generally fails to recognise stationary vehicles. The system would crash into a vehicle if that vehicle had not been observed moving and I did not intervene.

Generally this system is a useful addition providing assistance, but it is best used when the driver is fully aware of what the driver assistance ‘sees’, and more importantly what it will not ‘see’, so the driver knows what can be delegated and what they as driver must assume is their role. The system is clearly not capable of stand-alone operation. The system does not pretend to be capable of such operation, so this is no problem and it can be used as a driving aid, but a completely different approach, with greater reliance on data derived from intelligent image processing, is required to progress to stand-alone operation.

Our regulations and entire road system are designed around humans. Extra capabilities such as RADAR can help with some tasks, but the system was not designed around these additional capabilities; it was designed around the combination of eyesight and the ability to build a 3D picture of the environment that a human can build. Without a similar level of ability, autonomous cars are not equipped to share that same system.


Current Driver Displays are inadequate for progression to Autonomous.

Mercedes-Benz Distronic Plus

I have a display that shows only one other vehicle. Hopefully, internally, a full system would work with a map of all surrounding cars in a 360 degree view and be tracking what every other vehicle is doing. Also, in a reproduction of what a driver would do, a system could also detect the stream of traffic ahead of the vehicle in front. The current system I have displays none of this to me, but from what I have seen, neither does a Tesla display. If the system does have this other data, more should be displayed as we move along the journey to drivers being confident in their cars to drive autonomously.

Where is the display of all surrounding traffic including the line of cars ahead?

Conclusion: The current experience looks close to autonomous driving, but is fragile and unreliable.

A classic case of not realising what we do not know?  Current systems simply work with too little data. They need to apply far more AI, and despite adding radars and perhaps even lidars, are so far unable to reproduce what an attentive human driver can achieve using only their eyes and mirrors for sensors.

Taking a shortcut to the ‘low hanging fruit’ of simplified data gets close, but ultimately still provides less safety than an attentive, focused human driver. If there is the will, the technology can get there, but viewing all vehicles as identical and not bothering to build a full model of all surrounding vehicles and interpret their intentions can only fall short of a human driver.


https://insideevs.com/news/529098/tesla-fsd-musk-1000-10times/

Updates:

  • *to follow – more analysis on the road ahead, and technology updates.
  • *2022 Sept 14: Added more information on Adaptive Cruise Control and current systems.
  • 2017 Dec 16: original page.