One Finite Planet

Autonomous Cars: What’s needed and are Lidar & Maps Fools-Gold

The path to autonomous driving.

A good starting point is a look at the technology in widespread use today, and just how it falls short of autonomous driving.

The next step is to consider what we want to do that we can’t do now, and why.

That creates the picture of what we have, and where we want to go, leaving the task of mapping out a route to get there.

Driver Assist Technologies.

Sensors Behind the technologies: AI, Cameras, Sonar, Radar and Lidar.

The limitations of Radar, Sonar and Cameras.

Most modern vehicles, as of 2017, have cameras, sonar and radar, but neither lidar nor AI beyond simple lane-detection software.

Cameras generate an image that is just pixels. While comparing frames could make it possible to track movement, doing so requires AI, and is thus covered under AI below. Without AI, all that can be recognised from an image is patterns, such as lane markers or traffic signs. Vehicles vary too much in colour and contrast with the background, while lane markers and traffic signs have very clear and simple contrasts.

Sonar only works over short distances, but can measure those short distances with much greater precision than radar. All that comes from a sonar sensor is how close the closest object within its field of view is: one distance, no ‘image’ of the environment.

Radar is very good at detecting motion, and the speed of that motion. Think of a really pixelated image of the car in front. So pixelated, that if you did not know it was moving, it would be hard to decide if it was a car or not. So radar sensors tell the car ‘I can see something, and I know the exact distance, and how fast it is moving’. When close enough, often within 3 metres, some systems can determine the image appears to be a vehicle even if it was stationary when first detected. But even then, on the basis of radar alone, that would be a guess.

Radar sensors send out and receive back radio waves: like shining a torch to see, but a radio torch. Radio waves have a much longer wavelength than light, which results in images of much lower resolution than what we can see with our eyes. Imagine a car as a really pixelated image, so pixelated that it would be impossible to be sure it is a car from the image alone. But radar is far better than eyesight for measuring distance. Since radar ‘turns the torch on’ only in pulses, it can use the time since the ‘torch was turned on’ to measure distance, and from the Doppler shift of the returning waves it can tell how fast parts of that highly pixelated image are moving. Thus radar works on ‘I am guessing it is a car because it is moving, and I know quite accurately how fast’.
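As a rough illustration of how radar can know distance and speed so precisely while ‘seeing’ so little, here is a minimal sketch of the pulse-timing and Doppler arithmetic. The numbers and function names are invented for illustration, not taken from any particular radar:

```python
# Illustrative sketch (not any specific radar API): how pulse timing and
# Doppler shift translate into the distance and speed a radar reports.
C = 299_792_458.0  # speed of light, m/s

def distance_from_echo(round_trip_seconds: float) -> float:
    """Distance to the target: the pulse travels out and back, so halve the trip."""
    return C * round_trip_seconds / 2.0

def relative_speed_from_doppler(tx_hz: float, rx_hz: float) -> float:
    """Closing speed from the Doppler shift between transmitted and received
    frequency (positive = target approaching), narrowband approximation."""
    return C * (rx_hz - tx_hz) / (2.0 * tx_hz)

# Example: an echo after 0.8 microseconds means a target roughly 120 m away.
print(distance_from_echo(0.8e-6))                         # ~120 m
print(relative_speed_from_doppler(77e9, 77e9 + 14_000))   # ~27 m/s closing
```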

Lidar.

Lidar is like radar, but with the resolution of a camera. Lidar on cars sends out ‘pulses’ of light; neither visible light nor ultraviolet is a good choice as they would be bad for humans, so infrared is the most common. Like night vision goggles, the images are normally monochrome. Not as clear as with visible light, but that can be corrected if combined with camera images by AI. What lidar adds to the image is depth, making object recognition much easier.
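To illustrate why that depth matters, here is a small sketch converting a single lidar return (beam angles plus measured range) into a 3D point; a camera pixel, by contrast, is only a colour value. The angles and ranges are made up for illustration:

```python
import math

# Illustrative sketch: one lidar return already encodes a 3D position,
# whereas a camera pixel is only a colour with no distance information.
def lidar_return_to_point(azimuth_deg: float, elevation_deg: float, range_m: float):
    """Convert one lidar return (beam angles + measured range) to an x, y, z point."""
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)   # forward
    y = range_m * math.cos(el) * math.sin(az)   # sideways
    z = range_m * math.sin(el)                  # up/down
    return (x, y, z)

camera_pixel = (128, 110, 96)   # just a colour: no idea how far away it is
lidar_point = lidar_return_to_point(2.0, -1.5, 35.0)  # ~35 m ahead, slightly off-axis and below
print(lidar_point)
```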

AI: Vision Systems.

We often say we see with our eyes, but if that was all we did, all we would see is pixels. Instead, we see objects. For humans, the ‘intelligence’ of turning pixels into objects comes from a whole lifetime of experience of what objects are, seeing them from every angle. Turning images from radar, lidar and cameras into a 3D map of moving objects is quite a challenge, one that our brains perform all day every day, making it seem simple to us.

ACC: Adaptive Cruise Control

The concept is well explained elsewhere, but some points on the limitations of Adaptive Cruise Control are worth making.

The most common implementation uses Doppler radar to measure the speed of objects relative to the car, then combines that with the car’s own speed to calculate the absolute speed of objects in the environment.

Most ACC systems are based on RADAR, which provides a very low resolution picture of the world, but can detect the speed things are moving. For these systems, all objects that have not been detected to move are considered ‘ignored objects’ and just part of the background. Such objects are quite often described as ‘stationary objects’ in descriptions of system limitations, but for most systems, the ‘ignored objects’ are only those that have never been detected to move. If the vehicle ahead has been detected as moving, the system will not then ignore that vehicle when it stops.

This means that if there is a vehicle ahead within range of the sensors, and that ‘vehicle ahead’ stops at traffic lights, the ACC system will correctly bring the car to a halt behind it. However, if the ‘vehicle ahead’ changes lanes, making the new potential ‘vehicle ahead’ one that is already stopped, many ACC systems will not detect that new vehicle and adjust, leaving emergency braking as the only potential safeguard, and emergency braking would only respond at the last possible moment, creating an emergency. In practice, when the vehicle ahead has never been detected while it was moving, for most ACC systems the driver will need to intervene and apply the brakes.

A tree that has fallen across the road would also be an ‘ignored object’.
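A minimal sketch of that ‘ignored objects’ behaviour, with invented names and an assumed noise threshold rather than any real system’s logic, might look like this:

```python
from dataclasses import dataclass

# Illustrative sketch of the 'ignored objects' behaviour described above;
# the names and the 1 m/s threshold are assumptions, not a real ACC's logic.
@dataclass
class Track:
    distance_m: float
    relative_speed_mps: float       # from Doppler: target speed minus our own speed
    ever_seen_moving: bool = False

def update_track(track: Track, ego_speed_mps: float) -> None:
    """Mark the track as a vehicle once its absolute speed is clearly non-zero."""
    absolute_speed = track.relative_speed_mps + ego_speed_mps
    if abs(absolute_speed) > 1.0:   # assumed noise threshold (~3.6 km/h)
        track.ever_seen_moving = True

def is_followable(track: Track) -> bool:
    """Radar-only ACC follows only objects it has, at some point, seen move."""
    return track.ever_seen_moving

# A car already stopped at the lights: it closes on us at 25 m/s, but its
# absolute speed is zero, so it is never promoted from 'background'.
stopped_car = Track(distance_m=60.0, relative_speed_mps=-25.0)
update_track(stopped_car, ego_speed_mps=25.0)
print(is_followable(stopped_car))   # False -> the ACC will not slow for it
```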

Overcoming this ‘never detected to move’ problem requires a system using not just radar, but higher resolution cameras or LIDAR systems, and AI capable of recognising objects. These systems can detect that an object is a vehicle, even if it has never been seen to move.

Lane Departure Warning.

This uses forward cameras to detect the lane. Lanes are not objects but simple patterns, with lane markings designed to have good contrast, and knowing the distance to the pattern is not required. Detecting lanes in an image requires far less image processing than detecting 3D objects. Even so, most systems only recognise ‘the lane’ under limited circumstances: there is significant time during which these systems cannot recognise where the lane is located.
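To show how little is needed compared with 3D object recognition, here is a sketch of a classic edge-detection plus line-fitting approach on a synthetic image. Production systems use their own bespoke pipelines; this uses OpenCV purely for illustration:

```python
import numpy as np
import cv2  # OpenCV, used here only to illustrate the idea

# Illustrative sketch: lane markings are high-contrast line patterns, so a
# simple edge-detect + line-fit finds them without any 3D understanding.
road = np.zeros((200, 320), dtype=np.uint8)        # synthetic dark road surface
cv2.line(road, (60, 200), (140, 0), 255, 4)        # bright left lane marking
cv2.line(road, (260, 200), (180, 0), 255, 4)       # bright right lane marking

edges = cv2.Canny(road, 50, 150)                   # find high-contrast edges
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=30,
                        minLineLength=40, maxLineGap=10)
print(f"candidate lane-line segments found: {0 if lines is None else len(lines)}")
```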

The departure warning system simply provides a warning when it believes the vehicle is going to be steered out of the lane boundaries by the driver.

AEB: Automatic Emergency Braking.

Even without being able to detect lanes, there comes a point when it is clear the vehicle is not going to be steered around an object detected by forward facing radar. Unfortunately, without lane detection, it is never clear until it is, literally, an emergency. Even with lane detection, bringing the car to a halt every time there is something in the lane would stop the car even when the driver is in full control and has a lane change planned. Collision warnings in such situations are annoying; false emergency braking can be dangerous.

Remember that without sophisticated AI, systems have only a camera image that is just pixels with no distance information, and a radar image of very low resolution. The choice is to apply the emergency brakes far too often, or not often enough or soon enough.
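A sketch of that trade-off: with only distance and closing speed to work from, the only real lever is a time-to-collision threshold. The numbers below are assumptions for illustration, not any manufacturer’s calibration:

```python
# Illustrative sketch of the trade-off: a radar-only AEB can only pick a
# time-to-collision threshold. Threshold and example numbers are assumed.
def time_to_collision(distance_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if nothing changes; infinite when not closing."""
    return distance_m / closing_speed_mps if closing_speed_mps > 0 else float("inf")

def should_emergency_brake(distance_m: float, closing_speed_mps: float,
                           ttc_threshold_s: float = 1.5) -> bool:
    # A large threshold brakes for harmless situations (e.g. a planned lane change);
    # a small one brakes so late that the situation is already an emergency.
    return time_to_collision(distance_m, closing_speed_mps) < ttc_threshold_s

print(should_emergency_brake(40.0, 10.0))   # TTC 4.0 s  -> no intervention yet
print(should_emergency_brake(12.0, 10.0))   # TTC 1.2 s -> emergency braking
```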

Lane Keeping Assist.

Using the same lane detection as lane departure warning, this adds steering input to stay within the detected lane, when there is a detected lane. Since most freeways and multi-lane roads have quite clear lane markings, this can allow a car to steer itself on such roads.
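Once the lane is detected, the ‘assist’ part can be as simple as a feedback controller nudging the steering toward the lane centre. A minimal sketch, with an invented gain and limits:

```python
# Illustrative sketch: lane keeping assist as a proportional correction toward
# the lane centre. The gain and steering limit are invented for illustration.
def lane_keep_steering(lateral_offset_m: float, lane_detected: bool,
                       gain: float = 0.08, max_steer: float = 0.3) -> float:
    """Return a steering correction (radians) from the offset to the lane centre.
    Positive offset = car is right of centre, so steer left (negative)."""
    if not lane_detected:
        return 0.0                        # no detected lane, no assistance
    correction = -gain * lateral_offset_m
    return max(-max_steer, min(max_steer, correction))

print(lane_keep_steering(0.4, True))      # drifted 0.4 m right -> small left correction
print(lane_keep_steering(0.4, False))     # lane not recognised -> no input
```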

Blind Spot Monitoring.

This normally uses radar. If there is something in the relevant area of the field of view, it does not matter what that object is, so radar, with its good speed detection, does the job well even though its image is very fuzzy.
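A sketch of why the fuzzy image is enough: the system only needs to know whether anything is inside a fixed zone beside the car. The zone dimensions below are assumptions:

```python
# Illustrative sketch: blind spot monitoring only asks 'is something in the
# zone beside the car?', never 'what is it?'. Zone dimensions are assumed.
def blind_spot_alert(detections,
                     zone_behind_m=(0.5, 5.0),
                     zone_side_m=(1.0, 3.5)) -> bool:
    """detections: list of (metres behind the mirror, metres out from the car's side)."""
    for behind, side in detections:
        if zone_behind_m[0] <= behind <= zone_behind_m[1] and \
           zone_side_m[0] <= side <= zone_side_m[1]:
            return True
    return False

print(blind_spot_alert([(2.0, 1.5)]))    # car alongside in the next lane -> warn
print(blind_spot_alert([(30.0, 1.5)]))   # far behind -> no warning
```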

Driver Monitoring.

More to be added, but these include:

  • steering wheel sensors to detect the driver has their hands on the wheel
  • analysis of lane centring consistency
  • cameras on the driver to detect attention to the road

Park Assist Sensors: Ultrasonic.

These use the sonar sensors described above to give a warning of the distance from each sensor to its nearest object.
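A minimal sketch of the idea: the nearest sonar distance is simply mapped to a warning urgency. The thresholds below are assumptions, not any manufacturer’s specification:

```python
from typing import Optional

# Illustrative sketch with assumed thresholds: the nearest ultrasonic
# distance is mapped to a beep interval.
def parking_beep_interval(nearest_distance_m: float) -> Optional[float]:
    """Seconds between warning beeps; None means silent, 0.0 means continuous tone."""
    if nearest_distance_m > 1.5:
        return None               # nothing close enough to warn about
    if nearest_distance_m > 0.8:
        return 1.0                # slow beeps
    if nearest_distance_m > 0.3:
        return 0.3                # fast beeps
    return 0.0                    # continuous tone: stop now

# Each sensor reports one distance; the warning follows the closest of them.
print(parking_beep_interval(min(1.2, 0.6, 2.4)))   # -> 0.3
```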

Just what is autonomous driving, and why bother?

What Problems Can Be Solved? What are the Goals?

Why are we bothering to have automated vehicles?  Answer: Because there are two problems with humans driving cars:

  • If cars can drive themselves, costs will be reduced and time spent driving can be saved.
  • We may be able to eliminate human driver errors that result in accidents, which can result in injuries and even fatalities.

 Saving the time and costs.

This objective is simply economic: can we produce the technology at a viable price? Some may be concerned that replacing commercial drivers will increase unemployment, or point out that non-commercial driving can actually be pleasurable. These are complex moral and subjective arguments, and outside the discussion of this page. From the point of view of solving the ‘problem’, the concept is easy: save a human being from needing to do the driving.

Reduce Vehicle Accidents caused by human error.

Road accidents constitute a serious issue. Some stats from the Association for Safe International Road Travel (more on their site):

  • Nearly 1.3 million people die in road crashes each year, on average 3,287 deaths a day.
  • An additional 20-50 million are injured or disabled annually.
  • Road crashes are the leading cause of death among young people ages 15-29, and the second leading cause of death worldwide among young people ages 5-14.
  • Each year nearly 400,000 people under 25 die on the world’s roads, on average over 1,000 a day.
  • Road crashes cost USD $518 billion globally, costing individual countries from 1-2% of their annual GDP.

Ok, the problems are serious, but what is actually the cause?  Are humans just not equipped for driving?

Answer: Humans are fully capable of being good drivers; they are just not reliably applied to the task, or trained for all situations.

Humans are capable of driving; the causes of accidents are humans not fully utilising their capability, through:

  • inattention
  • driver impairment (tired, alcohol, drugs etc.)
  • risk taking

Some road statistics can be found here. While searching for stats I found lots of opinions rather than stats, and most of these are distorted, but each category always breaks down into one of the three above.  (I will post more detailed analysis of this another time).

An alternative answer of ‘it was beyond my ability to drive safely’ can be true for conditions with snow and ice, but it has to be combined with ‘and I was unable to determine that it was unsafe’, otherwise this too reduces to ‘risk taking’.

Imagine a leading racing car driver paying full attention to the task of driving you around all day, not racing, but driving with a margin of safety and the goal of avoiding the risk of accidents! I suggest that if every car could be driven to that level all of the time, with the driver never distracted or impaired, the goals would be realised.

The problem is not that humans lack the capability to drive safely, it is that they do not always use their abilities to drive safely.

What is the path to autonomous driving, that’s safer than now?

Matching the capabilities of a human driver.

The Requirements: Matching a fully attentive human.

Humans use:

  • Eyesight: visual sensors and image processing.
  • Hearing: audio sensors and auditory processing.
  • Intelligence: Object recognition and behaviour prediction.

The main sensors for human driving are eyes that can (with the aid of mirrors) detect an image from almost any direction around the vehicle.

Auditory sensing and processing are used to detect events not able to be seen at the time, such as an approaching emergency vehicle or an unusual event such as an accident.

Intelligence uses the information from the sensors to build a mental model of everything in the surrounding environment:

  • each vehicle and the expected behaviour of that vehicle.
  • the road, and road surface and any obstacles.
  • traffic signals, intersections, and hazards.

Potential Improvements: Reliability, Focus and Multitasking.

All statistics support that humans are perfectly (or almost perfectly) capable of driving a car, within the currently defined limits, when fully focused on the task, not impaired in some way, and not taking risks outside prescribed boundaries.

The problem is reliability, not ability. If a car could self drive as well as a well-trained and attentive driver who is not impaired or distracted, the car could avoid almost all accidents.

So how do humans drive?

The key ability is the intelligence to determine what is in the image, and how objects in the image are moving and will move in the near future.

To replicate unimpaired, non risk taking human drivers, we need simple sensors combined with advanced AI.

The current limitation is that we have enhanced sensors combined with extremely primitive AI.

Autonomous driving requires the same level of ability as a human to drive safely, but always with full attention, without impairment or risk taking.

Lidar/Radar and Maps are not substitutes for Intelligence.

Radar or Lidar.

I have never seen any accident analysis that concluded: ‘If the driver only had radar in addition to his eyesight, the accident never would have happened.’ Maybe such a case really exists, but I suggest it would be rare.

The main difference between human eyes and LIDAR or RADAR is that the visual information from human eyes requires far more processing to extract the necessary information. To accurately determine distance, data from human eyes must be fully mapped into a complete picture of the environment. RADAR and LIDAR both provide distance information without the need to form a complete picture of the environment. The resulting trap is that skipping that complete map of the environment means the ‘driverless car’ will normally be working with a less complete picture of the environment than cars with human drivers. An accurate vision system can determine all that is needed without the addition of RADAR or LIDAR, although the addition of RADAR can be very helpful in dealing with poor visibility such as fog.

Using LIDAR:

There is an object of approximately XYZ shape moving at precisely speed A and direction B.

Human Eyesight (processed by human brain):

There is a guy in a badly maintained 2012 Chevrolet in the next lane and he is looking for an opportunity to change into my lane.

Which sensor is detecting the most useful information?

Certainly the LIDAR sensor requires far less intelligence to produce somewhat useful information than is required to process the two visual images reaching each human eye, but the reward for deep image processing is significant, and that processing is exactly what is often lacking in current autonomous-mode systems.

Detailed Mapping is not the holy grail.

One suggestion is that some levels of autonomous driving will be able to operate only within specific pre-mapped environments. The concept is that using exact vehicle GPS position data, the vehicle will be able to construct a complete picture of the environment from the data already on file. Of course, this also requires that all dynamic objects such as vehicles are reporting their position live to the system, so that they appear on the ‘map’.

Now just imagine what happens when there is an accident that leaves a damaged vehicle on the road that is too damaged to report its position? Or a fallen tree that does not report its position?

Level 4 autonomy provides for operation within a geofenced area. However, in practice, this can only work if the car is able to detect any mismatch between the conditions that allowed the area to meet the geofence criteria, and current conditions. For example, a road could be included in the geofenced area because the lane markings all meet the criteria for driverless cars to be able to detect the markings. However, a car self-driving under such conditions needs to be able to detect the absence of lane markings, and require driver intervention until such markings are again present.
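A sketch of that requirement: being inside the geofence is necessary but not sufficient, and the live sensor checks are what make the geofence safe. All names and criteria below are invented for illustration:

```python
# Illustrative sketch: the geofence only matters if the car keeps verifying
# that the conditions which qualified the area still hold right now.
def may_drive_autonomously(inside_geofence: bool,
                           lane_markings_detected: bool,
                           map_matches_observation: bool) -> bool:
    """Inside the mapped area AND the live sensors still confirm what the
    map promised (e.g. lane markings are actually visible)."""
    return inside_geofence and lane_markings_detected and map_matches_observation

# Lane markings worn away since the area was mapped -> hand back to the driver.
print(may_drive_autonomously(True, False, True))   # False
```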

Managing the interim steps.

The interim challenge.

Roads with only autonomous vehicles are one thing, but they have to mix with conventional cars. If all other cars are autonomous and will avoid accidents, could I just always cut in and even drive through red lights knowing they would all just stop?

Could pedestrians just cross at any time and bring traffic to a halt? Part of driving is ‘etiquette’. Just following the rules is not enough, there is some pressure to drive in a manner acceptable to others. But what if the ‘others’ are robots?

Better Sensors Work Well as Assistance, but not for Autonomous Driving on roads shared with human drivers.

RADAR and LIDAR are narrowly focused sensors, as they accurately determine very specific information about the environment. Human eyesight is a far more general purpose sense; determining even object size and distance from vision requires complex computation of many factors of the environment, combined with a reference database of previous calculations and learned object pattern recognition, but the result is a far wider spectrum of data.

Humans make use of that far wider spectrum of data. While RADAR can track the vehicle in front with perfect accuracy and is never distracted from that task, RADAR alone is poor data for predicting that the vehicle in front is about to turn off, and thus is not braking for an obstacle ahead of it. Once the car in front has made the exit, the driverless system has to deal with whatever is then revealed without having planned in advance, and as a result the driverless car could be on a collision course with whatever was previously in the RADAR ‘shadow’ of the vehicle that was, until recently, in front.

I have a car equipped with RADAR cruise. The system detects other vehicles not because it recognises the RADAR pattern of a vehicle, but because it detects movement of an object consistent with movement of a vehicle. The system tends to track very few vehicles at any one time (it has the appearance of only tracking one vehicle), has very little data on what is being tracked, and generally fails to recognise stationary vehicles. The system would crash into a vehicle if that vehicle had not been observed moving and I did not intervene.

Generally this system is a useful addition providing assistance, but it is best used when the driver is fully aware of what the driver assistance ‘sees’, and more importantly what it will not ‘see’, so the driver knows what can be delegated, and what they as driver must assume remains their role. The system is clearly not capable of stand-alone operation. The system does not pretend to be capable of such operation, so this is no problem and it can be used as a driving aid, but a completely different approach, with greater reliance on data derived from intelligent image processing, is required to progress to stand-alone operation.

Our regulations and entire road system are designed around humans. Extra capabilities such as RADAR can help with some tasks, but the road system was not designed around these additional capabilities; it was designed around the combination of eyesight and the ability to build a 3D picture of the environment that a human can build. Without a similar level of ability, autonomous cars are not equipped to share that same system.


Current Driver Displays are inadequate for progression to Autonomous.

Mercedes-Benz Distronic Plus

I have a display that shows only one other vehicle. Hopefully, internally, a full system would work with a map of all surrounding cars in a 360 degree view and be tracking what every other vehicle is doing. Also, in a reproduction of what a driver would do, a system could detect the stream of traffic ahead of the vehicle in front. The current system I have displays none of this to me, but from what I have seen, neither does a Tesla display. If the system does have this other data, more should be displayed as we move along the journey to the point where drivers can be confident in their cars driving autonomously.

Where is the display of all surrounding traffic including the line of cars ahead?

Conclusion: The current experience looks close to autonomous driving, but is fragile and unreliable.

A classic case of not realising what we do not know?  Current systems simply work with too little data. They need to apply far more AI, and despite adding radars and perhaps even lidars, are so far unable to reproduce what an attentive human driver can achieve using only their eyes and mirrors for sensors.

Taking a short cut to the ‘low hanging fruit’ of simplified data gets close, but ultimately still provides less safety than an attentive, focused human driver. If there is the will, the technology can get there, but viewing all vehicles as identical, and not bothering to build a full model of all surrounding vehicles and interpret their intentions, can only fall short of a human driver.


https://insideevs.com/news/529098/tesla-fsd-musk-1000-10times/

Updates:

  • *to follow – more analysis on the road ahead, and technology updates.
  • *2022 Sept 14: Added more information on Adaptive Cruise Control and current systems.
  • 2017 Dec 16: original page.
