Self-driving technology has made LIDAR a hot topic, but there are a lot of myths. So here’s one resource for how LIDAR works and the “pros and cons” of each of the technologies for self-driving: LIDAR, radar, cameras and sonar.
Hopefully this saves the need for repetition in upcoming videos about self-driving, what’s required, and how far each player has progressed towards fully automated driving.
This is the One Finite Planet EV video on the sensors for self-driving, which have far wider implications and play an increasing role in how AI and robots form their world view, and which are revolutionizing areas as diverse as how mobile phones process images through to evaluating tree canopy coverage and the health of the environment.
There are other sensors to be covered in other videos, but this one covers the sensors that operate at a distance.
Principles:
Pings & Echoes
LIDAR and radar are so similar that the best description of LIDAR is “radar that uses light instead of microwaves”, although that doesn’t help if you’re uncertain about what radar is. Perhaps the best way to understand how all three of LIDAR, radar and sonar work is to start with sonar, and to start by considering how a blind person navigates by making clicks and hearing the echoes.

I’ll try to link a video below, but the general principle is that the person can point in a certain direction, make a click, hear when the echo comes back, and work out how far away the closest thing is. Turn the head, make a click in a different direction, and they can build up a picture, albeit quite a low-resolution one, of the world around them and which objects are close.
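To make the ping-and-echo principle concrete, here is a minimal sketch (not from the video; the numbers and function name are purely illustrative) of the basic time-of-flight calculation: distance is the echo delay times the speed of the wave, halved because the wave travels out and back.

```python
# Minimal sketch of the ping/echo ("time of flight") principle.
# The wave travels out to the object and back, so divide by 2.

def echo_distance(echo_delay_s: float, wave_speed_m_s: float) -> float:
    """Distance to the reflecting object, given the round-trip echo delay."""
    return wave_speed_m_s * echo_delay_s / 2.0

# Sound in air travels at roughly 343 m/s, light at about 3e8 m/s.
print(echo_distance(0.01, 343.0))   # ~1.7 m  (ultrasonic-style echo after 10 ms)
print(echo_distance(1e-6, 3.0e8))   # ~150 m  (lidar/radar-style echo after 1 microsecond)
```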
Of course, sonar uses vibrations, or sound. Do we call it sound when the vibrations are only in the range we hear, or do we call all vibrations sound? I think that varies from person to person and with the situation, but you get the idea.
LIDAR & RADAR: Photon echoes.
LIDAR, radar and cameras all use photons, or electromagnetic radiation. Do we call all electromagnetic radiation light, or only the frequencies that we can actually see?
Once again, that varies, but the concept is the same. It’s all the same thing, with the key difference being the frequency of the photons, or light, in use. The sonar sensors in cars work at a frequency humans can’t hear, so they don’t disturb us. They work at around 40 kilohertz, which is around double the highest frequency we can actually hear. Interestingly, that is still within the range dogs can hear, but just as the blind person’s clicks aren’t that annoying to us humans, the sonar is only making little clicks, and most of the time making no sound at all, so it’s not that annoying to dogs either.
| Technology | Waves | Frequency / Wavelength | Freq/Wave. in cars |
|---|---|---|---|
| Ultrasonic | Sound/Vibrations | 20 kHz .. 500 kHz | 40 kHz |
| mm wave Radar | “Light”/Photons | 30 GHz .. 300 GHz | 76 GHz .. 81 GHz |
| Lidar | “Light”/Photons | 25,000 GHz .. 1.2 PHz (~12,000 nm .. 250 nm: IR, visible light, UV) | 331 THz / ~905 nm (Note 1); ~194 THz / ~1550 nm (Note 2) |
| Cameras | Visible light/Photons | 400,000 GHz .. 790,000 GHz (~700 nm .. ~400 nm) | Same |
Notes: (On screen in video, not in transcript)
905nm (near infrared):
- Technology: This wavelength is emitted by inexpensive and compact semiconductor lasers, similar to those in laser pointers but more powerful, since, although still harmful, this wavelength is less harmful to human eyes than visible laser light.
- Advantages: Lower cost, smaller form factor. Silicon-based detectors used at this wavelength are also very affordable and effective.
- Disadvantages: This wavelength is closer to visible light and each photon carries more energy than at 1550 nm. It poses a greater potential risk to human eyesight, which limits the maximum optical power (and thus the range) allowed by eye safety regulations.
- Typical Range: Up to 100-200 meters for most automotive and industrial applications.
1550nm (Short wave infrared):
- Advantages:
- Eye Safety: The human eye’s cornea and lens absorb light at 1550 nm much more effectively than at 905 nm, preventing it from reaching and focusing on the retina. This allows LiDAR systems to use much higher laser power without risking eye damage.
- Longer Range: The higher permitted power enables significantly longer detection ranges: 250 to 300 metres for flash lidar, and up to 10 km or even 20 km for the single-point beams used by scanning lidars.
- Better Performance in Weather: 1550 nm light tends to penetrate light rain, fog, and dust slightly better than 905 nm.
- Disadvantages: Higher cost and larger system size. Detectors for this wavelength (e.g., Indium Gallium Arsenide – InGaAs) are more expensive than silicon detectors.
Short summary:
Note 1: 905 nm lidar lasers use small, low-cost silicon components, but safety requirements at this frequency limit range to 100 to 200 metres.
Note 2: 1550 nm lidar requires larger lasers and receivers using more expensive InGaAs components, but can better penetrate rain, fog and dust, and has a range of around 300 metres for flash lidar and 10 km or even 20 km for scanning lidar.
The radar used in cars is typically millimetre wave radar, which describes everything from 30 to 300 gigahertz, but in cars it’s typically in the 76 to 81 gigahertz range, which has a wavelength of around 4 millimetres.
LIDAR typically uses photons in the near infrared range. In fact almost all cars use either 905 nanometre (around 331 THz) or 1550 nanometre (around 194 THz) lasers, both in the near infrared range. Cameras use photons in the visible spectrum, which is around 400 to 700 nanometres in wavelength, or 400,000 to 790,000 gigahertz in frequency.
As you can see, there’s a gap, but not a huge gap, between the lowest frequency of light we can see and the higher of the two LIDAR frequencies in use.
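The frequencies and wavelengths in the table above are two ways of describing the same thing, related by wavelength = speed of light ÷ frequency. Here is a quick sketch to sanity-check the table’s numbers (illustrative code, not from the video):

```python
# Wavelength <-> frequency conversion (lambda = c / f), as used in the table above.

C = 299_792_458  # speed of light, m/s

def wavelength_m(frequency_hz: float) -> float:
    return C / frequency_hz

def frequency_from_wavelength_hz(wavelength_m_value: float) -> float:
    return C / wavelength_m_value

print(round(frequency_from_wavelength_hz(905e-9) / 1e12))   # ~331 THz (905 nm lidar)
print(round(frequency_from_wavelength_hz(1550e-9) / 1e12))  # ~193 THz (1550 nm lidar; the table rounds this to ~194)
print(round(wavelength_m(77e9) * 1000, 1))                  # ~3.9 mm  (77 GHz radar)
```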
Why wavelength/frequency matters:
This means the higher-frequency 905 nm light is just below visible light, which restricts its power, because it can still be focused by human eyes: not seen, but still able to damage the retina if a strong beam hits the eye. The lower-frequency 1550 nm light has lower resolution because of its longer wavelength, but can be sent at a higher power level because it is absorbed by the eye before reaching the retina.
As frequency decreases, wavelength increases and resolution decreases.
So once you get down to the frequencies of radar, you end up with blurry pictures even from high-resolution radars: images such as those released by the US Defence Department of UFOs, or UAPs as they call them, which remain quite blurry despite the equipment being extremely sophisticated.
Impact of sensors on car design:
The other impact of changing frequency is that the photons can pass through different materials, enabling transmission through fog and, once the frequency is low enough, even through walls.
| Sensor | Cover material |
|---|---|
| Camera | Glass transparent to our eyes – laminated soda-lime glass used in windscreens or other windows. |
| Radar | Non-metallic, typically plastic panels or badges |
| 905nm Lidar (flash) | Polycarbonate (often like purplish or bluish looking glass), but not windscreen glass. |
| 1550 nm Lidar (flash) | Fused silica or Borosilicate glass, impractical for an entire windscreen for reasons including cost, safety & manufacturing |
| 1550 nm Lidar (scanning) | As above, but mounted to allow a wide view (typically 360°). |
| Ultrasonic | Mounted flush with car surface and covered by paint. |
At the same time, this can also make things invisible to a sensor, and it even changes what can be used as a housing over the front of the higher-frequency lidar. You end up with a purplish, bluish, almost glass-like covering, although optical glass, as used in windscreens, tends to be quite opaque even to this frequency. So LIDARs can’t normally be mounted inside the vehicle looking through the windscreen.
The higher-frequency 905 nanometre LIDAR is particularly bad at going through most glass, so you end up with covers that are polycarbonate or acrylic and have an almost dark blue-purplish look. Then for the 1550 nanometre LIDARs you have a different covering again, which can be specialised glass or even exotic materials like germanium. But in the end, the main point is that you can’t just use ordinary optical glass.
Interference?
The other property of both LIDAR and radar that’s important is that each operates at one very specific frequency, effectively produced with the equivalent of a laser, even when spread out by a lensing system. Using one specific frequency, with a slight random variation between units and randomly allocated timing between pulses, allows multiple cars on the highway to all send out their pings, like the flashes of a flashbulb, without interfering with each other. With randomly allocated frequencies and random intervals between pulses, as many as 400 cars can be cruising down the highway, sending out their little pings or flashes, without jamming or interfering with each other, so the whole system still works.
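As a rough illustration of why random pulse timing lets many emitters coexist, here is a toy simulation (every number, such as the pulse length, frame length and trial count, is an assumed illustrative value, not a specification from the video, and it ignores the extra protection from frequency separation):

```python
# Toy Monte Carlo: many emitters fire short pulses at random times; how often
# does any other pulse land in the window where "our" echo is expected?
import random

def collision_rate(n_cars=400, frame_s=0.001, pulse_s=5e-9, trials=2000):
    hits = 0
    for _ in range(trials):
        ours = random.uniform(0, frame_s)
        # A collision if any other car's pulse falls within one pulse-width of ours.
        if any(abs(random.uniform(0, frame_s) - ours) < pulse_s
               for _ in range(n_cars - 1)):
            hits += 1
    return hits / trials

print(collision_rate())  # typically well under 1% of frames are affected
```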
Using the data: Building a world view
Raw Data vs World View.

But all this is just how the sensors work; what is more important is how the data from the sensors can then be used to build a view of the real world. What camera systems try to emulate is how we build a world view: turning data from visible-light sensors into a model of the real world.

Picture the room shown here. This is just an AI image and there’s no real room, but our brain still turns the dots that our eyes detect, in the cone sensors arranged across the retina, into an image made not of dots but of surfaces, and those surfaces are in turn translated into objects. Everything we see when we look around is converted by our brain, from the dots our eyes detect, into objects. Each object has attributes: we see a table and we know its characteristics; can we lean on the table? All of these things we assume based on our knowledge of those objects. There are chairs: can we sit on a chair? Can we pick up a chair? What would happen if we walked into a chair? All of those answers come not from the image, but from the knowledge in our brain that we reference, triggered just by the dots our eyes detect.

Even with just an image without any depth information, the world view our brain creates includes depth. Our brain is only able to do this accurately because it recognises every object, enabling it to determine relative size and distance. This is only possible with images where there is an understanding of every object, and things like Escher drawings illustrate the limitations.
It’s said that at any one time we only see an image about the size of our thumbnail held at arm’s length, yet we build up this complete world view around us from the sensors in our eyes.

Colour Imaging: The Richest data source.
Our eyes work with three wide colour bands, and these colour bands overlap; there aren’t even clean, separate red, green and blue bands. As you can see on the chart here, the red band actually has two sensitivity peaks at different frequencies, so we see the red and green combination as one colour, and at the high-frequency extreme we see a combination of red and blue rather than blue by itself or blue combined with green, because we never actually see blue truly by itself.
The key here is that it’s not just the raw data from the sensors; it’s the ability of the computing systems to turn that raw data into a picture of the real world that determines how well sensors can work in allowing artificial intelligence to build a view of the real world, for self-driving, for robots, or for anything else. That’s why all these sensors that we’re building into robot cars and robot everything are becoming more and more relevant to what will shape the world in coming decades.

Now, there’s a philosophy that we can rely entirely upon visible light, because the human mind relies almost entirely on visible light. There are a whole lot of other senses, as listed here, but the most important in finding our way around, for most of us, is visible light, and we manage to do it quite well. So can cars do the same, or could we have even better navigation by adding sensors that humans don’t have, allowing cars to do things even better than humans do?
From data to building a world view.
(ultra diagram)
So how do these sensors build their picture of the real world? Start with the simplest: the ultrasonic sensors, which really only get one reading, namely how far away the nearest thing is. But there’s a grid of ultrasonic sensors, or at least a line at the front and a line at the back of the car, enabling the data from different sensors to be combined. This means you can do triangulation: if you see a signal in two sensors, it could be one object between the two, or it could be two different objects. How do you work out which is which?

One of the main ways to differentiate is to use the fact that you have another sense: how far the car moves. You can combine the movement of the car with the change in distance to the object, and then, by a geometric calculation that is basically Pythagoras’ theorem, you can work out that the object must be at an angle. If it were straight behind, then for every centimetre closer you got, the object would become a centimetre closer. If it’s at an angle, the steeper the angle, the smaller the change in measured distance will be, because a large part of the distance is how far the object is to the side of the sensor rather than directly behind it.

So combining the fact that you have several sensors with this other data allows building a better picture of the world. It’s not a great one, but it’s enough to give an audible warning of close objects when parking. That same data can feed into the car’s computers, which have other data, to enable a complete parking solution driven by the car itself, but you’re not really going to park a car on those sensors alone, which is where we get into LIDAR and RADAR, combined with the data from cameras.
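The Pythagoras-style reasoning above can be sketched as follows, assuming a single stationary object and a car moving in a straight line (the function name and example numbers are purely illustrative, not from the video):

```python
# Sketch of combining the car's known movement with the change in echo distance
# to estimate how far to the side an object sits. Assumes one stationary object
# and straight-line car motion.
import math

def locate_object(r1: float, r2: float, moved: float):
    """r1, r2: echo distances before and after the car moves `moved` metres
    towards the object's plane. Returns (along_track, lateral) offsets relative
    to the sensor's second position."""
    # r1^2 = (x + moved)^2 + y^2 and r2^2 = x^2 + y^2  =>  solve for x, then y.
    x = (r1**2 - r2**2 - moved**2) / (2 * moved)
    y = math.sqrt(max(r2**2 - x**2, 0.0))
    return x, y

# Object 1 m behind and 0.5 m to the side: range shrinks by less than the car moved.
print(locate_object(math.hypot(1.5, 0.5), math.hypot(1.0, 0.5), 0.5))
# -> approximately (1.0, 0.5)
```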
Again, all of ultrasonic, LIDAR and RADAR send out clicks. LIDAR and RADAR are basically the same apart from the frequency, but that frequency makes quite a difference to how much resolution you can get, what materials the clicks can pass through, and how far the clicks can travel and still be detected above the background.
Detecting clicks always works on the basis that some of that type of signal may already be present in the environment, but you expect a jump in the size of the signal when the echo of the click comes back. So if there is already sound at 40 kilohertz, you’ll still hear a jump in the amount of 40 kilohertz following the click going out from the ultrasonic sensor. Likewise, even though infrared is always present in the environment, a LIDAR will see a jump in the amount of its frequency directly corresponding to the reflection of its clicks. As with ultrasonic sensors, the idea is to build an image not just of one point but of the complete environment.
So how do you build an image that’s much more like the one we get from our eyes, with a whole lot of recorded points?
Scanning
Each individual ultrasonic sensor only gives a single-point reading. A single-point reading from a LIDAR or a RADAR would likewise require a whole lot of LIDARs and RADARs to build up a world view, and that is actually done: cars have an array of sensors around the car, and both LIDAR and RADAR systems sometimes send out many beams at once. They typically also use those beams to scan, which is the next technique. The scanning technique is where you move the beam. By moving the beam you can have a very focused beam, very much like the laser beam used in a distance measurement sensor, like the one I have here.
Imagine pointing the dot of the distance measurement sensor at one point and then scanning, moving the sensor gradually across the scene in front of you, then the next line, then the next line. That is the scanning approach. It means the beam, because it’s focused, can work at a very long distance and can build up an image of something far away, but it’s slow and takes a long time to scan. The next step is to have several of those beams, so you’re scanning more than one at once. This is how the LIDARs work that look like a fez hat, or an emergency beacon whose flashing you can’t see, mounted on top of some self-driving cars. This scanning style of LIDAR does allow picking up things at a greater distance.
The other approach is, instead of sending out one dot, to send out a broader beam that covers a whole area at one time. Then you have to analyse the data coming back in order to work out what is how far away and where it is in the image. This has two disadvantages. One is that the further away the object you hit, the weaker the beam will be, because it spreads out as it travels.
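That weakening with distance is essentially the inverse-square law: the flash spreads over an area that grows with the square of the distance. A tiny sketch (illustrative only, ignoring reflectivity and atmospheric absorption):

```python
# Simplified illustration: a diverging flash loses power density with the
# square of distance, so echoes from distant objects are much weaker.

def relative_illumination(distance_m: float, reference_m: float = 10.0) -> float:
    """Power per unit area at `distance_m`, relative to `reference_m`."""
    return (reference_m / distance_m) ** 2

for d in (10, 50, 100, 200):
    print(d, "m ->", relative_illumination(d))
# 10 m -> 1.0, 50 m -> 0.04, 100 m -> 0.01, 200 m -> 0.0025
```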
Distance + Direction + Doppler (diagram)
| Feature | Radar “Scanless” Imaging (MIMO/SAR) |
|---|---|
| Mechanism | Measures time delay (range) and phase shift (velocity/angle) of reflected radio waves. |
| Output | A 3D/4D point cloud (Range, Azimuth, Elevation, Velocity). |
| “Pixels” | Virtual points calculated by a processor using data from an array of antennas. |
| Real-time | Requires significant computation to “form the image” from the raw data. |
The second problem is working out, from the data coming back, where the actual reflection or echo came from. There are two ways of working that out.
One, as used in simple speed radars, is to work out the angle from which the data comes back at a given time. A speed radar works on a couple of things. The echo of the click coming back will be changed in frequency by the Doppler effect, giving the speed. The second thing is that it can measure the angle from which the reflected signal is received. Now, these simple radars only record one angular piece of data at a time, so if there are two cars travelling at identical speeds coming towards it, it’s difficult to determine which is which; but if two cars are travelling at identical speed, then both of them are either speeding or not speeding. If the cars are travelling at different speeds, you’ll get two different signals at different frequencies and you can work out which one is the speeding car.
You can improve this by collecting the multiple echoes that result from reflections from different objects at different distances, recording angle, speed and distance for each of the different moving objects. This does collect multiple points, and it is also how most millimetre-wave car radars work, but it’s a very simplistic view of the world: it records the speed and distance of objects in motion, but not what those objects are.
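The Doppler part of the speed radar described above can be sketched with the standard radar relation, frequency shift ≈ 2 × speed ÷ wavelength (the factor of 2 because both the outgoing and returning paths are shortened); the numbers below are just examples:

```python
# Doppler shift of a radar echo from an object approaching at speed v:
#   f_doppler ~ 2 * v * f_radar / c

C = 299_792_458  # m/s

def doppler_shift_hz(speed_m_s: float, radar_freq_hz: float) -> float:
    return 2.0 * speed_m_s * radar_freq_hz / C

def speed_from_shift(shift_hz: float, radar_freq_hz: float) -> float:
    return shift_hz * C / (2.0 * radar_freq_hz)

shift = doppler_shift_hz(30.0, 77e9)         # car approaching at 30 m/s (108 km/h)
print(round(shift))                          # ~15,400 Hz shift on a 77 GHz radar
print(round(speed_from_shift(shift, 77e9)))  # recovers ~30 m/s
```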
Imaging – ToF Sensors.
ToF Image Sensors (The Core Technology)
The sensor is a 2D array of pixels, where each pixel is independent and can measure how long it takes for the laser light to travel to an object and back. There are two main technical approaches to achieving this:
ToF type 1: Direct Time-of-Flight (dToF) Sensors (as used for self driving)
- How they work: They directly measure the time delay between the emission of the laser pulse and the detection of the returned photon. Extremely precise timing circuitry within each pixel is required.
- Key Technology: Single-Photon Avalanche Diodes (SPADs) or Avalanche Photodiodes (APDs) for each pixel.
- SPADs are so sensitive they can detect a single photon. When a photon hits the sensor, it triggers a massive, easily detectable electrical avalanche.
- The timing circuitry (a Time-to-Digital Converter or TDC) records the exact time this event happened. By repeating this process over thousands of rapid laser pulses, the sensor builds a precise histogram of return times for each pixel, calculating the distance with high accuracy (see the sketch after this list).
- Advantages:
- Very accurate and capable of very long ranges.
- Less susceptible to multipath interference (from light bouncing off multiple surfaces).
- Performance is good in ambient light because of triggering by the precise laser pulse timing.
- Example: SPAD-based dToF sensors used in Apple’s LiDAR Scanner on iPhones and iPads.
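The histogram step referenced in the dToF notes above can be sketched as a toy simulation: accumulate many noisy per-pulse return times into bins and take the peak bin as the round-trip time. All the parameters (bin width, timing jitter, false-trigger rate) are made-up illustrative values:

```python
# Toy sketch of the dToF histogram idea: many repeated pulses, each giving a
# noisy (or occasionally spurious) return time; the histogram peak gives range.
import random
from collections import Counter

C = 299_792_458
TRUE_DISTANCE_M = 42.0
TRUE_DELAY_S = 2 * TRUE_DISTANCE_M / C
BIN_S = 1e-9  # 1 ns bins -> roughly 15 cm of range per bin

def one_return() -> float:
    if random.random() < 0.2:                  # 20% ambient/false triggers
        return random.uniform(0, 1e-6)
    return random.gauss(TRUE_DELAY_S, 0.5e-9)  # real echo with timing jitter

histogram = Counter(int(one_return() / BIN_S) for _ in range(5000))
peak_bin = max(histogram, key=histogram.get)
estimated_distance = (peak_bin + 0.5) * BIN_S * C / 2
print(round(estimated_distance, 2))            # close to 42.0 m (within one bin)
```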
ToF type 2: Indirect Time-of-Flight (iToF) Sensors (for uses such as in-cabin sensing, and in all but high-end smartphones)
- How they work: These sensors do not measure the time delay directly. Instead, they measure the phase shift between the emitted laser light (which is modulated as a continuous wave) and the returned signal.
- Process: The laser emits a continuous, high-frequency amplitude-modulated light wave. The returning light has a phase shift relative to the emitted light. The sensor measures this phase shift in each pixel, which correlates directly to the distance: distance = (speed of light × phase shift) / (4 × π × modulation frequency). (A worked example follows after this list.)
- Advantages:
- Typically lower cost and less complex pixel design than dToF.
- Excellent for shorter ranges and high-resolution depth maps.
- Disadvantages:
- More susceptible to multipath interference and can have ambiguity over longer distances.
- Performance can degrade under very strong ambient light (e.g., bright sunlight).
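Here is the worked example of the iToF phase-to-distance formula promised above (the 20 MHz modulation frequency is an assumed illustrative value). Note how the phase wraps every 2π, which is the ambiguity over longer distances mentioned in the disadvantages:

```python
# iToF: distance from the measured phase shift of a modulated light wave.
#   distance = (c * phase_shift) / (4 * pi * modulation_frequency)
# The phase wraps every 2*pi, giving an unambiguous range of c / (2 * f_mod).
import math

C = 299_792_458

def itof_distance(phase_shift_rad: float, f_mod_hz: float) -> float:
    return C * phase_shift_rad / (4 * math.pi * f_mod_hz)

def unambiguous_range(f_mod_hz: float) -> float:
    return C / (2 * f_mod_hz)

f_mod = 20e6                                         # 20 MHz modulation (illustrative)
print(round(itof_distance(math.pi / 2, f_mod), 3))   # ~1.874 m for a 90-degree phase shift
print(round(unambiguous_range(f_mod), 2))            # ~7.49 m before the phase wraps
```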
| Feature | ToF “Flash” Imaging (RADAR/LiDAR) | Radar “Scanless” Imaging (MIMO/SAR) |
|---|---|---|
| Mechanism | Measures “Time of Flight” on a focal plane of pixels. | Measures time delay (range) and phase shift (velocity/angle) of reflected radio waves. |
| Output | A direct 3D depth map image. | A 3D/4D point cloud (Range, Azimuth, Elevation, Velocity). |
| “Pixels” | Physical sensors (Single-Photon Avalanche Diodes). | Virtual points calculated by a processor using data from an array of antennas. |
| Real-time | Truly instantaneous capture. | Requires significant computation to “form the image” from the raw data. |
To see more of the world, and to collect data from the whole scene, requires a way to collect data at a whole lot of points in an image; you need something like the image sensor used in a camera. But now you’re focusing frequencies that aren’t visible light, which means you need very different lenses, and, as explained in the chart on screen here, the different frequencies require different techniques to focus the returning beam onto an image sensor, one that senses the exact time the signal is received rather than, as with visible light, recording the intensity.
Now, this recording of the time allows building up a 3D depth map. That depth map is very different from the imaging we do with our eyes, but it’s yet another source of data that we can combine with visible-light data to build that picture of the world.
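To make the difference from a normal image concrete, here is a minimal sketch of turning a per-pixel depth map into 3D points using a simple pinhole-camera model (the focal length, image size and depth values are all assumed, purely for illustration):

```python
# Sketch: converting a depth map (metres per pixel) into 3D points with a
# pinhole model. Intrinsics (fx, fy, cx, cy) are illustrative values.

def depth_to_points(depth, fx=500.0, fy=500.0, cx=None, cy=None):
    rows, cols = len(depth), len(depth[0])
    cx = cols / 2 if cx is None else cx
    cy = rows / 2 if cy is None else cy
    points = []
    for v in range(rows):
        for u in range(cols):
            z = depth[v][u]
            if z <= 0:          # zero / negative = no return for that pixel
                continue
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

# A tiny 2x3 "depth map": everything 10 m away except one missing return.
tiny_depth = [[10.0, 10.0, 10.0],
              [10.0,  0.0, 10.0]]
print(depth_to_points(tiny_depth))
```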
Types of Radar & Lidar. (table)
Summary of Radar Types
| Radar Type | Key Principle | Key Features | Pros | Cons | Primary Applications |
|---|---|---|---|---|---|
| Pulse Radar | Transmits short, high-power pulses and listens for echoes between them. | Simple concept, long range, high peak power. | Excellent long-range performance, good range resolution. | Requires high peak power, susceptible to interference, poor velocity resolution. | Long-range aviation, weather monitoring, defense. |
| FMCW (Frequency Modulated Continuous Wave) | Transmits a continuous wave whose frequency changes linearly over time (a “chirp”). Compares transmitted vs. received frequency. | Most common automotive type. Continuous low power, measures range and velocity simultaneously. | Low power, immune to interference from other FMCW radars, good range & velocity resolution. | More complex signal processing than pulse radar. | Automotive (Adaptive Cruise Control, Blind Spot Monitoring), altimeters. |
| MIMO (Multiple-Input, Multiple-Output) FMCW | Uses multiple transmitters and receivers with orthogonal signals (e.g., different chirps) to create a virtual array. | A subtype of FMCW that enhances angular performance. | High angular resolution (can distinguish closely spaced objects) without a large physical antenna. | Significantly increased data and computational complexity. | Advanced Automotive (imaging radar for high-level autonomy), 5G/6G communications. |
| SAR (Synthetic Aperture Radar) | Uses the movement of the radar platform (e.g., the car) to synthesize a large antenna aperture. | A processing technique applied to other radar types (like FMCW). | Extremely high angular resolution, creating detailed 2D/3D images of the environment. | Requires significant movement and immense computational power for real-time processing. | R&D for Autonomous Vehicles (environment mapping), aerospace, earth observation. |
| Doppler Radar | Measures the frequency shift (Doppler effect) of a returned wave to calculate velocity. | Often uses a continuous, single-frequency wave. | Extremely accurate velocity measurement, simple and cheap. | Cannot measure range (only velocity). | Speed guns, motion detection, sports analytics. |
Key points for Self driving cars on radar types:
- FMCW is the workhorse of modern automotive radar, found in most ADAS features today.
- MIMO-FMCW is the next evolution, enabling higher-resolution “imaging radar” crucial for self-driving cars to understand complex scenes.
- SAR is an advanced technique on the cutting edge, turning a radar into a powerful imaging sensor by using the vehicle’s motion.
- Pulse Radar is largely obsolete for standard automotive use due to the advantages of FMCW.
- Doppler Radar is too limited on its own but its principle is integrated into FMCW operation.
Summary of LiDAR Types
| LiDAR Type | Scanning Method | How it Measures Distance | Pros | Cons | Common Applications |
|---|---|---|---|---|---|
| Mechanical Spinning | Rotating assembly of lasers/receivers spins 360°. | Direct ToF (dToF): Measures time delay of a laser pulse. | 360° field of view, proven technology, long range. | Bulky, expensive, lower reliability due to moving parts. | Early self-driving cars (Waymo), robotics, mapping. |
| MEMS (Micro-Electro-Mechanical Systems) | A tiny, solid-state micro-mirror tilts to steer laser beams. | Mostly dToF. | Compact, more reliable than mechanical, cost-effective for mid-range. | Limited field of view, sensitive to vibration, micro-moving parts. | Automotive (in production cars), consumer electronics, robotics. |
| Optical Phased Array (OPA) | Electronically steers laser light with no moving parts using an array of antennas. | dToF. | True solid-state: ultra-reliable, fast scanning, no moving parts. | Complex, currently high cost, lower maturity, side lobes cause noise. | Advanced R&D for autonomous vehicles, future consumer tech. |
| Flash | No scanning. Illuminates the entire scene in a single, wide pulse like a camera flash. | Typically dToF, but can be iToF. | True solid-state, extremely robust, immune to vibration, no moving parts. | Short range (eye safety power limit), lower resolution in sunlight. | Short-range applications: smartphones (AR), robotics, in-cabin sensing. |
| Indirect ToF (iToF) | Can be combined with various methods (e.g., a static flash or a simple scanner). | Indirect ToF (iToF): Measures phase shift of modulated light. | High resolution at short range, can be made very compact and cheap. | Short to medium range, susceptible to multi-sensor interference and sunlight. | In-cabin monitoring, gesture control, smartphones (autofocus, portrait mode), robotics. |
Key points for Self driving cars on Lidar types:
- Lidar can avoid “optical illusions” and allow dealing with unrecognisable objects.
- Mechanical Spinning was the pioneer but is being phased out of production vehicles due to cost and reliability.
- MEMS is a leading hybrid solid-state technology currently in production cars.
- OPA represents the future goal of pure solid-state performance but is still in development.
- Flash 905nm is ideal for short-range needs like depth mapping at close distances when parking.
- Flash 1550nm is ideal for dealing with poor visibility and for slightly longer-range needs than 905nm.
- iToF is not typically a primary driving sensor but is dominant for interior and very short-range 3D sensing due to its low cost and high resolution.
The industry trend is firmly moving from mechanical spinning toward solid-state solutions (MEMS, OPA, Flash) for their reliability, compact size, and potential for lower cost.
A key point here is that not all radar is the same, and not all lidar is the same. The systems used in military aircraft, with scanning radars that send out multiple beams at once, give us those amazing pictures of UFOs, still blurry due to the lower frequency of radar, but pictures of a whole scene nonetheless, not just the single dot you get from a speed-camera radar.
Similarly, lidars are nowhere near all the same. You have the scanning lidars that spin on the roof, and what is called flash lidar, which sends out a signal like a flash bulb going off: it illuminates a whole area and detects the reflections coming back from that whole area on a sensor. Those lidars operate at two main different frequencies.
One frequency, closer to visible light, gives higher resolution but is still picked up by our eyes, so it is restricted in intensity so that it doesn’t cause eye problems. The lower frequency can be sent further and passes through atmospheric conditions that reduce the transmission of visible light, but has lower resolution, and the resolution of the lidar is another huge factor in the world view you can build up. The bottom line is that none of these other sensors is a replacement for vision.
With these other sensors we don’t have the database needed to look up objects and work out what they are, but we do get additional data. That additional data helps avoid optical illusions. Even the human brain, with all its training over a lifetime, can be fooled by something like an Escher drawing. Having this other data enables us not only to perceive things we can’t perceive with human senses, but also to avoid the potential for optical illusions, which could be very dangerous in a self-driving scenario. And remember that the only sensor that can read signs, and the world as we humans have created it, is visible light.
Lidar, radar and ultrasonic all send out one single exact frequency, which means they have weak spots: things that don’t reflect that exact frequency well. They also build up an image that’s far more basic than even a monochrome black-and-white image, because monochrome still covers the whole span of visible light frequencies, while these sensors use just one specific frequency. You could create signs where one part reflects that frequency and another part doesn’t, but such signs don’t exist in the real world today.
So a lot of the things that we need to pick up can still only be picked up by visible light, but the other sensors can provide backup. Elon Musk once said that such backup can be a crutch: allowing too much reliance on the crutch and not enough ability to map the dots that we get from cameras into real-world objects.
That is the background, but is it all just a crutch as cameras alone are enough?
Do we need to recognise real-world objects, if we already know their shape and how fast they are moving?
Well, for a lot of things, we do need to know. That crutch question is one of the differences between the experience of the first time you get into a car with radar cruise control and say “wow, this seems so close to self-driving”, and what you realise the more you use it: there are situations it just can’t cope with. The more complex the situation, the more data is relied upon, and the bigger the world view needed to handle that complex situation, and that’s where these sensors come into play.
A comparison of different systems and where things are at will be covered in a separate video.
-.-.-.-.-.-.-.-.-.-.
Background.
This section is raw material from research for the video, and there is more here than is currently displayed, so comment on the video if you would like me to get more of it into a somewhat clean format so it can be displayed.
Flash Lidar vs Scanning Lidar.
Materials.
what materials are used to house lidars used in cars to allow the casing to be near transparent?
Of course. This is an excellent question that gets into the intersection of materials science and sensor technology.
The housing or “casing” for an automotive LiDAR sensor is called a radome (a portmanteau of radar dome). The primary function of the radome is to protect the delicate internal components (lasers, mirrors, receivers) from the environment (rain, dust, UV, scratches) while being as transparent as possible to the specific wavelengths of light the LiDAR uses.
No material is perfectly transparent, so the choice is a careful balance of optical, mechanical, and economic properties. Here are the most common materials used:
1. Polycarbonate (PC)
This is the most common and economical choice for many automotive LiDAR systems.
- Why it’s used: It’s a tough, impact-resistant engineering plastic that is easy to injection mold into complex shapes. It offers good clarity and is significantly less expensive than glass.
- Transparency: It has good transmission for the common 905 nm wavelength LiDAR systems. However, it absorbs more light at the higher 1550 nm wavelength, which can be a drawback.
- Downside: It’s soft and scratches easily, so it often requires a hard coat. It can also yellow over time due to UV exposure, which would degrade its performance. Modern polycarbonates and coatings are heavily optimized to mitigate this.
2. Polymethyl Methacrylate (PMMA) – Acrylic
Often known by brand names like Plexiglas or Lucite.
- Why it’s used: It has higher optical clarity and better resistance to UV yellowing compared to standard polycarbonate. It’s also a cost-effective polymer.
- Transparency: Excellent transmission for visible light and good for 905 nm. Its transmission at 1550 nm is poor.
- Downside: It is much more brittle and has lower impact resistance than polycarbonate, making it less suitable for applications where stone chips or impacts are a concern.
3. Glass (often Fused Silica or Borosilicate)
Used in higher-performance and more expensive LiDAR systems.
- Why it’s used: It is the gold standard for optical clarity and durability. It has excellent transmission across a very wide spectrum, including both 905 nm and 1550 nm wavelengths. It is highly resistant to scratching, chemicals, and UV yellowing.
- Transparency: Near-perfect for all common LiDAR wavelengths.
- Downside: It is heavy, expensive, and can be brittle (shattering on impact). The manufacturing and integration processes are more complex than for plastic housings.
4. Specialty Coatings: The Critical “Secret Sauce”
The base material is only half the story. To make the radome truly effective, it is always coated with multiple specialized layers. These coatings are crucial for performance:
- Anti-Reflective (AR) Coatings: These are applied to both the inner and outer surfaces. They are wavelength-specific, designed to minimize the reflection of the LiDAR’s laser wavelength (e.g., 905 nm or 1550 nm). This maximizes the amount of laser light that passes through the radome in both directions (outgoing pulse and returning reflection), increasing signal strength and range.
- Hydrophobic & Oleophobic Coatings: These create a surface that repels water and oils. This causes water to bead up and roll off (the “lotus effect”), keeping the surface clear of rain and droplets that could distort the laser beam.
- Anti-Fog Coatings: Prevents condensation from forming on the inside of the radome.
- Hard Coatings: Especially critical for plastic radomes, these ceramic or UV-cured coatings are applied to drastically improve scratch resistance.
Summary Table
| Material | Pros | Cons | Best For |
|---|---|---|---|
| Polycarbonate (PC) | Tough, impact-resistant, low cost, moldable | Scratches easily, can yellow with UV, poorer 1550nm transmission | Economical, high-volume 905nm LiDAR systems |
| Acrylic (PMMA) | Excellent clarity, good UV resistance, low cost | Brittle, poor impact resistance, poor 1550nm transmission | Applications where impact risk is low |
| Glass (Fused Silica) | Superior optical clarity, scratch-resistant, inert, excellent for all wavelengths | Heavy, expensive, can shatter | High-performance, long-range LiDAR (both 905nm & 1550nm) |
Key Consideration: Wavelength Matters
The choice of material is heavily influenced by the LiDAR’s operating wavelength:
- 905 nm LiDAR: Can use Polycarbonate, Acrylic, or Glass. PC is the most common due to the cost/performance balance.
- 1550 nm LiDAR: Requires Glass (or other specialized crystals like Germanium, which are prohibitively expensive for cars). Most plastics absorb too much light at this wavelength, making them impractical.
In conclusion, while you might see a simple “plastic” cover, it is almost always a highly engineered polycarbonate or glass element with a sophisticated suite of nano-scale coatings tailored to the specific sensor’s needs. This allows it to be “near transparent” to the laser’s eye while being tough enough to survive on a car.
i assume the “radome” term is not used for flash lidar units
You’ve made an excellent and astute observation. Your assumption is correct.
The term “radome” is technically specific to radio frequencies (radar) and is not the standard term for the housing over optical systems like LiDAR, including Flash LiDAR.
The more accurate and general term for the protective window over a LiDAR sensor (or any optical camera system) is an “optical window” or “aperture window.”
However, in the practical, cross-disciplinary world of automotive sensors, you will very often hear engineers and manufacturers casually use the term “radome” for any sensor housing, be it for radar, LiDAR, or cameras. It has become a convenient shorthand for “the protective dome-like cover in front of the sensor.”
But for precision, especially when discussing the material properties required, here’s the breakdown:
Why “Optical Window” is the Preferable Term for Flash LiDAR
- Fundamental Physics:
- Radar uses long-wavelength radio waves (mmWave). A Radome is designed to be transparent to these RF wavelengths. Its design focuses on minimizing signal attenuation and distortion caused by the material’s dielectric properties.
- LiDAR (including Flash LiDAR) uses short-wavelength infrared light (NIR). An Optical Window is designed to be transparent to specific optical wavelengths (e.g., 905 nm or 1550 nm). Its design focuses on maximizing transmittance and minimizing optical distortion (like lensing effects), scattering, and reflection.
- The Criticality of Coatings for Optical Windows:
While both need specialized coatings, the coatings on a LiDAR window are even more critical and complex due to the physics of light.
- Anti-Reflective (AR) Coatings: These must be tuned precisely to the laser’s wavelength. Even a few percent of reflection loss on both the outgoing and return trip can significantly degrade the sensor’s range and performance.
- Field of View (FoV): A Flash LiDAR illuminates a wide area in a single “flash.” Its optical window must maintain clarity and minimal distortion across this entire wide field of view, which is a different challenge than a scanning LiDAR’s window, which might only need clarity in the specific direction the beam is pointing at any given moment.
Material Considerations Specific to Flash LiDAR
The material requirements for a Flash LiDAR’s optical window are largely the same as for any other LiDAR type (as detailed in the previous answer). However, one aspect can be even more critical:
- Optical Homogeneity: The material must be perfectly uniform. Any bubbles, striations, or variations in thickness can act like a lens, distorting the carefully engineered “flash” pattern of light. This can create errors in the perceived shape and location of objects.
- Polycarbonate is still a top contender due to its toughness and moldability into the complex shapes often needed to integrate seamlessly into a car’s body.
- Glass is used in high-end systems for its superior optical properties and scratch resistance, but its cost and weight are drawbacks.
In summary:
You are right to question the use of “radome.” While it’s commonly used as industry slang, the technically precise term for the housing over a Flash LiDAR (or any LiDAR) is an optical window or aperture window. This distinction highlights the critical difference between manipulating radio waves and light.
could a car windscreen be made of Fused Silica or Borosilicate?
That’s a fantastic and insightful question. While fused silica and borosilicate glass have exceptional properties ideal for sensors like LiDAR, using them for an entire car windscreen is highly impractical and not feasible for a multitude of reasons, primarily centered on cost, safety, and manufacturing.
Here’s a breakdown of why these materials are perfect for a sensor window but impossible for a windscreen:
1. Cost: Prohibitively Expensive
This is the single biggest reason.
- Car Windscreen (Laminated Glass): Made from soda-lime glass, which is incredibly cheap to produce in massive sheets. A windscreen might cost an automaker a few hundred dollars.
- Fused Silica / Borosilicate: These are specialty glasses. Fused silica is one of the most expensive optical glasses. Producing a single, flawless piece the size of a windscreen (over 1.5 m²) would be astronomically expensive—potentially tens of thousands of dollars. This alone makes it a non-starter for a mass-market consumer product.
2. Safety: It Would Shatter Dangerously
Modern car windscreens are a marvel of safety engineering, and specialty glasses fail these requirements.
- Laminated Safety Glass: A car windscreen is made of two layers of soda-lime glass with a layer of polyvinyl butyral (PVB) plastic sandwiched in between. When impacted, the glass may crack, but the plastic layer holds the shards together, preventing them from flying into the passenger compartment and maintaining structural integrity to prevent roof collapse in a rollover.
- Fused Silica & Borosilicate: These are very hard and have excellent thermal properties, but they are also brittle. If struck with the force of a pedestrian’s head or a rock, they would not flex like laminated glass. Instead, they would shatter into sharp, dangerous pieces, a catastrophic safety failure.
While you could theoretically laminate fused silica, it would be an engineering nightmare and multiply the already absurd cost.
3. Manufacturing and Forming: Nearly Impossible
- Soda-Lime Glass: Can be easily heated and bent (sag bent) into the complex, compound curves of a modern car windscreen on a production line every few minutes.
- Fused Silica: Has an extremely high melting point and very high viscosity when molten. Forming it into a large, complex, and optically perfect shape like a windscreen is beyond current manufacturing capabilities for volume production. Any minor flaw would create massive optical distortion for the driver.
4. Weight: A Significant Penalty
- Fused silica is denser than standard soda-lime glass. A windscreen made from it would be noticeably heavier, negatively impacting fuel efficiency and vehicle handling.
The Real-World Solution: The “Camera Window” Patch
Your line of thinking is exactly how automakers are solving this problem, just in a more pragmatic way. They aren’t making the entire windscreen out of specialty glass; they are integrating a small, optimized “window” or “patch” for the sensors.
This is most common for cameras mounted near the rearview mirror that enable Advanced Driver Assistance Systems (ADAS) like lane-keeping and automatic emergency braking.
- The Problem: Standard soda-lime windscreen glass can cause optical distortions (like double images or ripples) and has heating elements (defroster lines) that can block the camera’s view.
- The Solution: In a specific, critical area of the windscreen in front of the camera, the manufacturer:
- Uses a higher-quality, more optically neutral glass (sometimes a form of borosilicate or low-iron glass) in that small section during the lamination process.
- Applies a special “frit” pattern (the black enamel dots around the edge) to hide the transition between the two different types of glass.
- Ensures no defroster lines or radio antennas cross this specific “pristine” patch.
This gives the sensor the clear, undistorted view it needs without the impossible cost and safety issues of making the entire windscreen from fused silica.
In conclusion: While fused silica and borosilicate are the perfect materials for protecting a small, expensive LiDAR or camera sensor, their properties make them completely unsuitable for a full car windscreen due to catastrophic cost, safety, and manufacturing constraints. The automotive industry’s clever solution is to seamlessly integrate a small “island” of better glass exactly where the sensors need it.
Focusing Lidar.
Summary Table: Focusing Comparison
| Feature | 905 nm Flash LiDAR | 1550 nm Flash LiDAR |
|---|---|---|
| Transmitter Lens | Standard Glass, Plastic | Germanium, Zinc Selenide |
| Receiver Lens | Standard Glass, Plastic | Germanium, Zinc Selenide |
| Sensor Material | Silicon (Si) | Indium Gallium Arsenide (InGaAs) |
| Key Advantage | Low Cost (uses standard camera components) | High Power & Long Range (better eye safety) |
| Key Disadvantage | Short Range (limited by eye safety) | Very High Cost (exotic materials and sensors) |
RADAR Types.
can imaging without scanning be practical at radar frequencies?
Excellent question that gets to the heart of radar and imaging technology. The direct answer is:
Yes, imaging without scanning is not only practical at radar frequencies, it is a fundamental and widely used technology, most famously in SAR (Synthetic Aperture Radar). However, it works very differently from optical “flash” imaging like a camera.
The key is to understand why a simple “radar camera” that instantly captures a 2D image in a single flash of energy is physically impractical, and how engineers overcome this with clever signal processing.
The Core Problem: The Physics of Wavelength
The challenge is the massive difference in wavelength between light and radio waves.
- Optical Wavelength (LiDAR/Camera): ~900-1550 nanometers (0.0009 – 0.00155 mm). A sensor with millions of pixels (e.g., 10×10 microns each) can be built that is many wavelengths across, allowing it to resolve detail and form an image directly.
- Radar Wavelength (Automotive): ~3.9 mm (77 GHz) to ~39 mm (7.5 GHz). To achieve even a modest angular resolution, a radar antenna would need to be physically enormous.
The Angular Resolution Problem:
The ability to distinguish two close objects is determined by the formula: θ ≈ λ / D
where θ is the resolution angle, λ is the wavelength, and D is the antenna aperture (size).
To get a 1-degree resolution at 77 GHz (λ=3.9mm), you need an antenna aperture of about 22 cm. To get a resolution comparable to a very coarse camera (e.g., 100×100 pixels), you’d need an antenna dish several meters wide, which is impossible on a car.
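A quick check of those numbers using the θ ≈ λ / D relation (a sketch using the values quoted above; the lidar comparison is illustrative):

```python
# Angular resolution: theta ~ lambda / D  (theta in radians).
import math

C = 299_792_458

def aperture_for_resolution(freq_hz: float, theta_deg: float) -> float:
    """Antenna/optic aperture D needed for a given angular resolution."""
    wavelength = C / freq_hz
    return wavelength / math.radians(theta_deg)

print(round(aperture_for_resolution(77e9, 1.0), 3))  # ~0.223 m (~22 cm) at 77 GHz
# For comparison, a 905 nm lidar needs only a tiny fraction of a millimetre:
print(aperture_for_resolution(C / 905e-9, 1.0))      # ~5.2e-5 m
```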
How “Scanless” Radar Imaging Works in Practice
Since we can’t build a giant antenna on a car, we use techniques to synthesize a large antenna. Here are the primary methods:
1. Synthetic Aperture Radar (SAR)
This is the most famous and powerful technique for radar imaging.
- Concept: Use the motion of the radar platform itself (e.g., a car, plane, or satellite) to simulate a massive antenna. The radar sends pulses from different positions along the vehicle’s path. By combining the returning signals with precise knowledge of the platform’s location, a high-resolution 2D image can be synthesized.
- Automotive Application: While traditionally used on satellites and aircraft, SAR is being adapted for cars. A car driving down a road can use its movement to build a high-resolution image of its surroundings—enough to identify guardrails, signs, manhole covers, and even the structure of bridges. This is not for real-time crash avoidance but for creating ultra-high-definition radar maps for localization and automated driving.
2. MIMO (Multiple-Input, Multiple-Output) Radar
This is the technology that enables modern high-resolution automotive radar without moving parts.
- Concept: Instead of one large antenna, use a virtual array of many small antennas. A MIMO radar has multiple transmitting (Tx) and receiving (Rx) antennas. By using coding to distinguish signals from different transmitters, the radar can synthesize a virtual array with a much larger aperture.
- Example: A radar with 3 Tx and 4 Rx antennas can uniquely distinguish 3 x 4 = 12 signal paths, creating a virtual array of 12 elements (see the sketch after this list).
- Modern high-end automotive radars might have 12 Tx and 16 Rx channels, creating a virtual array of 192 elements, all on a chip smaller than a credit card.
- This is “Scanless”: A digital beamforming MIMO radar can simultaneously receive signals from all directions in its field of view. It digitally processes these signals to form a “point cloud” image of the environment without any mechanical scanning or electronic scanning of a single beam. It’s the closest equivalent to a “flash” radar in practice.
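A small sketch of the virtual-array idea (the antenna spacings are illustrative): each Tx–Rx pair behaves like a single element located at the sum of the transmitter and receiver positions, so 3 Tx × 4 Rx gives 12 virtual elements.

```python
# Sketch of a MIMO virtual array: each Tx/Rx pair behaves like one element of a
# larger "virtual" array located at (tx_position + rx_position).
# Spacings below are illustrative, in units of half-wavelengths.

tx_positions = [0, 4, 8]          # 3 transmitters, spaced 4 half-wavelengths apart
rx_positions = [0, 1, 2, 3]       # 4 receivers, spaced half a wavelength apart

virtual_array = sorted(tx + rx for tx in tx_positions for rx in rx_positions)
print(len(virtual_array))         # 12 virtual elements from 3 Tx x 4 Rx
print(virtual_array)              # 0..11: a filled array 12 half-wavelengths wide
```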
3. Passive Radar Imaging
This is a more niche approach where the system doesn’t transmit its own signal but instead listens to reflections of ambient signals (e.g., from FM radio, TV towers, or cellular networks). By using sophisticated processing, it can form an image of the environment. This is not common for automotive applications due to complexity and reliability issues.
Comparison: Radar “Imaging” vs. Optical Imaging
| Feature | Optical “Flash” Imaging (Camera/LiDAR) | Radar “Scanless” Imaging (MIMO/SAR) |
|---|---|---|
| Mechanism | Measures light intensity on a focal plane of pixels. | Measures time delay (range) and phase shift (velocity/angle) of reflected radio waves. |
| Output | A direct 2D intensity image. | A 3D/4D point cloud (Range, Azimuth, Elevation, Velocity). |
| “Pixels” | Physical sensors (photodiodes). | Virtual points calculated by a processor using data from an array of antennas. |
| Real-time | Truly instantaneous capture. | Requires significant computation to “form the image” from the raw data. |
Conclusion
So, is imaging without scanning practical at radar frequencies? Absolutely.
- MIMO Radar provides a “scanless” method for real-time, high-resolution perception on vehicles, creating a dynamic point cloud image of the environment.
- SAR uses the vehicle’s motion to synthesize an enormous antenna, creating extremely high-resolution ground maps.
They don’t work like an optical camera because the physics of their wavelengths won’t allow it. Instead, they leverage arrays of antennas and immense computational power to synthesize an image, making this form of “imaging without scanning” not just practical, but the industry standard for modern automotive radar.
