As the cost of radar and lidar technologies falls, a plethora of new applications emerge. Both the automotive and medical technology sectors have been quick to respond to the opportunities.
Radar, short for Radio Detection and Ranging, operates on a simple principle. Transmitted pulses of radio waves bounce off objects and return as echoes. Detection is provided by the echo itself; the range, or distance, is calculated from the time the echo takes to return.
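To make the arithmetic concrete, here is a minimal Python sketch of pulse-echo ranging; the one-microsecond delay is an illustrative figure, not a value taken from any particular system.

```python
# Pulse-echo ranging in its simplest form: the target range is half the
# round-trip distance travelled by the transmitted pulse.

C = 299_792_458  # speed of light in m/s

def range_from_echo_delay(delay_s: float) -> float:
    """Target range in metres for a given round-trip echo delay in seconds."""
    return C * delay_s / 2

# An echo arriving 1 microsecond after transmission (illustrative value)
print(range_from_echo_delay(1e-6))  # ~150 m
```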
Radar systems operate over a vast range of frequency bands, from about 5 MHz to 300 GHz. The IEEE defines a classification system based on standard letter designations for the various bands: HF, VHF, UHF, L, S, C, X, Ku, K, Ka, V, W, and mm. The latest version of the standard is IEEE 521-2019.
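For readers who want the letter designations as a quick lookup, the short sketch below tabulates the commonly quoted IEEE 521 frequency boundaries; the standard itself remains the authoritative reference, and the helper function is purely illustrative.

```python
# Letter designations for radar frequency bands. The boundaries below follow
# the commonly quoted IEEE 521 values; the standard itself is authoritative.
RADAR_BANDS_GHZ = {
    "HF":  (0.003, 0.03),
    "VHF": (0.03, 0.3),
    "UHF": (0.3, 1.0),
    "L":   (1.0, 2.0),
    "S":   (2.0, 4.0),
    "C":   (4.0, 8.0),
    "X":   (8.0, 12.0),
    "Ku":  (12.0, 18.0),
    "K":   (18.0, 27.0),
    "Ka":  (27.0, 40.0),
    "V":   (40.0, 75.0),
    "W":   (75.0, 110.0),
    "mm":  (110.0, 300.0),
}

def band_for_frequency(freq_ghz: float) -> str:
    """Return the letter designation covering a given frequency in GHz."""
    for name, (low, high) in RADAR_BANDS_GHZ.items():
        if low <= freq_ghz < high:
            return name
    raise ValueError(f"{freq_ghz} GHz is outside the tabulated bands")

print(band_for_frequency(77))  # W -- typical automotive radar
print(band_for_frequency(24))  # K -- used by some short-range sensors
```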
Microwave radar, covering roughly 300 MHz to 30 GHz, can be used for imaging, non-contact measurement of chest motion to monitor breathing, detecting a person’s movement in bed for sleep tracking, and heart rate measurement, amongst other things. In care homes, the elderly may regard radar as less intrusive than cameras, so it can be used for fall detection.
Unlike cameras, radar transmitters and receivers may be built into walls, so the systems do not even need to be visible. The power levels of most of these systems are in the microwatt to milliwatt range.
Autonomous technologies predicted to drive biggest growth
The automotive sector is predicted to be the biggest growth sector for radar in the next few years as advanced driver assist systems (ADAS) and semi-autonomous and autonomous vehicles drive demand.
Today, automotive radar applications are primarily adaptive cruise control, assisted braking and blind-spot detection, but as driving autonomy develops there will be a need for an all-round view of other vehicles and a way of determining how they are moving. For example, are vehicles in the rear-view mirror receding or getting closer, and at what rate?
The building blocks of an automotive radar module are the antenna, a radio frequency (RF) front end, and a digital processing unit. As always, the semiconductor industry is striving for greater integration to reduce the size and cost of chips and modules while increasing their reliability by minimizing the number of components needed to implement the system.
The typical range of automotive frequency-modulated continuous-wave (FMCW) radar systems is up to about 250 meters (roughly 800 feet). At 77 GHz, RF transmitter power levels are low, around +10 to +13 dBm for automotive systems, and several transmitters and receivers can be integrated into a single monolithic microwave integrated circuit (MMIC).
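The ranging arithmetic behind FMCW is straightforward to sketch. The snippet below assumes a linear chirp and illustrative parameters (a 300 MHz sweep over 40 µs), not the settings of any specific automotive MMIC.

```python
# FMCW ranging with a linear chirp: a sweep of bandwidth B over time T turns
# target range into a beat frequency f_b, with R = c * f_b * T / (2 * B).
# Chirp parameters below are illustrative, not those of a specific radar chip.

C = 299_792_458  # speed of light, m/s

def fmcw_range(beat_hz: float, bandwidth_hz: float, chirp_s: float) -> float:
    """Range (m) implied by a measured beat frequency for a given chirp."""
    return C * beat_hz * chirp_s / (2 * bandwidth_hz)

def range_resolution(bandwidth_hz: float) -> float:
    """Smallest range difference (m) the chirp bandwidth can resolve."""
    return C / (2 * bandwidth_hz)

B, T = 300e6, 40e-6              # 300 MHz swept in 40 microseconds
print(range_resolution(B))       # ~0.5 m
print(fmcw_range(12.5e6, B, T))  # ~250 m, the long-range figure quoted above
```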
In radar front ends, more exotic and expensive semiconductor technologies such as silicon-germanium (SiGe) and gallium arsenide (GaAs) are being superseded by complementary metal-oxide semiconductor (CMOS) processes. To date, these front ends have not been combined with microcontrollers to create single-chip systems, but that’s likely to happen soon.
For the time being, designers must work with chipsets: an MMIC for the transceiver and a complementary microcontroller (MCU) or system-on-chip (SoC) for data processing, as in the basic system shown below.
A typical automotive radar architecture
Lidar: greater accuracy, greater speed, different limitations
Radar and lidar systems work in broadly similar ways, but while radar engineers describe their part of the electromagnetic spectrum in terms of frequency, lidar (optical) engineers usually talk about wavelength. The lidar wavelength range sits mostly in the near-infrared, spanning 750 nm to just over 1.5 µm. As with radar, pulsed time-of-flight (ToF) or FMCW techniques can be employed to detect and map objects in 3D.
The greatest advantage of lidar over radar is its resolution, even compared with high-frequency, high-resolution radar.
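A rough diffraction-limit estimate shows why. The sketch below compares the spot size a 77 GHz radar beam and a 905 nm lidar beam would paint at 100 m, assuming illustrative aperture sizes of 10 cm and 2.5 cm respectively; real sensors add optics and array processing on top of this idealized picture.

```python
# Why lidar resolves finer detail than radar: beam divergence scales roughly
# with wavelength / aperture. This is an idealized diffraction-limit estimate
# with illustrative aperture sizes, not a datasheet comparison.

C = 299_792_458  # m/s

def spot_size_m(wavelength_m: float, aperture_m: float, range_m: float) -> float:
    """Approximate width of the spot a beam paints at a given range."""
    divergence_rad = wavelength_m / aperture_m
    return divergence_rad * range_m

radar_wavelength = C / 77e9  # ~3.9 mm at 77 GHz
lidar_wavelength = 905e-9    # 905 nm, a common automotive lidar wavelength

print(spot_size_m(radar_wavelength, 0.10, 100))   # ~3.9 m at 100 m
print(spot_size_m(lidar_wavelength, 0.025, 100))  # ~3.6 mm at 100 m
```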
Lidar is now found in smartphones, and the principle behind it is not too challenging. However, the cost of automotive-grade lidar solutions has been a significant factor limiting adoption. Costs are falling rapidly, though: systems that cost tens of thousands of dollars just a few years ago can now be built for hundreds of dollars.
A $100 lidar module for automotive applications was announced in 2020, a clear demonstration of progress in the technology.
As costs have fallen, the number of applications and potential future applications has grown, particularly for lidar’s 3D imaging capabilities. Applications now include everything from topographic surveys and soil analysis to pollution monitoring for CO2, SO2 and methane gas.
Lidar is now found in drones, robots and other types of machines. Its use in cars and other vehicles is perhaps capturing the most interest from component manufacturers.
Lidar in automotive applications
As with radar, the automotive sector is seen as a key growth opportunity for lidar. In this sector, lidar systems operate at 905 nm or 1,550 nm. The major hardware blocks of a typical lidar system are transmit, receive, beam steering, optics, readout, and power and system management.
Proponents of lidar point to its ability to quickly generate detailed, accurate maps of the surroundings of a vehicle in 3D. They also point out how well it can detect small objects, thanks to its relatively high resolution compared with radar. A recent advance is 4D lidar, delivered by a module that not only establishes the distance to an object and its x, y, z coordinates but can also measure velocity as the fourth dimension.
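The velocity measurement falls out of the Doppler shift of the return. The sketch below shows the basic relationship, with an illustrative 1,550 nm wavelength and Doppler figure rather than values from any particular 4D lidar product.

```python
# Velocity as the "fourth dimension": for an FMCW sensor the Doppler shift of
# the return is proportional to the target's radial velocity. The wavelength
# and shift below are illustrative, not from any particular 4D lidar module.

def radial_velocity(doppler_hz: float, wavelength_m: float) -> float:
    """Closing speed in m/s; a negative shift means the target is receding."""
    return doppler_hz * wavelength_m / 2

# A 38.7 MHz shift at 1,550 nm corresponds to roughly 30 m/s (about 108 km/h)
print(radial_velocity(38.7e6, 1550e-9))
```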
Three main factors have so far restricted more widespread adoption of lidar in the automotive world: cost, mechanical complexity (which is both a cost factor and a maintenance headache), and performance in poor weather conditions.
The mechanical complexity of automotive lidar results from the need for a beam-steering mechanism for both the lidar transmitter (a laser) and its receiver to provide a 360-degree view around a vehicle.
However, even as everything becomes simpler and more affordable, lidar is hamstrung by its performance limitations in poor weather conditions. In rain, snow and fog, it struggles with the same challenges as vision systems – human or otherwise.
Despite these challenges, most self-driving car companies have reportedly shown continued interest in lidar as it makes its way into the automotive world.
Complementary technologies: vision, ultrasound and infrared
Automakers aren’t relying on one or two sensing technologies for their assisted and autonomous vehicles. Ultrasound is a cheap and proven technology for parking assistants. Infrared cameras work in darkness and fog and are not affected by solar glare, so they have some advantages over lidar. And as the goal is for cars to be able to “see” at least as well as or better than humans, video cameras complement other technologies in ADAS and autonomous vehicle designs.
Sensors found in connected cars today
Most car makers seem to believe that it’s impractical to use video exclusively. Processing multiple moving images at high speed requires enormous compute power, which is difficult to provide within a vehicle, and it takes too long to send all this data to the cloud and back.
Most vehicle makers are therefore using data fusion algorithms to process a combination of video, radar, lidar and other data to create systems with optimized capabilities.
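The details of production fusion stacks vary widely (Kalman filters, particle filters, learned networks), but the underlying idea of weighting each sensor by how much it can be trusted can be sketched simply. The readings and uncertainties below are hypothetical.

```python
# A deliberately simplified view of sensor fusion: combine independent range
# estimates, weighting each sensor by how much it can be trusted (inverse
# variance). Production ADAS stacks use Kalman filters or learned fusion
# networks; the readings and uncertainties here are hypothetical.

def fuse(estimates: dict) -> float:
    """Inverse-variance-weighted fusion of {sensor: (value, std_dev)} estimates."""
    weights = {name: 1.0 / sigma ** 2 for name, (_, sigma) in estimates.items()}
    total = sum(weights.values())
    return sum(weights[name] * value for name, (value, _) in estimates.items()) / total

readings = {
    "radar":  (52.4, 0.5),   # metres; holds up well in rain
    "lidar":  (52.1, 0.1),   # finest resolution in clear conditions
    "camera": (54.0, 2.0),   # range inferred from vision, least certain
}
print(fuse(readings))  # ~52.1 m, dominated by the most trusted sensor
```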
However, Tesla has been vocal about the limitations of lidar, arguing that with the right neural (AI) processing capabilities, only vision cameras are needed. The company’s cars previously used cameras alongside radar, but in May 2021, Tesla started shipping Model 3 and Model Y cars with driver-assist systems that rely on just eight cameras – no radar or lidar.
Technology forecasting is a dangerous business, but many component makers are betting heavily on radar and lidar systems enjoying a golden period of growth in the next few years.
For the most part, the analysts go along with that idea. But vision systems and the processors that go into them present an exciting opportunity, too.