As the driver of a car, you probably don’t give it a second thought as you drive under a bridge. The sensors on your car, however, are working hard to compensate for the light differences and convey accurate information to the vehicle’s processors.
As advanced driver assistance systems evolve to deliver greater safety, designers must be increasingly deliberate and realistic about the key characteristics of the image sensors they select.
We talked recently with Avnet vision specialist Griffin Peterson to examine the most important considerations when selecting image sensors for automotive applications.
Like any technology, image sensors have many key characteristics and features. It’s important to understand your priorities so you can make the right choices for your application.
Here are some considerations that should weigh into your selection criteria.
- Resolution: You don’t necessarily want the highest possible resolution, because the extra data taxes processing systems and slows down reaction time. Consider whether your application needs to identify lane lines or a pedestrian 100 meters ahead. We find that in transportation applications, 1080p resolution is a good middle ground: it provides a good understanding of the world around you and, even with a 10-camera system, your processor can respond well to the data it receives.
- Pixel size: This is all about light sensitivity. Often overlooked, pixel size dictates how well an image sensor can capture light in the dark. We have a couple of sensors whose pixels are large enough to capture a usable image even in dim moonlight.
- Dynamic range: High dynamic range (HDR) enables a sensor to see both bright and dark areas simultaneously. Think of the example of driving under a bridge: the sensor needs to see the scene before the bridge, under it and beyond it, even though it may be dark under the bridge and bright on either side. High dynamic range sensing lets you capture all of that data accurately regardless (a rough calculation of dynamic range appears after this list).
- Frame rate: This refers to how many frames a sensor can output per second. Most transportation uses call for 30-40 frames per second, although 60 frames per second is sometimes requested. Frame rate and resolution are closely related, and together they determine how much data must be processed (see the data-rate sketch after this list).
- Color filter array: This determines each pixel’s sensitivity to particular wavelengths of light, and therefore how the sensor reproduces color. New color filter arrays are being developed to improve sensor responsiveness.
- LED flicker mitigation: This is how a sensor handles pulsed light sources in the environment. Traffic signals and other LED sources switch on and off faster than the eye can see, and without mitigation they can appear dark or flickering in captured frames. LED flicker mitigation adjusts the sensor’s exposure timing so that sources such as a stoplight’s LEDs are captured consistently.
- Global shutter: This is a type of shutter that captures all pixels in an image sensor simultaneously. It’s especially relevant for eye tracking and driver monitoring applications. This is a different image capture process from a rolling shutter, which exposes rows of pixels sequentially.
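To put dynamic range in numbers: it is commonly expressed in decibels as 20·log10 of the ratio between the brightest signal a pixel can hold (its full-well capacity) and its noise floor. The sketch below uses hypothetical full-well and noise figures purely for illustration; real automotive HDR sensors combine multiple exposures or specialized pixel designs to reach their advertised ranges.

```python
import math

def dynamic_range_db(full_well_electrons: float, noise_floor_electrons: float) -> float:
    """Dynamic range in dB from full-well capacity and noise floor (both in electrons)."""
    return 20 * math.log10(full_well_electrons / noise_floor_electrons)

# Hypothetical example: 10,000 e- full well, 2 e- noise floor
print(f"{dynamic_range_db(10_000, 2):.1f} dB")  # ~74 dB for a single exposure
```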
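And to see why resolution and frame rate dominate the processing budget, here is a minimal back-of-the-envelope data-rate sketch. The 12-bit raw depth, 30 fps rate and 10-camera count are illustrative assumptions, not figures for any specific sensor.

```python
def raw_data_rate_mbps(width: int, height: int, bits_per_pixel: int, fps: int) -> float:
    """Uncompressed output of one camera in megabits per second."""
    return width * height * bits_per_pixel * fps / 1e6

# 1080p at 30 fps with an assumed 12-bit raw output
per_camera = raw_data_rate_mbps(width=1920, height=1080, bits_per_pixel=12, fps=30)
cameras = 10

print(f"Per camera: {per_camera:,.0f} Mbit/s")                              # ~746 Mbit/s
print(f"{cameras} cameras: {per_camera * cameras / 1000:.2f} Gbit/s before compression")
```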
Authored By: Jason Struble, Avnet transportation supplier manager