Redundancy is essential when engineering safety-critical systems. The goal is to equip a system with multiple components or subsystems that perform the same function, so that if one fails, the overall system can still complete its task safely. Autonomous vehicles, more than most engineering feats, demand exceptional precision, accuracy, and sophistication, and their tolerance for failure is virtually zero. That makes redundancy critical to understand and implement. But when it comes to true redundancy of sensing systems, not all AV platforms are the same.
Fusion vs Redundancy
The common practice across the industry is to equip autonomous vehicles with a multitude of sensors, including cameras, radar, and LiDAR. In many AV platforms, these sensors are combined into a single world model (a digital construct of the vehicle’s environment) in a process known as “sensor fusion.” While sensor fusion for autonomous driving may appear to offer redundancy, what it really offers is complementary sensing, because all of the sensors together are relied upon to create that one world model. Multiple sensors, one world model, one AV system.
Mobileye’s differentiated approach of True Redundancy™ is to separate the sensors into two channels – one for cameras and another for radar and LiDAR – and task both with sensing all elements of the driving environment. In this way, we achieve full system redundancy by having each of those channels create their own independent and diverse world models, each filtered independently through our Responsibility-Sensitive Safety framework. Multiple sensors, multiple world models, multiple AV subsystems.
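The two-channel idea described above can be sketched in a few lines of code. This is an illustrative toy only: the class and function names are hypothetical and do not reflect Mobileye's actual software. The point it shows is structural: each channel builds a complete world model on its own, and each model is checked independently, so either channel alone can veto a maneuver.

```python
from dataclasses import dataclass
from typing import List

# Illustrative sketch of two independent sensing channels; names and logic
# are hypothetical, not Mobileye's actual APIs.

@dataclass
class WorldModel:
    source: str                      # which sensing channel produced the model
    obstacle_distances_m: List[float]  # distances to detected objects, meters

def camera_channel(camera_detections: List[float]) -> WorldModel:
    """Camera-only subsystem: a full world model with no radar or LiDAR."""
    return WorldModel("camera", sorted(camera_detections))

def radar_lidar_channel(scan_detections: List[float]) -> WorldModel:
    """Radar/LiDAR-only subsystem: a second, independent world model."""
    return WorldModel("radar_lidar", sorted(scan_detections))

def safe_to_proceed(model: WorldModel, min_gap_m: float = 10.0) -> bool:
    """Stand-in for a safety check (RSS in the text above), applied to
    each world model independently."""
    return all(d >= min_gap_m for d in model.obstacle_distances_m)

# Each subsystem senses the whole scene on its own...
cam_model = camera_channel([42.0, 15.5])
rl_model = radar_lidar_channel([41.2, 15.1])

# ...and the vehicle proceeds only if both independent models agree.
proceed = safe_to_proceed(cam_model) and safe_to_proceed(rl_model)
```

Contrast this with fusion, where the raw detections would be merged into one model before any safety check, leaving a single point of failure.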
To ensure that each subsystem is capable of operating independently of the other, our R&D team is running two separate fleets of developmental AVs: one using only cameras (with no radar or LiDAR), and another using only radar and LiDAR (with no cameras). When combined into a complete, production-ready AV, the camera-only subsystem becomes the backbone, while the radar/LiDAR subsystem serves as a diversified and redundant safety back-up.
You can read more about True Redundancy here.
By splitting the self-driving platform into two subsystems that can each operate on their own, we are able to build a more reliable (and therefore safer) AV. As our CEO Prof. Amnon Shashua framed it, True Redundancy “is like having both iOS and Android smartphones in my pocket and asking myself: What is the probability that they both crash simultaneously?” By the same token, the likelihood of a complete system failure is drastically reduced when you have two redundant and diverse subsystems operating independently.
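The arithmetic behind that quote is simple: if two subsystems fail independently, the probability that both fail at the same moment is the product of their individual failure probabilities. The rates below are illustrative assumptions, not measured figures.

```python
# Toy calculation of simultaneous failure for two independent subsystems.
# Both rates are assumed for illustration, not real reliability data.

p_camera_fail = 1e-4       # assumed failure probability, camera channel
p_radar_lidar_fail = 1e-4  # assumed failure probability, radar/LiDAR channel

# A single fused world model fails whenever its one pipeline fails: ~1e-4.
# With two independent models, BOTH must fail for sensing to be lost:
p_both_fail = p_camera_fail * p_radar_lidar_fail  # roughly 1e-8
```

Under these assumed rates, independence makes a total sensing failure about four orders of magnitude rarer than either channel failing alone, which is the "iOS and Android in one pocket" intuition in numbers.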
True Redundancy also yields a faster, more agile development process. Perfecting the technology required for a vehicle to operate autonomously demands extensive testing and validation. By separating our AV platform into two independent subsystems, development of (and subsequent updates to) each subsystem can be validated on a much smaller data set – tens of thousands of hours of driving data instead of millions. That comparative agility means we can safely get our AV platform out on the road faster than we have found possible with a sensor-fusion approach.
Tomorrow’s Tech Today
An added benefit of developing our self-driving platform on these two independent pillars is the ability to employ the camera-based subsystem for ADAS as well. With Mobileye SuperVision™, we have taken the surround-view camera array from our AV R&D program and applied it to our most advanced driver-assistance system to date, offering hands-free ADAS capabilities. So not only will True Redundancy make AVs safer tomorrow; it can make human-driven passenger vehicles safer today.
Click here to learn more about Mobileye SuperVision and how it benefits from our AV R&D.