
As one of Mobileye’s partners, BMW, explains: Level 3 autonomy is “eyes off,” Level 4 is “mind off” and Level 5 is “driver off.” Collectively, we can call Level 3 and higher “autonomous driving.” The move to Level 3 from the market’s current Level 1 and Level 2 systems is a huge step: allowing the driver to disengage (eyes off) requires additional sensors and software, high-definition maps, complex driving policy algorithms, and redundancy in many vehicle systems. The jump from Level 2 to Level 3 is technically challenging and will take substantial work, but the tools are there and no scientific breakthrough is required.

Our automaker partners currently use Mobileye technology to support Level 1 and Level 2 ADAS systems. Over the last two years, development work on higher-level vehicles has accelerated. Mobileye is working with our partners BMW and Intel Corp. to put a fully autonomous vehicle into serial production beginning in 2021. Mobileye also recently partnered with Delphi Automotive to create an autonomous platform for use by many automobile manufacturers by 2019. 

Mobileye’s highly robust object and lane detection algorithms are the building blocks for autonomous vehicles, but there is substantially more to be done. We break the problem down into the three technology pillars necessary to enable autonomous driving: Sensing, Mapping and Driving Policy. Mobileye has the technical capability to meet these challenges.

Sensing

Perception of a comprehensive Environmental Model breaks down into four main challenges (see the sketch after this list):

  • Freespace: determining the drivable area and its delimiters
  • Driving Paths: the geometry of the routes within the drivable area
  • Moving Objects: all road users within the drivable area or path
  • Scene Semantics: the vast vocabulary of visual cues (explicit and implicit) such as traffic lights and their color, traffic signs, turn indicators, pedestrian gaze direction, on-road markings, etc.
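
Taken together, these four outputs can be pictured as a single per-frame environmental model. The following is a minimal, illustrative Python sketch of such a grouping; the class and field names are assumptions made for the example, not Mobileye’s actual interfaces.

    from dataclasses import dataclass
    from enum import Enum
    from typing import List, Tuple

    Point2D = Tuple[float, float]  # (x, y) in the vehicle's ground-plane frame, meters

    class ObjectKind(Enum):
        VEHICLE = "vehicle"
        PEDESTRIAN = "pedestrian"
        CYCLIST = "cyclist"

    @dataclass
    class MovingObject:
        kind: ObjectKind
        position: Point2D               # current location of the road user
        velocity: Tuple[float, float]   # estimated velocity vector, m/s

    @dataclass
    class SemanticCue:
        label: str        # e.g. "traffic_light_red", "speed_limit_50", "turn_indicator_left"
        position: Point2D

    @dataclass
    class EnvironmentalModel:
        """One per-frame snapshot combining the four sensing outputs."""
        freespace: List[Point2D]            # polygon delimiting the drivable area
        driving_paths: List[List[Point2D]]  # centerline geometry of each candidate path
        moving_objects: List[MovingObject]  # road users within the drivable area or path
        scene_semantics: List[SemanticCue]  # explicit and implicit visual cues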

Mapping

The need for a map to enable fully autonomous driving stems from the fact that functional safety standards require back-up sensors – “redundancy” – for all elements of the chain – from sensing to actuation. Within sensing, this applies to all four elements mentioned above.

While other sensors such as radar and LiDAR may provide redundancy for object detection, the camera is the only real-time sensor for driving-path geometry and other static scene semantics (such as traffic signs, on-road markings, etc.). Therefore, for path sensing and foresight purposes, only a highly accurate map can serve as the source of redundancy. For the map to be a reliable source of redundancy, it must be updated at an ultra-high refresh rate so that its Time To Reflect Reality (TTRR) stays low. To address this challenge, Mobileye is paving the way for harnessing the power of the crowd: exploiting the proliferation of camera-based ADAS systems to build and maintain, in near-real-time, an accurate map of the environment.

Mobileye’s Road Experience Management (REM™) is an end-to-end mapping and localization engine for full autonomy. The solution comprises three layers: harvesting agents (any camera-equipped vehicle), a map-aggregating server (cloud), and map-consuming agents (autonomous vehicles). The harvesting agents collect and transmit data about the driving path’s geometry and the stationary landmarks around it. Mobileye’s real-time geometric and semantic analysis, implemented in the harvesting agent, compresses the map-relevant information, requiring very little communication bandwidth (less than 10 KB/km on average). The relevant data is packed into small capsules called Road Segment Data (RSD) and sent to the cloud. The cloud server aggregates and reconciles the continuous stream of RSDs, a process resulting in a highly accurate, low-TTRR map called the “Roadbook”.
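
To make the data flow concrete, here is a minimal Python sketch of what a harvesting agent’s RSD packaging step might look like. The capsule layout, field names, and compression choice are illustrative assumptions; only the sub-10 KB/km average budget comes from the description above.

    import json
    import zlib
    from dataclasses import dataclass, asdict
    from typing import List, Tuple

    BANDWIDTH_BUDGET_BYTES_PER_KM = 10 * 1024  # ~10 KB/km average, per the text above

    @dataclass
    class Landmark:
        kind: str                      # e.g. "sign", "pole", "lane_marking_end"
        position: Tuple[float, float]  # position relative to the road segment, meters

    @dataclass
    class RoadSegmentData:
        """One RSD capsule: a compact, map-relevant summary of a driven segment."""
        segment_id: str
        path_geometry: List[Tuple[float, float]]  # sparse polyline of the driving path
        landmarks: List[Landmark]                 # stationary landmarks along the segment

    def pack_rsd(rsd: RoadSegmentData) -> bytes:
        """Serialize and compress an RSD capsule before uploading it to the cloud."""
        return zlib.compress(json.dumps(asdict(rsd)).encode("utf-8"))

    def within_budget(payload: bytes, segment_length_km: float) -> bool:
        """Check that the capsule stays under the average per-kilometer budget."""
        return len(payload) <= BANDWIDTH_BUDGET_BYTES_PER_KM * segment_length_km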

The last link in the mapping chain is localization: in order for any map to be used by an autonomous vehicle, the vehicle must be able to localize itself within it. Mobileye software running within the map-consuming agent (the autonomous vehicle) automatically localizes the vehicle within the Roadbook by real-time detection of all landmarks stored in it.
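
One way to picture this localization step: match the landmarks detected in the live camera frame against the landmarks stored in the Roadbook, then solve for the vehicle’s pose. The Python sketch below assumes a pure 2-D translation and perfect landmark association by identifier, which is far simpler than a production localizer.

    from typing import Dict, Tuple

    Point2D = Tuple[float, float]

    def localize(detected: Dict[str, Point2D],
                 roadbook: Dict[str, Point2D]) -> Point2D:
        """Estimate the vehicle's 2-D position in the map frame by averaging the
        displacement between each detected landmark (in the vehicle frame) and
        its Roadbook counterpart (in the map frame), matched by landmark id."""
        matches = [(roadbook[lid], pos) for lid, pos in detected.items() if lid in roadbook]
        if not matches:
            raise ValueError("no landmarks in common with the Roadbook; cannot localize")
        dx = sum(m[0] - d[0] for m, d in matches) / len(matches)
        dy = sum(m[1] - d[1] for m, d in matches) / len(matches)
        return (dx, dy)  # vehicle position expressed in the map frame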

Further, REM™ provides the technical and commercial conduit for cross-industry information sharing. REM™ is designed to allow different OEMs to take part in the construction of this AD-critical asset (the Roadbook) while receiving adequate and proportionate compensation for their RSD contributions.

Driving Policy

Where sensing detects the present, driving policy plans for the future. Human drivers plan ahead by negotiating with other road users mainly through motion cues: the “desires” of giving way and taking way are communicated to other vehicles and pedestrians through steering, braking and acceleration. These “negotiations” take place all the time and are fairly complicated, which is one of the main reasons human drivers take many driving lessons and need an extended period of training before they master the art of driving. Moreover, the “norms” of negotiation vary from region to region: the code of driving in Massachusetts, for example, is quite different from that of California, even though the formal rules are identical.

The challenge in making a robotic system control a car is that, for the foreseeable future, the “other” road users are likely to be human-driven; therefore, in order not to obstruct traffic, the robotic car should display human negotiation skills while at the same time guaranteeing functional safety. In other words, we would like the robotic car to drive safely, yet conform to the driving norms of the region.

Mobileye believes that the driving environment is too complex for hand-crafted, rule-based decision making. Instead, we use machine learning to “learn” the decision-making process through exposure to data. Mobileye’s approach to this challenge is to employ reinforcement learning algorithms trained with deep networks. This requires training the vehicle system through increasingly complex simulations, rewarding good behavior and punishing bad behavior. Our proprietary reinforcement learning algorithms add human-like driving skills to the vehicle system, in addition to the super-human sight and reaction times that our sensing and computing platforms provide. They also allow the system to negotiate with other human-driven vehicles in complex situations. Knowing how to do this well is one of the most critical enablers for safe autonomous driving.
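
As a toy illustration of the reward-and-punishment idea, here is a self-contained Python sketch of reinforcement learning on a simplified merge-into-traffic scenario. It uses tabular Q-learning rather than the deep networks described above, and the states, actions, and reward values are invented for the example; Mobileye’s actual algorithms (see the reference below) are far more involved.

    import random

    # Toy "merge into traffic" scenario: the agent observes a discretized gap size
    # (0 = no gap ... 4 = large gap) and chooses to MERGE or WAIT.  The rewards
    # encode "reward good behavior, punish bad behavior": merging into a large
    # enough gap pays off, merging into a small gap is penalized heavily (a
    # collision), and waiting is mildly penalized (obstructing traffic).
    STATES = range(5)            # discretized gap sizes
    ACTIONS = ("MERGE", "WAIT")
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

    def step(gap, action):
        """Return (reward, done) for taking `action` when the observed gap is `gap`."""
        if action == "MERGE":
            return (1.0, True) if gap >= 3 else (-10.0, True)
        return (-0.1, False)     # waiting costs a little each step

    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

    for episode in range(5000):
        gap = random.choice(list(STATES))
        done = False
        while not done:
            # epsilon-greedy action selection
            if random.random() < EPSILON:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(gap, a)])
            reward, done = step(gap, action)
            next_gap = random.choice(list(STATES))  # traffic changes while we wait
            best_next = 0.0 if done else max(q[(next_gap, a)] for a in ACTIONS)
            q[(gap, action)] += ALPHA * (reward + GAMMA * best_next - q[(gap, action)])
            gap = next_gap

    # After training, the learned policy merges only when the gap is large enough.
    policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
    print(policy)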

For more details on the challenges of using reinforcement learning for driving policy and Mobileye’s approach to the problem, please see: S. Shalev-Shwartz, S. Shammah, and A. Shashua, “Safe, Multi-Agent, Reinforcement Learning for Autonomous Driving,” NIPS Workshop on Learning, Inference and Control of Multi-Agent Systems, December 2016.