Benchmark Comparison

EyeQ™6H vs. NVIDIA Jetson AGX Orin

EyeQ™6H delivers high-performance AI at the edge, running DNNs, transformer models, and modern ML workloads for real-time classification, object detection, and segmentation. Our most advanced SoC yet is designed to transform raw sensor data into precise, actionable insights.

Image Classification

                                EyeQ™6H    NVIDIA Jetson AGX Orin 64GB*
TOPS                            34         275
Mode                            Power      Power (MaxQ)    Unconstrained (MAXN)
ResNet-50 single-stream
latency (msec)                  0.5        1.64            0.64

Power mode refers to testing a chip in its original state without any modifications.
Unconstrained mode refers to testing without any artificial power or thermal restrictions.

* Reported by Nvidia and MLPerf.
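Single-stream latency sets an upper bound on sequential frame rate. As a back-of-the-envelope conversion of the figures above (ours, not from the source):

```python
# Convert single-stream latency (ms) to the maximum sequential throughput
# (frames per second) a pipeline could sustain at that latency.
def max_fps(latency_ms: float) -> float:
    return 1000.0 / latency_ms

# ResNet-50 single-stream latencies from the table above.
latencies = {
    "EyeQ6H (Power)": 0.5,
    "Orin 64GB (Power / MaxQ)": 1.64,
    "Orin 64GB (Unconstrained / MAXN)": 0.64,
}

for chip, ms in latencies.items():
    print(f"{chip}: {max_fps(ms):.0f} fps")
```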

Vision Transformer

Model            Resolution  Params  MACs   EyeQ™6H (Int8, Power)*  NVIDIA Jetson AGX Orin 32GB (FP16, Unconstrained)**
EfficientViT-B1  224x224     9.1M    0.52G  0.564 msec              1.48 msec
EfficientViT-B2  224x224     24M     1.6G   0.932 msec              2.63 msec

* EyeQ™6H is designed for Int8 computation, optimizing performance, efficiency, and accuracy by leveraging advanced quantization-aware training (QAT) and post-training quantization (PTQ) techniques.

** Reported results for Orin are in FP16.
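The Int8 footnote refers to standard quantization practice: FP32 tensors are mapped to 8-bit integers through a scale factor. As an illustrative sketch of symmetric per-tensor PTQ (our simplified example, not Mobileye's actual toolchain):

```python
import numpy as np

def quantize_int8(x: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric per-tensor PTQ: map FP32 values onto the int8 range [-127, 127]."""
    # Guard against an all-zero tensor, which would give a zero scale.
    scale = max(np.abs(x).max() / 127.0, 1e-12)
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an FP32 approximation of the original tensor."""
    return q.astype(np.float32) * scale

x = np.random.randn(1000).astype(np.float32)
q, scale = quantize_int8(x)
# Rounding error is bounded by half the quantization step (scale / 2).
error = np.abs(dequantize(q, scale) - x).max()
print(f"max abs quantization error: {error:.6f}")
```

QAT differs in that the network is trained with this rounding simulated in the forward pass, so the weights adapt to the reduced precision.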


Architecture of Efficiency

With a unique and highly efficient architecture of diversified accelerators, EyeQ™ achieves state-of-the-art computer vision performance within a low-power envelope.

Heterogeneous Computing

Using the most suitable core for each task: from general-purpose CPU cores to high-compute-density accelerators, including engines dedicated to deep learning neural networks.

CPU — Central Processing Unit

MPC — Multi-threaded Processor Cluster
More versatile than a GPU, with higher efficiency than any CPU.

VMP — Vector Microcode Processor
A wide vector (VLIW and SIMD) machine with exceptional performance for the short integral types common in computer vision and deep learning algorithms.

PMA — Programmable Macro Array
A CGRA dataflow machine. Its unique architecture delivers performance on dense computer vision and deep learning algorithms that is unachievable with classic DSP architectures.

XNN — Deep Learning Accelerator
A dedicated high-performance AI engine; the main source of horsepower for convolutional neural networks.
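The principle of routing each task to its best-suited engine can be pictured with a toy dispatcher. The workload categories and routing table below are purely illustrative (they are not EyeQ™'s actual scheduler):

```python
# Toy heterogeneous dispatch: route each workload class to the engine
# best suited for it, following the engine descriptions above.
# The routing table is hypothetical, for illustration only.
ROUTING = {
    "control_logic":   "CPU",   # general-purpose, branchy code
    "parallel_task":   "MPC",   # multi-threaded throughput work
    "vector_int_ops":  "VMP",   # wide SIMD over short integer types
    "dense_cv_kernel": "PMA",   # CGRA dataflow for dense CV/DL kernels
    "conv_net":        "XNN",   # dedicated accelerator for CNNs
}

def dispatch(workload: str) -> str:
    """Return the engine a workload class would run on; default to CPU."""
    return ROUTING.get(workload, "CPU")

print(dispatch("conv_net"))        # XNN
print(dispatch("vector_int_ops"))  # VMP
```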

[Diagram: the engines above (CPU, MPC, VMP, PMA, XNN) arranged along an axis from general-purpose compute to DL performance / compute density]

Explore Solutions Powered by EyeQ™

Hands-off | Eyes-on: Mobileye SuperVision™
Hands-off | Eyes-off: Mobileye Chauffeur™