technology · noiva
intelligence in the infrastructure.
we instrument the environment — not the vehicle. sensing, planning and v2x control, running on the zone itself.
noiva · in motion
how it works
four steps. one loop.
01
perceive
infrastructure cameras capture the scene
02
process
centralized ai computes the plan
03
communicate
v2x transmits commands
04
execute
vehicle acts accordingly
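the four-step loop can be sketched end to end. the names, types and values below are hypothetical stand-ins, not noiva's actual interfaces — each step is stubbed to show only the shape of the data flowing through the zone:

```python
from dataclasses import dataclass

# hypothetical types — the real zone controller's interfaces are not public
@dataclass
class Detection:
    vehicle_id: str
    x: float  # position in zone coordinates, metres
    y: float

@dataclass
class Command:
    vehicle_id: str
    steer: float     # steering angle, radians
    throttle: float  # normalized 0..1

def perceive(frame) -> list[Detection]:
    # 01 perceive — infrastructure cameras capture the scene;
    # stubbed: a real deployment would run detection and tracking here
    return [Detection("car-1", x=12.0, y=3.5)]

def process(detections: list[Detection]) -> list[Command]:
    # 02 process — centralized ai computes one command per tracked vehicle
    return [Command(d.vehicle_id, steer=0.0, throttle=0.3) for d in detections]

def communicate(commands: list[Command]) -> list[Command]:
    # 03 communicate — v2x transmits commands (stubbed as a pass-through)
    return commands

def execute(commands: list[Command]) -> int:
    # 04 execute — each vehicle acts on its command; return how many were applied
    return len(commands)

def run_loop(frame) -> int:
    # four steps, one loop
    return execute(communicate(process(perceive(frame))))
```

the point of the sketch is the composition in `run_loop`: every step consumes the previous step's output, and the loop repeats once per camera frame.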
the stack
one sensor. every task.
classical autonomy relies on a stack of cameras, lidar, radar and gps. noiva collapses that stack into a single visual feed — every downstream layer of perception, planning and control runs on the same camera.
noiva
camera-only autonomy
Sensors · 1 task
Camera

Perception · 6 tasks
Lane Detection · Traffic Light Detection · Traffic Sign Detection · Object Detection & Tracking · Free Space Detection · Localization

Planning · 5 tasks
Route Planning · Prediction · Behavior Planning · Trajectory Planning · HD Map

Control · 3 tasks
PID Controller · Model Predictive Control · Others

every perception, planning and control task above runs on a single camera feed — no lidar, no radar, no gps required.
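the single-feed idea can be sketched as a shared backbone with per-task heads. everything below is a hypothetical illustration, not noiva's actual code — the frame is a toy 2-d array and the "backbone" is a stand-in for a real vision encoder:

```python
# one camera frame feeds a shared backbone; every task head
# reads the same features instead of owning its own sensor
TASKS = [
    "lane_detection", "traffic_light_detection", "traffic_sign_detection",
    "object_detection_and_tracking", "free_space_detection", "localization",
]

def backbone(frame: list[list[float]]) -> float:
    # stand-in for a vision encoder: one forward pass per frame
    return sum(sum(row) for row in frame) / (len(frame) * len(frame[0]))

def run_stack(frame: list[list[float]]) -> dict[str, float]:
    feature = backbone(frame)                 # computed once per frame
    return {task: feature for task in TASKS}  # all six heads share it
```

the design point is that `backbone` runs once: adding a task adds a head, not a sensor, which is what collapses the classical camera/lidar/radar/gps stack into a single visual feed.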
deep dive
key numbers
in practice.
< 100ms
v2x control latency
0
lidar units per vehicle
~70%
cost reduction
weeks
to commission a zone
