
I’ll be honest. When I first started talking to AV teams about data, I assumed the problem was volume. More miles. More clips. More events.
It’s true. The teams doing the most rigorous safety work do want more data, but it's not just about quantity. They want verified data. Data they can point to and say, "This is what happens on real roads, in real conditions, before a real crash." Not what we simulated. Not what we scripted. What actually happened.
That distinction (verified vs. purely voluminous) is the whole reason the partnership we announced with Vay earlier this month feels significant to me. Not just for remote driving. For the whole industry.
What Vay is actually doing (and why it’s smarter than it sounds)
Vay built the world’s first remotely driven car rental service. Professional Remote Drivers work from a controlled environment called the Remote Driving Center, with no fatigue, no distraction, and no impairment, operating vehicles that move freely through real urban streets.
Instead of trying to engineer human judgment out of the loop, Vay leaned into it and structured everything around making that human judgment as effective as possible.
What we’re now doing is adding the intelligence layer that tells those Remote Drivers what’s about to happen before it appears on their screen.
10 billion miles. That’s not a marketing number.
Nexar has a network of 350,000 cameras capturing 100 million fresh miles of road data every month. Every weather condition, road type, and driver behavior pattern you can think of. The cumulative archive is over 10 billion miles.
Out of that, we built BADAS, our collision anticipation model. Trained entirely on real crashes, near-misses, and evasive maneuvers from our live network. It achieves 0.948 average precision (AP), predicts crashes 4.9 seconds before impact, and ranks #1 on all four major benchmarks.
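To make those numbers concrete, here is a minimal sketch, not Nexar’s actual evaluation code, of how metrics like these are typically computed for a collision-anticipation model: average precision over per-clip risk scores, and mean lead time measured from the first frame the score crosses an alert threshold on crash clips. The frame rate, threshold, and function names are all illustrative assumptions.

```python
# Hypothetical evaluation sketch for a collision-anticipation model.
# Assumes per-frame risk scores in [0, 1] and binary crash labels;
# thresholds and names are illustrative, not BADAS internals.
import numpy as np
from sklearn.metrics import average_precision_score

FPS = 30                # assumed frame rate of the clips
ALERT_THRESHOLD = 0.8   # illustrative score at which an alert fires

def clip_level_ap(labels, clip_scores):
    """Average precision over clips, scoring each clip by its peak risk."""
    return average_precision_score(labels, [s.max() for s in clip_scores])

def mean_lead_time(positive_scores, crash_frames):
    """Mean seconds between the first above-threshold frame and impact."""
    lead_times = []
    for scores, crash_frame in zip(positive_scores, crash_frames):
        alert_frames = np.flatnonzero(scores[:crash_frame] >= ALERT_THRESHOLD)
        if alert_frames.size:
            lead_times.append((crash_frame - alert_frames[0]) / FPS)
    return float(np.mean(lead_times)) if lead_times else 0.0
```

On this framing, "predicts crashes 4.9 seconds before impact" reads as a mean lead time: how far ahead of the crash the model’s score first crosses the alert threshold.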
The thing that makes BADAS different isn’t the precision score. It’s what it was trained on. Academic models are often built from a few hundred curated clips. Commercial systems frequently rely on synthetic data. BADAS was built on what actually happens, including every edge case no engineer thought to script and every behavioral pattern that shows up in the seconds before a crash.
Simulation builds models of how the world should behave. Real-world data builds models of how it does.
That gap matters enormously when the stakes are high.
What this actually does for Vay’s Remote Drivers
BADAS doesn’t just detect what’s visible on screen. It identifies the behavioral patterns that precede a collision: sudden decelerations, pedestrian hesitation, vehicles drifting into blind spots, the micro-behaviors that correlate with imminent impact.
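As an illustration of what one such pattern looks like in data terms, here is a hedged sketch of extracting a sudden-deceleration cue from a vehicle’s speed trace. The function name and cutoff are hypothetical, not how BADAS actually computes its features.

```python
# Illustrative feature: flag sudden decelerations in a speed trace.
# Cutoff and names are hypothetical assumptions, not BADAS internals.
import numpy as np

FPS = 30                 # assumed samples per second
HARSH_BRAKE_MPS2 = -4.0  # illustrative deceleration cutoff (m/s^2)

def sudden_deceleration_frames(speed_mps: np.ndarray) -> np.ndarray:
    """Return frame indices where deceleration exceeds the cutoff."""
    accel = np.diff(speed_mps) * FPS  # finite-difference acceleration
    return np.flatnonzero(accel <= HARSH_BRAKE_MPS2)
```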
The practical result: Remote Drivers get alerts that are meaningful, not noisy. Fewer false positives. More time to act. An earlier, more precise signal exactly where intervention has the highest impact.
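One common way to keep alerts like these from being noisy, again a sketch under stated assumptions rather than Vay’s or Nexar’s actual logic, is hysteresis: the risk score must stay above a high threshold for several consecutive frames before an alert is raised, and must drop below a lower threshold before it clears.

```python
# Sketch of hysteresis-based alert gating over a stream of risk scores.
# Thresholds and dwell times are illustrative assumptions.
RAISE_AT, CLEAR_AT = 0.8, 0.5  # hypothetical raise/clear thresholds
MIN_DWELL_FRAMES = 5           # frames the score must persist to raise

def gate_alerts(scores):
    """Yield (frame_index, alert_active) pairs for a score stream."""
    active, dwell = False, 0
    for i, s in enumerate(scores):
        if not active:
            dwell = dwell + 1 if s >= RAISE_AT else 0
            if dwell >= MIN_DWELL_FRAMES:
                active = True  # sustained high risk: raise the alert
        elif s < CLEAR_AT:
            active, dwell = False, 0  # risk has subsided: clear it
        yield i, active
```

The dwell requirement trades a few frames of latency for far fewer spurious alerts, which is exactly the meaningful-not-noisy tradeoff described above.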
This is the first known deployment of a large-scale, real-world incident prediction model designed specifically for Remote Drivers operating on public roads.
Why I think this matters beyond remote driving
There’s a lot of debate in the AV industry about what the right architecture looks like: full autonomy vs. assisted driving vs. remote operation. The honest answer is probably all of the above, at different points in time, in different use cases.
What this partnership demonstrates is something I think gets undervalued: human-AI collaboration, done well, can outperform either in isolation. And the way you make it work isn’t by building the most sophisticated model in the lab. It’s by training on what actually happens in the world and letting that compound over time.
That’s not a workaround for something better coming later. That’s the most rigorous, defensible path to the next generation of mobility.


