
Yann LeCun Joins Nexar’s Board of Directors

Bridging World Models and Real-World Deployment in Applied AI

Luc Vincent, Chief Research & Development Officer

January 23, 2026


I’ve known and worked with Yann LeCun for more than two decades, across different chapters of our industry. We’ve debated the same question for years: what does it take for AI to operate reliably in the real world?

Yann’s answer has been consistent: intelligence requires world models learned from reality, not just pattern matching on curated data. At Nexar, we’ve built our platform around that belief. BADAS, our real-world incident prediction system, is grounded in V-JEPA principles and trained on billions of miles of real driving experience.

That alignment is a big part of why Yann chose to join Nexar’s Board of Directors. This post explains the technical “why.”

For much of the past decade, progress in artificial intelligence was driven by advances in model architecture, training scale, and compute. These advances produced impressive gains in perception and prediction, particularly in controlled or well-curated environments. But as AI systems increasingly operate in the physical world, a different limitation has become dominant. The constraint is no longer model capacity; it is exposure to reality.

From pattern recognition to world models

Most deployed perception systems today still rely heavily on pattern matching. They detect objects, classify scenes, and react to what is immediately visible. While effective in narrow domains, these approaches struggle when systems encounter rare events, ambiguous signals, or situations that deviate from the training distribution.

This limitation becomes critical in real-world deployment. In physical environments, the scenarios that matter most for safety and reliability are precisely those that occur infrequently and cannot be exhaustively enumerated in advance. These are not edge cases in practice; they define system performance.

This is why the concept of world models is central to the next phase of applied AI. Rather than predicting pixels or labels, world models aim to capture the underlying structure of the environment: how objects persist, interact, and evolve over time according to physical constraints and causal relationships.

Learning in latent space, not pixel space

At Nexar, this perspective is embodied in BADAS, our real-world incident prediction model, and it is one of the reasons Yann joined the Board. BADAS is built using V-JEPA (Video Joint Embedding Predictive Architecture), an approach pioneered by Yann LeCun that shifts learning away from pixel-level reconstruction and toward prediction in latent space.

Instead of attempting to predict every pixel in future frames, BADAS learns compact representations that encode the state of the physical world. Predictions are performed directly in this latent space, allowing the model to ignore irrelevant visual noise and focus on what matters: object motion, interactions, and physical dynamics.

This distinction is critical. Pixel-level prediction entangles appearance with structure. Latent-space prediction separates the two, enabling the model to learn the “rules of the game” rather than memorizing visual patterns.
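The distinction can be sketched in a few lines. This toy example uses random linear maps in NumPy; the names, shapes, and architecture are illustrative assumptions, not Nexar’s actual BADAS or V-JEPA implementation. The point is only where the prediction objective lives: in the latent space, never the pixels.

```python
# Toy sketch of latent-space prediction (JEPA-style). All components
# here are illustrative stand-ins, not the real BADAS architecture.
import numpy as np

rng = np.random.default_rng(0)

D_PIXELS = 64   # flattened "frame" size (toy)
D_LATENT = 8    # compact latent state

# An "encoder" maps a frame to a compact latent representation.
W_enc = rng.normal(size=(D_LATENT, D_PIXELS)) / np.sqrt(D_PIXELS)

def encode(frame: np.ndarray) -> np.ndarray:
    return W_enc @ frame

# The "predictor" operates entirely in latent space: given the latent
# state at time t, predict the latent state at time t+1.
W_pred = rng.normal(size=(D_LATENT, D_LATENT)) / np.sqrt(D_LATENT)

def predict_next_latent(z: np.ndarray) -> np.ndarray:
    return W_pred @ z

# The loss compares predicted and actual *latents*, never pixels, so
# appearance noise in the frames does not dominate the objective.
frame_t = rng.normal(size=D_PIXELS)
frame_t1 = frame_t + 0.01 * rng.normal(size=D_PIXELS)  # toy "next frame"

z_t, z_t1 = encode(frame_t), encode(frame_t1)
z_pred = predict_next_latent(z_t)
latent_loss = float(np.mean((z_pred - z_t1) ** 2))
```

A pixel-level model would instead regress all 64 “pixel” values of `frame_t1` directly, forcing it to spend capacity on appearance; here the objective is an 8-dimensional comparison over state.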

Why real-world data changes everything

World models only become meaningful when they are trained on sufficiently rich and diverse experiences. This is where real-world data becomes the limiting factor.

BADAS is trained on billions of miles of authentic driving data captured continuously across geographies, road types, traffic patterns, and environmental conditions. This exposure enables the model to learn physical regularities and causal relationships that cannot be fully captured in simulation or synthetic datasets.

As a result, BADAS exhibits strong zero-shot generalization to out-of-distribution (OOD) scenarios. Because the model learns how objects behave in the physical world, rather than what they look like in specific contexts, it can identify risk in situations it has never explicitly encountered during training.

This capability is essential for deployed systems. Real-world environments are non-stationary, adversarial, and shaped by human behavior. Models that rely on narrow distributions inevitably degrade when conditions shift. Models grounded in physical structure degrade far more gracefully.

Deployment as a learning engine

Another implication of real-world data is how it changes the role of deployment itself. In traditional pipelines, deployment is treated as the end of training. In reality, it is the beginning of meaningful learning.

Continuous exposure to real environments allows models to adapt to distribution shifts, surface failure modes that cannot be anticipated in advance, and improve through sustained interaction rather than isolated evaluation. Over time, this transforms deployment from a risk to be mitigated into a source of robustness.
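The loop above can be made concrete with a minimal sketch: monitor incoming batches for distribution shift and fold flagged batches back into the reference data. The drift check (a simple mean-shift score) and every threshold here are illustrative assumptions, not Nexar’s pipeline.

```python
# Toy deployment-as-learning loop: detect distribution shift in incoming
# data and queue shifted batches for retraining. Purely illustrative.
import numpy as np

rng = np.random.default_rng(1)

def drift_score(reference: np.ndarray, batch: np.ndarray) -> float:
    """Distance between feature means, scaled by the reference spread."""
    gap = np.linalg.norm(batch.mean(axis=0) - reference.mean(axis=0))
    spread = np.linalg.norm(reference.std(axis=0)) + 1e-8
    return float(gap / spread)

reference = rng.normal(loc=0.0, size=(500, 4))   # "training" distribution
flagged = []

for step in range(5):
    shift = 0.0 if step < 3 else 2.0             # conditions change mid-stream
    batch = rng.normal(loc=shift, size=(100, 4)) # data seen in deployment
    if drift_score(reference, batch) > 0.5:      # surfaced failure mode
        flagged.append(batch)                    # queue for retraining
        reference = np.vstack([reference, batch])  # adapt the reference

n_flagged = len(flagged)  # batches that would feed the next training round
```

The design choice worth noting is that deployment data changes the reference itself, so the system adapts to the shift rather than flagging the new regime forever.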

This is not a replacement for simulation or offline training. It is a complement that makes them effective. Simulation explores what might happen. Real-world data reveals what actually does happen.

Toward physical intelligence

As AI systems move from research into production, success will be determined less by benchmark performance and more by how systems behave across millions of real interactions. Reliability, safety, and trust emerge from consistency under uncertainty, not from average-case accuracy.

Want to dive deeper?

Nexar helps industry leaders turn vision into value. Dive into real-world success stories where our data redefined the road ahead — and see what it can do for you.

