Nvidia and Uber have further solidified a date for a milestone the AV industry has been striving toward for years: large-scale robotaxis in regular service. The two companies plan to begin ramping a global autonomous fleet in 2027, growing toward 100,000 vehicles that will eventually roll directly onto Uber’s ride-hailing network. The backbone of the autonomous solution is Nvidia’s DRIVE AGX Hyperion 10 platform running the company’s DRIVE AV software stack, paired with a joint “AI data factory” built on Nvidia’s Cosmos development platform that will train foundation AI models on “trillions” of real-world and synthetic driving miles.
“Robotaxis mark the beginning of a global transformation in mobility — making transportation safer, cleaner, and more efficient,” said Nvidia founder and CEO Jensen Huang. “Together with Uber, we’re creating a framework for the entire industry to deploy autonomous fleets at scale, powered by Nvidia AI infrastructure. What was once science fiction is fast becoming an everyday reality.”
“Nvidia is the backbone of the AI era, and is now fully harnessing that innovation to unleash L4 autonomy at enormous scale, while making it easier for Nvidia-empowered AVs to be deployed on Uber,” said Dara Khosrowshahi, CEO of Uber.
Nvidia’s Platform Approach And How It Differentiates
Nvidia’s DRIVE AGX Hyperion 10 isn’t a single vehicle program; it’s a reference compute and sensor architecture designed to enable virtually any vehicle with Level-4 autonomy (a vehicle that drives itself within defined conditions, such as specific cities or routes, without human input or oversight, and in some cases without a human occupant). That distinction is critical, because it gives automakers and autonomy developers a common, validated hardware platform rather than bespoke, one-off stacks that have to be cobbled together. The kit includes a qualified sensor suite (cameras, radar, lidar, ultrasonics) and runs on dual Nvidia DRIVE AGX Thor computers built on the company’s Blackwell GPU architecture, each delivering more than 1,000 TOPS of INT8 compute throughput (roughly double that at FP4) and sized to run modern transformer and multimodal models. The goal is straightforward but lofty: predictable safety envelopes, faster integration, and over-the-air upgradeability across mixed fleets.
In other words, Nvidia is trying to enable autonomous vehicles to behave a bit like enterprise IT: standardize the platform, let partners differentiate on software and service, and update/upgrade continuously.
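To make the platform idea concrete, here is a minimal, purely illustrative sketch of how such a reference kit might be described in code. The class names, sensor counts, and structure are assumptions for illustration only, drawn from the figures cited above (dual DRIVE AGX Thor computers at roughly 1,000 INT8 TOPS each, and a camera/radar/lidar/ultrasonic sensor suite); this is not Nvidia’s SDK or an official configuration format.

```python
from dataclasses import dataclass, field

# Illustrative only: a hypothetical description of a Level-4 reference
# vehicle kit, loosely modeled on the figures Nvidia cites publicly.
@dataclass
class ComputeModule:
    name: str
    int8_tops: int  # dense INT8 throughput, in TOPS

    @property
    def fp4_tops(self) -> int:
        # The article notes FP4 throughput is roughly 2x the INT8 figure.
        return self.int8_tops * 2

@dataclass
class ReferenceKit:
    compute: list[ComputeModule]
    sensors: dict[str, int] = field(default_factory=dict)  # modality -> count

    def total_int8_tops(self) -> int:
        return sum(m.int8_tops for m in self.compute)

# Hypothetical instance; sensor counts are placeholders, not a real spec.
hyperion_like = ReferenceKit(
    compute=[ComputeModule("drive-agx-thor-a", 1000),
             ComputeModule("drive-agx-thor-b", 1000)],
    sensors={"camera": 12, "radar": 9, "lidar": 1, "ultrasonic": 12},
)

print(hyperion_like.total_int8_tops())  # ~2,000 INT8 TOPS across the pair
```

The point of standardizing something like this is that an automaker or AV software team can target one validated envelope of compute and sensing, rather than re-qualifying a custom stack per vehicle program.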
Turning Massive Driving Data Into Smarter Autonomous Systems
Real autonomy progress is increasingly a data problem. Uber’s network provides enormous coverage of edge cases and driving data, while Nvidia’s Cosmos platform is being tuned to turn that firehose of data into foundation models tailored for driving. Vision-Language-Action models that mix perception, language reasoning, and action generation will be critical for unpredictable, human-centric scenarios where rules-based approaches break down. Nvidia also cites a very large multimodal dataset (camera, radar, lidar) spanning 1,700 hours across 25 countries to support training, post-training, and validation of these models. The takeaway here is that autonomy at scale isn’t just about vehicle count; it’s also about the diversity of data required to feed the beast.
From a market perspective, this blends Nvidia’s cloud training dominance with in-vehicle edge AI, effectively turning the car into a specialized inference node that keeps improving as the fleet learns.
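The loop described above can be sketched in a few lines of toy code: vehicles run inference at the edge, flag low-confidence scenes, the cloud retrains on the aggregated data, and an updated model is pushed back over the air. Every name, threshold, and function here is hypothetical; this is a conceptual sketch of a fleet-learning loop, not Nvidia’s or Uber’s actual pipeline.

```python
import random

class EdgeVehicle:
    """Toy stand-in for a car running an on-vehicle driving model."""
    def __init__(self, model_version: int = 1):
        self.model_version = model_version
        self.flagged_scenes: list[dict] = []

    def drive_step(self, scene: dict) -> str:
        # Stand-in for on-vehicle inference (e.g., a VLA-style model).
        confidence = random.random()
        if confidence < 0.2:                 # uncertain: log as an edge case
            self.flagged_scenes.append(scene)
            return "yield-to-remote-assist"
        return "proceed"

def cloud_training_round(fleet: list[EdgeVehicle]) -> int:
    # Aggregate flagged scenes across the fleet, "train", bump the version.
    corpus = [s for v in fleet for s in v.flagged_scenes]
    new_version = max(v.model_version for v in fleet) + 1
    print(f"retrained on {len(corpus)} flagged scenes -> model v{new_version}")
    return new_version

fleet = [EdgeVehicle() for _ in range(3)]
for step in range(100):
    for vehicle in fleet:
        vehicle.drive_step({"step": step})

new_version = cloud_training_round(fleet)
for vehicle in fleet:                        # over-the-air update
    vehicle.model_version, vehicle.flagged_scenes = new_version, []
```

The interesting part commercially is the flywheel: the more the fleet drives, the more long-tail scenes land in the training corpus, and the better the next over-the-air model becomes.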
AVs Take A Village And Jensen Huang Is Playing 4D Chess
This announcement is not just about Uber and Nvidia but a host of other key partners as well. Stellantis, Lucid, and Mercedes-Benz are named collaborators exploring Level-4-ready vehicles compatible with Hyperion 10 for passenger mobility. On the freight side, Aurora, Volvo Autonomous Solutions, and Waabi are building toward Level-4 trucking solutions based on DRIVE, extending Nvidia’s silicon and software roadmap from ride-hailing to long-haul fleets. Additional autonomy players Avride, May Mobility, Momenta, Nuro, Pony.ai, Wayve, and WeRide will round out the software ecosystem. The density of this roster matters because it creates shared incentives around tooling and safety processes across a common platform.
A key question is how the Nvidia–Uber collaboration differs from the ongoing efforts by Tesla and Waymo. The distinction lies in data scale and ecosystem design. Tesla gathers extensive driving data from its customer vehicles, and Waymo continues to refine its purpose-built robotaxi fleets operating in San Francisco, Los Angeles, Phoenix, and Austin. Uber, however, brings massive ride-hail volume—billions of trips each quarter—into the equation. While only a subset of that fleet is currently equipped with AV-grade sensor suites, the scale of operations gives Uber’s network significant potential as a data and deployment platform.
By coupling that operational density with Nvidia’s Cosmos AI training factory and the standardized Hyperion 10 vehicle architecture, the partnership links large-scale mobility data with compute infrastructure built for learning. In effect, Nvidia is not simply chasing autonomy; it’s orchestrating it, turning Uber’s network into a global proving ground. It’s a calculated, high-leverage move, the kind of 4D chess Jensen Huang is known for.
Safety: Setting The Bar, Not Just Clearing It
The most consequential, though perhaps less flashy, piece of the puzzle here is Nvidia Halos, described as a cloud-to-car AI safety system, with an AI Systems Inspection Lab accredited by the ANSI National Accreditation Board and a new Halos Certified program. If Halos gains traction, it could become industry shorthand for a trustworthy physical AI option, much like established certifications in other safety-critical domains. For municipalities and insurers, a recognizable certification path reduces ambiguity and speeds approvals.
Regardless, a few reality checks are in order. Level-4 deployment is still gated by a regulatory patchwork of differing laws and standards, municipal permitting, operational design domains (weather, roadworks, crowded curb space), and the cost curves of sensors and compute. Even with a standardized kit, commercial uptime and rider experience will determine viability, while handovers, remote assistance, and long-tail edge cases must be handled invisibly to the passenger. These issues are all solvable, but they’re operational problems as much as AI problems.
The Reality Of Robotaxis In 2027 And The Bottom Line
Two factors make Nvidia’s 2027 robotaxi timeline plausible for me: a common reference platform that lowers integration friction for automakers and AV software teams, and Uber’s demand aggregation, which allows meaningful utilization in key cities without nationwide coverage on day one. The playbook here would be to start in permissive markets, feed a joint AI-data factory on Nvidia’s Cosmos to tighten the loop between real fleet data and models, and then expand operational design domains as safety cases are hardened. It mirrors how other autonomy categories, especially warehouse automation, have progressed from pilots to production, even if the pace varies by market. Of course, past robotaxi programs in the U.S. have faced regulatory, safety and cost headwinds, so hitting mass-market scale will still demand serious execution on multiple fronts.
In any event, this partnership once again underscores Nvidia’s efforts to operate as a backbone for autonomy, giving Uber a credible, standardized path to scale on and giving Nvidia the data and access it needs for training. If the companies can stick the landing on safety with a high-quality rider experience, 2027 won’t just be another pilot year; it could be the moment robotaxis become a real mobility option in select cities, with freight services following behind. Deployment pace will be a local dynamic, not a global one. That said, the industry now has an initial blueprint that aligns incentives and key player roles, and that’s usually how real scale starts.

