Tesla recently released crash data on their Supervised FSD system, reporting the average number of miles driven per collision for cars in different configurations and on different road classes. This replaces the notorious Autopilot data Tesla released for several years, which was widely criticized as highly misleading. Tesla has corrected a number of the errors in their methodology, and now claims that cars driving with FSD engaged have a safety record almost 1.5x better globally on city streets than similar Teslas not using FSD, and 4 times better than much older Teslas that lacked ADAS collision warnings and other active safety tools. In North America, and across all road classes, the record is almost 2x better.
Tesla’s old reports made the false claim that driving with Tesla Autopilot was 10 times safer than driving an average car. They compared Tesla crashes, driven primarily on freeways and counted only when an airbag deployed, with general police-reported crashes for the whole population. That comparison would be expected to come out around 5-6 times better even with no safety benefit at all, since most crashes don’t trigger an airbag, there are far fewer crashes per mile on freeways, and Tesla drivers come from a safer driving demographic. As such, the old comparison used misleading math to make Tesla Autopilot look much better than it really is. I was the first to point this out, many years ago.
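To make that arithmetic concrete, here is a minimal sketch of how such adjustment factors compound. The individual factor values below are illustrative assumptions, not Tesla’s or anyone’s published figures; only their rough 5-6x product reflects the reasoning above.

```python
# Illustrative sketch: why a raw 10x claim can hide a much smaller real advantage.
# All individual factors are hypothetical round numbers, chosen so their
# product lands in the ~5-6x range argued above.

claimed_advantage = 10.0     # old Tesla claim: 10x fewer crashes per mile

airbag_only_factor = 3.0     # assume ~1 in 3 police-reported crashes deploys an airbag
freeway_factor = 1.5         # assume freeways see ~1.5x fewer crashes per mile
demographic_factor = 1.2     # assume Tesla owners skew toward safer drivers

# Gap you'd expect between the two populations with NO safety benefit at all:
expected_gap = airbag_only_factor * freeway_factor * demographic_factor  # ~5.4x

# What's left of the claim after adjusting for the mismatched comparison:
residual_advantage = claimed_advantage / expected_gap  # ~1.9x
print(f"expected gap: {expected_gap:.1f}x, residual: {residual_advantage:.1f}x")
```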
Tesla has finally changed their approach. They have switched from Autopilot (which is >90% driven on freeways) to FSD, and they break down the numbers for highways vs. non-highways, and for North America vs. global results. This level of detail makes it easier to do “apples to apples” comparisons, which are vital for this sort of research.
Tesla also breaks down the data into “major collisions,” where an airbag or pyrotechnic seatbelt pretensioner deploys, and minor ones where they don’t. For non-Tesla cars they don’t have that data, so they count crashes where a vehicle was towed as major, though that includes many crashes with no airbag deployment. As such, I believe the “U.S. Average” numbers Tesla cites are not directly comparable to the numbers for Teslas, but the comparison is much closer than the old one, so it is far less misleading.
That’s fine, because the main question we want to examine is “Does driving while supervising FSD make you safer or less safe?” Both are possible. Supervising a system like FSD, in the ideal case, gives you two sets of eyes on the road. FSD is looking for and avoiding hazards, and so is the attentive human driver. If both do their job well, the pair can be safer than a human alone.
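A toy probability sketch shows why two independent sets of eyes help so much. The miss rates here are hypothetical round numbers, not measured values for FSD or for human drivers:

```python
# Toy model of the second-set-of-eyes effect, using hypothetical per-hazard miss rates.
p_human_miss = 0.001   # assumed: attentive human misses 1 in 1,000 hazards
p_fsd_miss = 0.010     # assumed: FSD misses 1 in 100 hazards

# If failures are fully independent, a crash needs BOTH to miss the hazard:
p_both_miss = p_human_miss * p_fsd_miss   # 1e-5: far better than either alone
print(f"combined miss rate (if independent): {p_both_miss:.0e}")

# Caveat: complacency correlates the failures. A driver who stops watching
# misses hazards exactly when FSD does, and the multiplication no longer holds.
```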
On the other hand, tools like FSD and Autopilot are known to breed complacency. Some drivers famously ignore the road deliberately, or even install “defeat devices” on their cars to evade the driver monitoring meant to discourage that. Those drivers can be more dangerous to themselves and others, since FSD is not yet able to match human driving ability on its own, unsupervised. (Even Tesla admits that, and uses safety drivers in their robotaxi pilot.) Even drivers who don’t deliberately ignore the road may grow more complacent.
Indeed, there will be a mix of drivers. Some will be diligent and become safer with the system; others may not. Even if the non-diligent drivers are less safe, there is an ethical question as to whether we should forbid the diligent drivers from getting superior safety in order to stop the bad apples.
As presented, Tesla’s data suggests that on balance, FSD is working and increasing safety on average. That presumes Tesla has not continued its pattern of repainting the data to make itself look better. Their track record there is so poor that strong skepticism is appropriate. In the weeks to come there will be more analysis of their methods to refine these conclusions. Indeed, Tesla’s report highlights the mostly invalid comparison with their “U.S. Average” estimate, suggesting they have not given up their efforts to look better than they are.
Tesla has not, as yet, provided this data for past quarters, which would let us see trend lines as FSD has moved through new versions. They promise to release new data every quarter. Elon Musk, in his push to release the robotaxi version of FSD, has said their goal is to drive much better than humans, and that the system will work unsupervised before the end of this year. These charts suggest FSD is doing well, but that it’s not close to ready for unsupervised release, since FSD+human is only doing modestly better than a human alone on city streets. For unsupervised operation, they need FSD alone to do 2-3x better than a human alone.
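To put rough numbers on that gap, here is a sketch in miles-per-collision terms, with the human baseline normalized to 1. The exact figures are illustrative; only the 1.5x and 2-3x ratios come from the discussion above:

```python
# Readiness arithmetic with the human baseline normalized to 1.0.
human_alone = 1.0                        # baseline: human miles per collision
fsd_plus_human = 1.5 * human_alone       # reported: supervised FSD ~1.5x better (city streets)
unsupervised_target = 2.5 * human_alone  # goal: FSD alone ~2-3x better than a human

# The 1.5x figure measures human+FSD together; FSD alone, with no human
# catching its mistakes, would score lower. So the true gap exceeds this:
minimum_gap = unsupervised_target / fsd_plus_human   # ~1.7x at minimum
print(f"FSD must improve at least {minimum_gap:.1f}x to hit the target")
```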
The past history of robocars suggests that even a very poor self-driving system combined with a human safety driver is usually quite safe. Self-driving teams have recorded few at-fault crashes with safety drivers aboard, even when the systems needed interventions every few miles. (Tesla’s data includes all crashes, not just at-fault ones.) Good self-driving systems actually prevent not-at-fault crashes: in a video on the page for this data, Tesla shows FSD avoiding a car that runs a red light at a cross street, along with several other crashes that would have been another driver’s fault. This is a great example of the second-set-of-eyes effect.
Stay tuned for more analysis, and for trend lines after a few quarters of this data (or sooner, if Tesla releases the old data).
