The U.S. National Highway Traffic Safety Administration (NHTSA) has opened a probe into Tesla's self-driving software. The investigation covers approximately 2.4 million vehicles across various models produced between 2016 and 2024 and is a preliminary step that could lead to a recall of those vehicles.
The evaluation follows reports of four crashes linked to Tesla's "Full Self-Driving" (FSD) technology, particularly under conditions of reduced visibility such as fog or bright sun glare. One incident resulted in the death of a pedestrian; another involved injuries. The NHTSA is focused on determining whether Tesla's systems can adequately recognize and react to such challenging visibility conditions and whether similar crashes have occurred.
It is important to note that, despite its name, the FSD system is classified by the NHTSA as a “partial driving automation system.” This clarification highlights the limitations of the technology compared to fully autonomous driving systems. The agency’s announcement came just after Elon Musk showcased the Cybercab, a fully autonomous robotaxi concept, at a recent event in California, where he claimed it would be available by 2027.
Reactions to the Cybercab reveal mixed sentiment among analysts and investors: Tesla's stock declined by 8% following the unveiling, though shares remained relatively stable after the NHTSA's announcement. Unlike Waymo, which relies on sensors such as lidar, Tesla depends primarily on cameras and artificial intelligence for its self-driving features, an approach that is more cost-effective but potentially less reliable in certain conditions.
As this investigation unfolds, it raises important questions about the safety and efficacy of Tesla's self-driving technology, emphasizing the need for continued scrutiny in the rapidly evolving automotive landscape.