It was still dark on a Friday morning in November when a California Highway Patrol officer started following a Tesla Model S on Route 101 between San Francisco International Airport and Palo Alto. The gray sedan was going 70 miles per hour with a turn signal blinking, cruising past multiple exits. The officer pulled up alongside and saw the driver slumped over, head down. Lights and sirens failed to rouse him. The car, the officer guessed, was driving itself under the control of what Tesla calls Autopilot.
Every Tesla is equipped with hardware that the automaker says will eventually enable its vehicles to drive themselves on entire trips, from parking space to parking space, with no input from the driver. For now, the company limits its cars to a system that can guide them from on-ramp to off-ramp on highways. The system is smart enough, it seems, to keep the Tesla driving safely even with an apparently incapacitated driver, but not yet smart enough to obey police sirens and pull over.
This case appears to be the first time law enforcement has stopped a vehicle that was traveling on an open road under the control of an automated system. There was no way for police to commandeer the driving software, so they improvised a way to manipulate Tesla’s safety programming. A highway patrol car blocked traffic from behind while the officer following the Tesla pulled in front and slowed down until both cars came to a stop.
The incident encapsulates both the high hopes and deep anxieties of the driverless future. The Tesla’s driver, a 45-year-old Los Altos man, failed a field sobriety test, according to the police, and has been charged with driving under the influence; a trial is scheduled for May. The car, which seems to have navigated about 10 miles of nighttime highway driving without the aid of a human, may well have saved a drunk driver from harming himself or others. Neither Tesla nor the police, however, are ready for people to begin relying on the technology in this way.