Charging a Tesla Model 3 in Burbank, Calif., April 25, 2019. Elon Musk, Tesla’s chief, insists that its coming “full self-driving computer” will handle any task. (Philip Cheung/The New York Times)
“Sometime next year,” Elon Musk says, “you’ll be able to have the car be autonomous without supervision.”
“None of us have any idea when full self-driving will happen,” counters Gill Pratt, a leading expert in robotics and director of the Toyota Research Institute.
Beyond Musk, who has said twice this year that Tesla could have a million “robotaxis” on the roads next year, few experts in autonomous cars believe that the technology is ready to safely chauffeur occupants in any and all driving conditions. And that’s before the regulatory hurdles, including a quaint-seeming 1971 New York law that requires at least one hand on the wheel.
Instead, for the foreseeable future, there are Advanced Driver Assistance Systems. Think of them as a co-pilot, not the Autopilot of Tesla’s marketing parlance but a wingman that amplifies human skills instead of replacing them.
These building blocks of autonomy are becoming common on even the most affordable cars: electronic stability controls, certainly, but now radar, cameras and other sensors that perceive their surroundings and automatically accelerate, stop, steer, follow lanes or take evasive action. And every major carmaker in America has pledged to make automated emergency braking standard on all new models by September 2022.
Global giants like General Motors, Toyota, Ford and Volkswagen are fully engaged in the self-driving race against the likes of Tesla, Uber and Waymo, a unit of Google’s parent company, and are loath to be outmaneuvered by Silicon Valley disrupters. But traditional automakers are also hitting the brakes, as premature promises run headlong into reality — what Pratt calls the current “trough of disillusionment” in autonomy.
A growing consensus holds that driver-free transport will begin with a trickle, not a flood. Low-speed shuttles at airports or campuses may be the early norm, not Wild West taxi fleets through Times Square. Operational boundaries will be enforced by the electronic leash of geofencing.
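At its core, the geofencing the article describes is a simple containment test: the vehicle's automated mode is permitted only while its position lies inside a polygon of mapped, approved territory. The sketch below is an invented illustration of that idea (the function, coordinates, and fence are all hypothetical, not any carmaker's actual implementation):

```python
# Hypothetical sketch: a geofence is a polygon of (lat, lon) vertices;
# hands-free operation is allowed only while the car is inside it.

def inside_geofence(point, polygon):
    """Ray-casting point-in-polygon test. `point` is (lat, lon);
    `polygon` is a list of (lat, lon) vertices."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does a horizontal ray from `point` cross this edge?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# A toy rectangular fence around a stretch of mapped freeway.
fence = [(40.0, -74.5), (40.0, -74.0), (40.5, -74.0), (40.5, -74.5)]
print(inside_geofence((40.2, -74.2), fence))  # True: hands-free allowed
print(inside_geofence((41.0, -74.2), fence))  # False: driver must take over
```

Production systems use far richer map data than a single polygon, but the gatekeeping logic — in or out of approved territory — is the same.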
Toyota is among the many companies backing that more cautious, two-track approach. Pratt, who ran the vaunted robotics program at the Defense Advanced Research Projects Agency, or DARPA, recalls tossing and turning on the night in 2015 when he signed a contract to lead Toyota’s $1 billion research arm for artificial intelligence and robotics.
Toyota’s cars alone, he figured, log perhaps 1 trillion miles of annual travel around the globe. Making a robocar perform in controlled demonstrations, such as effortlessly avoiding hay bales tossed in front of it, is easy, Pratt says. Making a robocar so foolproof that consumers and automakers can trust it with their lives, including in one-in-a-billion situations, is very different.
“Ever since, we’ve tried to turn down the hype and make people understand how hard this is,” he said.
That’s not preventing companies from trying. With its Chauffeur technology, Toyota fully intends to create autonomous cars for corporate fleets. But its Guardian concept, built on 80% to 90% of the same software, instead blends inputs from man and machine.
GM’s Cadillac is also working to keep humans in the driving loop — even if it requires an occasional slap on the wrist, via the driver-monitoring system developed by an Australian company, Seeing Machines.
Consider Cadillac’s Super Cruise the digital disciplinarian that makes drivers sit straight and keep eyes up front. It is GM’s consumer answer to Tesla’s Autopilot, but its approach illustrates the divergent philosophies of traditional automakers and the Valley rebels.
Many experts say Super Cruise, or a system like it, might have prevented the highly publicized fatal crashes of some Tesla Autopilot users, or Uber’s robotic Volvo that struck and killed an Arizona pedestrian in March last year. In the Uber case, police investigators said the human backup driver had been streaming Hulu before the accident. In some Tesla crashes, driver overconfidence in Autopilot’s abilities, leading to inattention, appears to have played a role.
That kind of carelessness isn’t possible with Super Cruise, as my own testing on Cadillac’s CT6 sedan has shown. The optional system will expand to other Cadillac models next year. Unlike Tesla’s current Autopilot, the system is explicitly designed for hands-free operation, allowing people to drive safely without touching the steering wheel or pedals — but strictly on major highways.
Using laser-based lidar, Detroit-area company Ushr mapped 130,000 miles of freeway in the United States and Canada, in deep detail. That map is stored onboard the car and updated monthly over the air to account for new construction and other road changes. The maps fix the Cadillac’s global position to within 4 inches, backed by onboard cameras, radar and GPS.
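How can a car know its position to within 4 inches when consumer GPS alone is only accurate to a few meters? One textbook way to picture it (an invented illustration, not Ushr’s or GM’s actual method) is inverse-variance weighting: combine a coarse GPS fix with a precise camera-versus-map fix, trusting each in proportion to its confidence.

```python
# Invented illustration: fuse two noisy estimates of the same position
# by inverse-variance weighting, a stand-in for real sensor fusion.
# Units are meters; all numbers below are made up for the example.

def fuse(est_a, var_a, est_b, var_b):
    """Combine two noisy estimates; the fused variance is smaller
    than either input's."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

gps_pos, gps_var = 103.0, 9.0    # GPS: good to a few meters
map_pos, map_var = 100.2, 0.01   # camera vs. HD map: ~0.1 m (about 4 in.)
pos, var = fuse(gps_pos, gps_var, map_pos, map_var)
# The fused estimate lands almost on top of the precise map-relative fix.
```

The precise measurement dominates, which is why a detailed prior map is so valuable: it turns cameras into a high-accuracy position sensor.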
When I drove the Cadillac outside its geofenced borders, self-driving was strictly off-limits. But once on its proper turf, Super Cruise breezed along highways in New Jersey for up to two hours with zero input from me.
It’s an odd sensation at first. But the Cadillac tracked down its lane as if it were on rails — better than the average Uber — so that I quickly gained confidence, eventually leaning back with hands folded behind my head as we zipped between semitrailers.
An infrared camera and lighting pods tracked my face, eyelids and pupils. The system let me look away long enough to, say, fiddle with radio stations. But if I closed my eyes or dared to text, the Caddy flashed escalating warnings. Putting eyes back on the road allowed me to proceed.
Ignore more prompts, and the system shuts down, refusing to work with a distracted driver. If that driver is disabled or asleep, the Caddy can pull over, stop automatically and call for help.
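The escalation described above is, in essence, a small state machine: sustained inattention steps through progressively sterner responses, and looking back at the road resets the count. A minimal sketch, with invented stage names and thresholds (not GM’s or Seeing Machines’ actual logic):

```python
# Hypothetical sketch of escalating driver-monitoring warnings.
# Each `ticks_per_stage` consecutive camera frames of inattention
# escalates one level; renewed attention stands the system down.

WARNINGS = ["steering-wheel light flashes", "audible alert",
            "system disengages", "car pulls over and calls for help"]

class AttentionMonitor:
    def __init__(self, ticks_per_stage=3):
        self.ticks_per_stage = ticks_per_stage
        self.inattentive_ticks = 0

    def update(self, eyes_on_road: bool) -> str:
        if eyes_on_road:
            self.inattentive_ticks = 0   # attention restored: reset
            return "ok"
        self.inattentive_ticks += 1
        stage = min(self.inattentive_ticks // self.ticks_per_stage,
                    len(WARNINGS) - 1)
        return WARNINGS[stage]

monitor = AttentionMonitor(ticks_per_stage=3)
for frame in [True, False, False, False]:
    print(monitor.update(frame))
```

The key design point the article highlights: brief glances away are tolerated, but the system never lets inattention continue indefinitely, and its final fallback assumes the driver may be incapacitated.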
“What I love about Super Cruise is that it’s always watching you,” said Chris Thibodeau, Ushr’s senior vice president.
The system also disengaged when it couldn’t confidently identify lane markings or when it approached construction zones. While those cautious disengagements could be frustrating at times, Super Cruise proved a trusty co-pilot that prevents overconfidence from either party.
“The last thing you want is the machine making a judgment that would be better done by a human,” Thibodeau said.
Experts add that driver monitoring systems would be a boon to safety even in conventional situations. For one, parents could rest assured that teenagers weren’t texting while driving.
Designing skill amplifiers for automobiles, Pratt noted, is infinitely complex, in part because of the crowded and varied roadways that cars must perceive, predict and react to: what he calls the “complex ballet” of driving.
It doesn’t help that human drivers can be the weak dance partner. Roughly 1.3 million people die in global auto accidents every year, according to the World Health Organization. Human error is blamed in 94% of those deaths.
While Pratt is a champion of modern robotics, he said artificial intelligence would still take decades to rival some human abilities.
“We shouldn’t have this replacement mindset to pop out the human and pop in the machine,” he said. “Sometimes the AI is better than the human. Sometimes the human is better than the AI.”
The brain gives people one advantage, in predicting behaviors based on visual cues. Pratt offered the example of a driver cruising through intersections where various pedestrians wait to cross: an older person, a mother holding a child’s hand or a group of teenagers. A human driver will instantly process the scene and know that the teenagers are most likely to jaywalk.
“The AI system, unless it’s fed with hundreds of millions of examples, can’t pick that up, because it doesn’t think. It just pattern-matches,” Pratt said.
In the robot’s corner, it never gets tired or drunk and has 360-degree sensor “vision.”
Musk has dismissed any need for a driver monitoring system or redundant hardware sensors on Teslas, insisting that the company’s coming “full self-driving computer” will handle any task.
That stance is drawing an unusual backlash against Tesla from industry analysts, ranging from skepticism that the company can pull it off to charges that it is cutting corners on safety.
My tests of various semiautonomous systems highlighted what experts call a paradox of self-driving: As the technology gets better, it may initially become more hazardous because drivers are sidelined for longer periods, lulled into a false sense of security.
“It’s a whole new paradigm for the manufacturers: How do I keep drivers engaged, what are the right alerts?” Thibodeau asks.
“People have been trained for years to pay attention to everything on the road. It’s going to be hard to change that behavior and trust the machine.”
For people who envision the government coming for their car keys, Pratt has a message: The rise of the machines is real, but most people will choose personal autonomy over an autonomous car.
“The joy of driving a car is something that is incredibly innate and precious, and we don’t think that’s under threat at all,” he says.
©2019 New York Times News Service