How driverless cars manage amber lights

Published Mar 16, 2016


Washington - Google's self-driving car knows when to run an amber traffic light.

But unlike regular humans, it doesn't have to guess whether it’ll beat the countdown. Instead, it turns what for many of us is a split-second decision based on experience and guts into a heavily calculated dance of probability, speed and trajectory.

Roughly 70 metres before it reaches an intersection, the car scans the light with its cameras and, drawing on the thousands of other data points it's tracking in its surroundings, makes its call.
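
Google hasn't published the maths behind that call, but at its core it is a kinematics problem: can the car brake to a comfortable stop before the line, and if not, will it clear the line before the light turns red? A minimal sketch of that kind of check - with illustrative braking figures and a made-up function, not Google's own - might look like this:

```python
# Hypothetical sketch: Google has not disclosed its amber-light logic.
# This only illustrates the basic stop-or-go kinematics the article describes.

COMFORT_DECEL = 3.0   # m/s^2, assumed comfortable braking rate
REACTION_TIME = 0.1   # s, assumed sensing/actuation latency

def amber_decision(speed: float, dist_to_stop_line: float,
                   amber_time_left: float) -> str:
    """Return 'stop' or 'proceed' for an amber light.

    speed             -- current speed in m/s
    dist_to_stop_line -- metres from the front bumper to the stop line
    amber_time_left   -- estimated seconds before the light turns red
    """
    # Distance needed to brake to a halt at a comfortable rate,
    # including the small distance covered during reaction latency.
    stopping_dist = speed * REACTION_TIME + speed ** 2 / (2 * COMFORT_DECEL)

    # Time needed to reach the stop line at the current speed.
    time_to_line = dist_to_stop_line / speed if speed > 0 else float("inf")

    if stopping_dist <= dist_to_stop_line:
        return "stop"      # can halt comfortably before the line
    if time_to_line <= amber_time_left:
        return "proceed"   # will cross the line before the light turns red
    return "stop"          # dilemma zone: brake harder rather than run a red

# e.g. 15 m/s (about 54 km/h), 70 m from the line, roughly 3 s of amber left
print(amber_decision(15.0, 70.0, 3.0))  # -> stop
```

The real system weighs far more than this - pedestrians, cross traffic, the cars behind it - but the stop-or-go core is essentially that trade-off.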


I saw some of this stuff first-hand on Sunday during SXSW, when I met a team from Google - sorry, Alphabet - for a private demo in north Austin. Here's what it's really like to use the technology that's poised to reshape how we commute, run our businesses and even design our cities.

TAKING A RIDE

The first thing you notice when you get into one of Google's modified hybrid Lexuses is a long, black bar sitting on the dashboard. This displays the time - in the iconic Roboto typeface you see on Android smartphones - as well as a graphical representation of the car and its surroundings.

This isn't your average GPS navigation screen, though. This one rendered buildings and obstacles as tiny, shimmering white dots, as if they were cobwebs that had clumped together to form little rectangles and squares. And they constantly refreshed based on what the car's sensors saw. As we began moving, new readings added new dots to the screen, while others faded away. The overall impression was one of a constantly undulating ocean of data - which, of course, it was.

We paused at the edge of the parking lot, and the Googler behind the wheel, Gavino Nastor, warned me he was about to turn on autonomous mode.

“You'll just have to trust that my feet are off the pedals, okay?” said Nastor. The whimsicality of his corporate title - Road Warrior - seemed to belie a deadly seriousness about what Google was trying to do to the car industry.

I was told I couldn't take any pictures or video from the inside, so I'll have to make do with written descriptions.

To the left of the steering column, there's a button to enable the car's autonomous drive, and a button that Google calls its “speed rocker” - a vertical bar that lets the driver manually accelerate the car to match the speed of traffic while autonomous mode is turned on. Push up to go faster; push down to go slower. On the steering wheel, there's an “on” button and an “off” button. Nastor pressed the “on” button, and away we went.

HASN’T MASTERED THE “ART” OF DRIVING

As we glided into a left turn and onto West Anderson Lane, a bustling four-lane thoroughfare, one thing became immediately clear: The car brilliantly grasps the basic mechanics of driving, but it hasn't yet mastered the art of it. It's got a bit of a lead foot, so your head gets thrown back a bit as it accelerates like a teenager with a new licence. Same with the braking: Where experienced drivers know to lift their foot off the pedal slightly at the end to avoid giving themselves whiplash, Google's car brakes rather aggressively.

Google expects these quirks to be ironed out in the long run. Part of the reason the car accelerates so quickly is that it tends to wait a second after a light turns green to let any red-light runners or crossing pedestrians clear out. Safety first, human-like later, Nastor said.

You may notice that the car I rode in isn't the pod-like prototype car that Google has also been working on. (Nastor said other testers have likened the prototype to toasters, koala bears and marshmallows.) But although Google's long-term vision is to make the steering wheel obsolete - it's safer to do so, company officials have argued, because humans are fallible - realistically there may need to be some intermediate step that eases people into that future. And if so, then the modified Lexus would represent a reasonable first stab at it.

Other than that slight choppiness, the rest of the ride went remarkably smoothly. We tooled around small residential streets - which represent some of the toughest conditions for Google's computer because of all the random events that can occur in those zones - and the Lexus glided around parked cars, yielded for pedestrians and even did the equivalent of nodding to oncoming cyclists. They couldn't see that, of course, but a laptop hooked up to the car showed that it had registered them as two green squares on the other side of the street.

READING THE ROAD IN COLOUR

The colours that you sometimes see in those Google screencaptures really mean something. Pink indicates objects that are being detected by the car's lasers; blue means objects that are being picked up by radar; and green means that the lasers and the radar readings match, or “cross-validate.”
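  
Google didn't describe how that mapping is implemented, but as a rough illustration - the colours are Google's, everything else below is invented for the example - the logic amounts to a simple cross-check between the two sensors:

```python
from typing import Optional

# Hypothetical sketch of the colour scheme described above. Only the colour
# meanings come from the article; the function and flags are illustrative.

def display_colour(seen_by_laser: bool, seen_by_radar: bool) -> Optional[str]:
    """Map which sensors detected an object to its on-screen colour."""
    if seen_by_laser and seen_by_radar:
        return "green"  # laser and radar readings cross-validate
    if seen_by_laser:
        return "pink"   # laser-only detection
    if seen_by_radar:
        return "blue"   # radar-only detection
    return None         # nothing detected, nothing drawn

print(display_colour(True, True))   # -> green
print(display_colour(False, True))  # -> blue
```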

Even without lane markers in the residential area, the Google car was able to stay centred thanks to all the other readings it was gathering from the environment. And none of the data it was using had been drawn from Google Maps or Google Street View. Google Maps on its own isn't enough: the car has to physically drive a planned route a couple of times before it's ready to handle it autonomously. And Street View images don't provide enough depth - they offer only a 2-D image, whereas the car needs a constant 3-D picture.

Jennifer Haroon, head of business for Google's self-driving cars, explained that there's another reason the cars don't talk to Google Maps or the rest of the Internet.

“That was a security decision,” Haroon said. The less insulated a car's driving systems are, the more exposed it is to potential hackers. And with these cars making life-or-death calls on behalf of human beings, that's a dangerous thing.

As we pulled back out onto the main road, the engineers explained that Google's car comes with two routing programs - a “long-term router” that acts much like the GPS device in a normal car, and a short-term router that decides when to speed up, slow down, turn and execute other manoeuvres. Most of the magic in a driverless car happens in the short-term router, which recalculates its predictions 10 times per second. If the short-term router determines that there's an accident ahead or that it's more efficient to take a different route, the long-term router adapts.

“It's always going to follow the short-term router's decisions,” the engineers said.
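
The engineers didn't go into how the two routers are built, but the division of labour they described might be sketched roughly like this - every name and data structure below is invented for illustration:

```python
# Minimal sketch of the two-router split described above. Only the split
# itself (a long-term route planner plus a fast short-term controller whose
# decisions always win) comes from the article; the rest is assumed.

class LongTermRouter:
    """Plans the overall route, much like an ordinary GPS navigator."""

    def plan(self, origin: str, destination: str) -> list:
        return [origin, "West Anderson Lane", destination]  # placeholder route

    def replan(self, position: str, destination: str) -> list:
        # Called when the short-term router reports a blockage or a better option.
        return self.plan(position, destination)


class ShortTermRouter:
    """Makes the moment-to-moment calls: speed up, slow down, turn, reroute."""

    def decide(self, sensors: dict) -> dict:
        if sensors.get("accident_ahead"):
            return {"action": "slow_down", "request_reroute": True}
        return {"action": "maintain_speed", "request_reroute": False}


def drive_tick(long_term, short_term, sensors, position, destination, route):
    """One control cycle; in the real car this runs roughly 10 times per second."""
    decision = short_term.decide(sensors)
    if decision["request_reroute"]:
        route = long_term.replan(position, destination)  # long-term router adapts
    return decision["action"], route                     # short-term decision wins


action, route = drive_tick(LongTermRouter(), ShortTermRouter(),
                           {"accident_ahead": True},
                           "north Austin", "downtown",
                           ["north Austin", "West Anderson Lane", "downtown"])
print(action, route)
```

The point of a split like this is responsiveness: the slower route planner only has to catch up when the fast loop asks it to, while the time-critical decisions stay in the loop that runs every tenth of a second.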

BUILDING TRUST

Our ride took a little less than 15 minutes. But even in that span of time, I realized I'd spent more of my attention asking questions than worrying whether the car was going to roll out into an intersection without looking. My eyes had mostly been glued to the laptop displaying technical data about the car, because what I was feeling in my seat, and seeing out the window, seemed completely normal.

Within minutes, I was already trusting the car - partly, of course, thanks to the fact that I had Google engineers with me who seemed pretty relaxed themselves. The average consumer won't have the benefit of someone holding his or her hand when driverless cars reach the market. But the best technology leaves you wanting more - and that's exactly what this did.

The Washington Post
