Recently, while on my way from home to the office, I was stuck at the traffic lights at ‘Burgerland Roundabout’, and within a fraction of a second of the lights turning green, the driver of the car behind me started honking his horn.
Why? I know green means go … I don’t need a reminder.
Abdesalam Soudi, a US sociolinguist who studies human-computer interaction, recently penned an article after making a sudden left turn into his university campus just as the light turned green – while facing a driverless car.
Instead of jolting forward or honking – as some human drivers would be tempted to do – the car allowed him to go. In this case, the interaction was pleasant. How polite of the car to let him cut it off, he mused.
He started thinking about how self-driving cars will communicate with the human drivers they encounter on the road. Driving can involve a range of social signals and unspoken rules, some of which vary by country – even by region or city. How will driverless cars be able to navigate this complexity? Can they ever be programmed to do so?
We know that driverless cars are equipped with a technology called LIDAR, which creates a 360-degree image of the car’s surroundings. Image sensors can interpret signs, lights and lane markings. A separate radar detects objects, while a computer incorporates all of this information, along with mapping data, to guide the car.
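To make that division of labour concrete, here is a minimal sketch in Python; the type names and the fuse function are made up for illustration, and real driving software is of course far more involved.

```python
# A purely illustrative sketch (hypothetical names, not a real driving
# stack): lidar supplies a 360-degree point cloud, cameras interpret
# signs and lane markings, radar tracks moving objects, and one function
# fuses them with mapping data into a single picture for the planner.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class SensorInputs:
    lidar_points: List[tuple]   # 3D points describing the surroundings
    camera_signs: List[str]     # e.g. ["red_light", "speed_limit_50"]
    camera_lanes: List[str]     # e.g. ["solid_left", "dashed_right"]
    radar_objects: List[Dict]   # e.g. [{"id": 1, "range_m": 12.0}]


@dataclass
class WorldModel:
    obstacles: List[Dict] = field(default_factory=list)
    signs: List[str] = field(default_factory=list)
    lanes: List[str] = field(default_factory=list)


def fuse(sensors: SensorInputs, map_lanes: List[str]) -> WorldModel:
    """Combine the sensor readings with mapping data into one world model."""
    # Lidar points would normally refine obstacle shapes; omitted in this toy.
    return WorldModel(
        obstacles=sensors.radar_objects,          # radar: moving objects
        signs=sensors.camera_signs,               # cameras: signs and lights
        lanes=sensors.camera_lanes or map_lanes,  # fall back to the map
    )
```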
Although ideally autonomous vehicles will be able to ‘talk’ to one another to allow smoother navigation and reduce crashes, this technology is still in its early stages.
But any autonomous vehicle will also need to interact with traditional cars and their drivers, as well as with pedestrians and cyclists, and to cope with unforeseen events like lane closures, emergency vehicles and accidents.
This is where things can get murky.
For example, if you’re driving at night and spot a car without its lights on, you might flash your headlights at the oncoming driver to let them know. But flashing headlights can also mean ‘your high beams are too bright’ or ‘go ahead’ in situations where it’s unclear who has the right of way. To interpret the meaning, a person considers the context: the time of day, the type of road, the weather. But how would an autonomous vehicle react?
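Forced into code, that context-dependence might look something like the toy rules below; the function and its inputs are hypothetical, and the ‘unknown’ branch is precisely where a human falls back on judgement.

```python
# Entirely hypothetical rules: the same headlight flash reads differently
# depending on context, and any fixed rule table will miss cases that a
# human driver resolves intuitively.
def interpret_headlight_flash(is_night: bool, my_lights_on: bool,
                              my_high_beams_on: bool,
                              right_of_way_unclear: bool) -> str:
    if is_night and not my_lights_on:
        return "your_lights_are_off"
    if is_night and my_high_beams_on:
        return "your_high_beams_are_too_bright"
    if right_of_way_unclear:
        return "go_ahead"
    return "unknown"  # the hard case: context alone does not settle it


print(interpret_headlight_flash(True, False, False, False))  # your_lights_are_off
print(interpret_headlight_flash(False, True, False, True))   # go_ahead
```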
There are other forms of communication to help us navigate, ranging from honks and sirens, to hand signals and even bumper stickers.
Of course, humans use all sorts of hand gestures – waving a car ahead of them, indicating that another driver needs to slow down, and even giving the finger when angry, although that could get you into serious legal bother in Bahrain. Sounds can communicate love, anger, arrivals, departures, warnings and more. Drivers can express total disapproval with a hard, extended blast of the horn. And emergency sirens tell other drivers to make way.
But specific meaning can vary by region or country. For example, a few years ago, Public Radio International highlighted the language of honking in Cairo, Egypt, which is ‘spoken’ primarily by men. These honks can have complex constructions; for example, four short honks followed by a long one mean ‘open your eyes’ to warn someone who is not paying attention.
In the US, people tend to honk before going through a short, narrow or curvy tunnel. In Morocco, drivers perform a sequence of honks when passing: once before passing to secure cooperation, again as they pass to signal progress, and lastly after they pass to say thank you. Yet this might be confusing – or even perceived as rude – to drivers in other countries.
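If an engineer tried to capture such conventions directly, the result might resemble the toy lookup below, with regions and patterns paraphrased from the examples above; its brittleness – everything unlisted comes back ‘unknown’ – is rather the point.

```python
# Illustrative only: the same honk pattern can carry a meaning in one
# place and none at all in another.
HONK_MEANINGS = {
    ("cairo", "short,short,short,short,long"): "open your eyes",
    ("morocco", "single_before_passing"): "requesting cooperation to pass",
    ("morocco", "single_while_passing"): "signalling progress",
    ("morocco", "single_after_passing"): "thank you",
    ("us", "single_before_tunnel"): "warning: entering a narrow tunnel",
}


def interpret_honk(region: str, pattern: str) -> str:
    return HONK_MEANINGS.get((region.lower(), pattern), "unknown")


print(interpret_honk("Morocco", "single_after_passing"))  # thank you
print(interpret_honk("US", "single_after_passing"))       # unknown
```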
Written communication also plays a role between cars and drivers. For example, signs such as ‘Baby on Board’ are supposed to encourage the drivers following these vehicles to be even more careful.
Vehicles can be taught to ‘read’ road signs, and thus presumably can be taught to recognise common warnings on bumpers.
Yet navigating construction sites or accident scenes may require following directions from a human in a way that cannot be programmed. This creates a huge opportunity for error: because hand signals vary widely from region to region (and even person to person), autonomous cars could fail to recognise a signal to go or, more catastrophically, could mistakenly follow a hand gesture into a barrier or another car.
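One plausible – though entirely hypothetical – safeguard is a confidence threshold: if the car cannot read a gesture with high confidence, it stops and waits rather than guessing, as in the sketch below.

```python
# Hypothetical safety fallback, not a real system: ambiguous or
# unrecognised gestures default to the safest action rather than a guess.
CONFIDENCE_THRESHOLD = 0.9


def respond_to_hand_signal(label: str, confidence: float) -> str:
    if confidence < CONFIDENCE_THRESHOLD or label not in ("proceed", "stop"):
        return "stop_and_wait"
    return "proceed_slowly" if label == "proceed" else "stop_and_wait"


print(respond_to_hand_signal("proceed", 0.95))  # proceed_slowly
print(respond_to_hand_signal("proceed", 0.55))  # stop_and_wait
```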
How much knowledge about our societal and linguistic values will be built into the system? How can driverless cars learn to interpret hand and auditory signals?
A 2015 article in Robotics Trends described how a bike and a Google car got stuck in a standoff when the car misread signals from the biker.
Cities (and countries) possess a variety of sociolinguistic cues. It remains to be seen if the engineers working on driverless cars will be able to programme these subtle – but important – differences into these vehicles.