When driverless cars and public policy collide

Public safety is expected to dominate the discussion regarding self-driving vehicles on public roads. Created by Fanjianhua - Freepik.com

A recent deadly accident in Arizona involving a self-driving Uber vehicle, and the company’s subsequent decision to halt their use on public roads, indicate the level of public concern over the introduction of driverless technology.

Even though police say the driver was not at fault and the pedestrian killed in the accident was outside of any crosswalk or sidewalk, the psychological and emotional response to such incidents may nevertheless influence state and local policy nationally and here in Washington state.

HB 2970, a proposal passed this legislative session, creates a state work group to examine the use of self-driving vehicles on public roads and make recommendations to state lawmakers. Although the legislation has yet to be signed by the governor, many of the group’s eventual recommendations may already be shaped by events that unfolded before its first meeting.

Uber’s swift response to the Arizona accident, despite the driver not being at fault, seems to affirm observations made by Washington Policy Center Vice President for Research Paul Guppy that public safety concerns, rather than hard data or facts, will dominate the conversation.

He told Lens that while traffic fatalities have gone down in recent years despite an increase in miles driven, “what’s interesting about the driverless vehicle is it introduces this element that might make the roads more dangerous.”

That may run counter to what many advocates claim: driverless cars can react much faster than humans and eliminate the driver errors behind roughly 90 percent of car crashes in the U.S. It’s an argument that Guppy says is “a completely valid, rational point.”

To reduce those crashes, organizations such as the AXA Research Fund have looked at ways for people to retrain their brains to make better driving decisions.

However, not all are convinced self-driving cars are a silver bullet.

“So far, most comparisons between human drivers and automated vehicles have been at best uneven, and at worst, unfair,” writes Peter Hancock in an article at The Conversation. Hancock is an engineering professor at the University of Central Florida.

He acknowledges that if driver errors were resolved nationally, in two years it would “save as many people as the country lost in all of the Vietnam War. But to me, as a human factors researcher, that’s not enough information to properly evaluate whether automation may actually be better than humans at not crashing. Their respective crash rates can only be determined by also knowing how many non-collisions happen. For human drivers, is it one collision per billion chances to crash, or one in a trillion?

“Assessing the rate at which things do not happen is extremely difficult,” he writes further. “To determine whether automated vehicles are safer than humans, researchers will need to establish a non-collision rate for both humans and these emerging driverless vehicles.”
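
Hancock’s point about exposure can be illustrated with a rough sketch. The numbers below are hypothetical placeholders, not measured rates; the only point is that raw crash counts cannot be compared without a denominator of non-crash opportunities.

```python
# Illustrative only: crash counts mean little without an exposure
# (non-collision) baseline. All figures are hypothetical placeholders.

def crash_rate(crashes, exposure):
    """Crashes per opportunity to crash (e.g., per mile driven)."""
    return crashes / exposure

# Hypothetical: humans log vastly more exposure, so more raw crashes.
human_crashes, human_miles = 6_000_000, 3_200_000_000_000
av_crashes, av_miles = 40, 20_000_000

print(f"Human rate: {crash_rate(human_crashes, human_miles):.2e} per mile")
print(f"AV rate:    {crash_rate(av_crashes, av_miles):.2e} per mile")
# With these placeholder numbers the AV rate is actually higher, even
# though its absolute crash count is tiny -- which is why a non-collision
# baseline is needed before declaring either side safer.
```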

In the meantime, policymakers can look to current tort law to decide who is legally responsible when self-driving vehicles are – unlike in the Arizona collision – found at fault in an accident, Guppy said.

“It’s exactly the same thing that happened when cars first arrived in 1912,” he said. “A car hit a horse and killed the horse, so what is the responsibility of the driver to the horse’s owner? We’re not going to put the machine on trial, so therefore that’s why I think in the legal world, it’s going to come down to the owner of the car.”

That, and other limitations set by state and local governments, could confine autonomous technology to private property for use by the agricultural industry or for limited transportation systems on college campuses.

“In a more controlled environment, they might be used a lot because it solves the problem of not endangering the general public,” Guppy said. The laws could be similar to how society has responded to smartphone use while driving. “It’s a new problem, and the system is responding to that. What it’s not doing is banning the phones, because that’s impossible.”

There’s also the question of how the vehicles are tested. Car companies such as Tesla use what is called “shadow driving,” in which the car takes no action itself but records when it would have acted and what action it would have taken. If an accident occurs, that data can be used to improve the technology.
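
As a rough illustration of how shadow driving works in principle, the sketch below logs the moments where a hypothetical shadow system disagrees with the human driver. The class and field names are invented for illustration and do not reflect any manufacturer’s actual software.

```python
# Conceptual sketch of "shadow driving": the autonomy stack plans an
# action but never executes it; disagreements with the human driver are
# logged for later analysis. Names are illustrative, not a real API.
from dataclasses import dataclass

@dataclass
class Frame:
    timestamp: float
    human_action: str    # what the driver actually did, e.g. "brake"
    shadow_action: str   # what the system says it would have done

def disagreements(frames):
    """Return the frames where the shadow system and the human differ."""
    return [f for f in frames if f.shadow_action != f.human_action]

log = [
    Frame(0.0, "cruise", "cruise"),
    Frame(0.1, "cruise", "brake"),   # shadow system would have braked earlier
    Frame(0.2, "brake", "brake"),
]

for f in disagreements(log):
    print(f"t={f.timestamp}s: human={f.human_action}, shadow={f.shadow_action}")
```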

However, experts such as Michael DeKort say that kind of testing would take one trillion miles of driving and cost $300 billion to get to truly autonomous vehicles. DeKort is a former aerospace systems engineer, engineering and program manager for Lockheed Martin, and a member of the SAE On-Road Autonomous Driving Validation & Verification Task Force.
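
To give a sense of the scale DeKort describes, a simple back-of-envelope calculation follows. The trillion-mile and $300 billion figures are his; the fleet size and average speed are assumptions added here purely for illustration.

```python
# Back-of-envelope scale check on a one-trillion-mile road-test program.
# TARGET_MILES and TOTAL_COST come from DeKort's cited figures; the fleet
# size and average speed are assumptions for illustration only.
TARGET_MILES = 1_000_000_000_000
TOTAL_COST = 300_000_000_000
FLEET_SIZE = 100_000          # assumed test vehicles running around the clock
AVG_MPH = 30                  # assumed average speed

miles_per_car_per_year = AVG_MPH * 24 * 365
years_needed = TARGET_MILES / (FLEET_SIZE * miles_per_car_per_year)

print(f"Years with this fleet: {years_needed:.1f}")             # roughly 38 years
print(f"Cost per test mile: ${TOTAL_COST / TARGET_MILES:.2f}")  # about $0.30
```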

In a LinkedIn article, he wrote that “the creation of autonomous technology will result in benefits to humankind,” but added that “those benefits may be different than what we think now.

“In order for the AI to learn how to handle every core scenario, including dangerous, complex and actual crash scenarios, they have to be driven.”

In a separate article, he argues that “autonomous levels 4 and 5 will never be reached without aerospace level simulation. Thousands of accidents, injuries and casualties will occur when these companies move from benign and easy scenarios to complex, dangerous and accident scenarios. And the cost in time and funding is untenable.”

Companies such as Waymo have already made the switch to simulated self-driving vehicle testing.

At the same time, the use of driverless technology on private land or controlled environments such as a college campus may drive innovation and solve many problems for those same cars on public roads, Guppy said. “If Microsoft has been using driverless shuttles for five years, that creates a lot of experience. It also makes the public more used to the idea.”
