Many Are Still Not Sold On The Safety Of Autonomous Vehicles
Imagine a morning commute where, instead of keeping a close watch on the traffic around you, your car picks up that duty. Google, Tesla, and many other companies have made significant progress toward consumer-ready autonomous vehicles. The technology is revolutionary, but many people still question the safety of these cars. A Deloitte study released in January found that 74 percent of Americans did not believe self-driving cars would be safe for the public, and since 2014, only four percent of respondents to a similar survey have expressed interest in self-driving cars. The most prominent barrier to widespread acceptance of autonomous vehicles is a lack of public trust. However, a recent study may show how manufacturers can bridge the gap between mistrust and acceptance.
Statistically, People Feel More Comfortable With Machines That Have “Human-Like” Traits
A new study suggests that consumers are more likely to trust anthropomorphic vehicles, that is, vehicles with “human-like” traits. Researchers published their findings in the Journal of Experimental Social Psychology. Participants operated one of three driving simulators: a standard car, an autonomous vehicle with no human features, and a self-driving vehicle given a name (“Iris”), a gender, and a voice. Participants overwhelmingly preferred the last of the three. Even when the human-like autonomous car was hit by another car in the simulation, participants still trusted it more than the others. They also had slower heart rates while riding in the car, a physiological sign of increased comfort.
Apple, Google, and Amazon Figured This Out A Long Time Ago
This does not seem far-fetched, as many companies have assigned human-like qualities to technology to encourage adoption. A prime example is Apple’s Siri. It was billed as a “personal assistant” to users, and its development shows how much human features matter for adoption. Siri was not given a monotone, robotic-sounding voice; instead, users could choose whether they wanted Siri to sound like a woman or a man. It could be customized to address users by name and to answer their questions. Google and Amazon recognized this as well and developed similar assistants, each designed to sound like an ordinary person. It is no secret that these brands figured out that the more human-like features a machine has, the more likely people are to adopt it. The automotive industry, it seems, has also figured out this formula.
Are There Ethical Issues?
People are more likely to see some humanity in machines that have human-like features. However, an ethical question arises: who is responsible if an accident occurs, the human or the machine? Does a human-sounding, inviting car give consumers a false sense of security? Could it also distract drivers? Many questions surround this issue, but the fact remains that autonomous vehicles could be among us much sooner than we thought. Legislation recently approved by a U.S. Senate panel would bar states from imposing their own regulatory measures on self-driving vehicles, a move set to speed up their adoption on U.S. roads. States could still set rules on registration, licensing, liability, and inspections, but performance standards would be off limits. A conversation about how this technology should be presented to consumers needs to take place, and how “human-like” self-driving vehicles interact with consumers should be a significant factor in that dialogue. Studies like the one involving “Iris,” the anthropomorphized vehicle in the simulator experiment, need further exploration, because there is a vast difference between a handheld personal assistant and a self-driving car with a personality. The next few years will spur a lot of conversations, and with issues this serious, they should.