
The Dilemma of Ethics in Autonomous Cars

For most of us, the word robot first brings to mind the various action movies in which robots save the day and the earth by fighting vicious villains. In real life, while robots may not live up to their on-screen persona, they have become an important part of our world, supported by the advent of artificial intelligence. By definition, a robot is a machine, sometimes humanoid in appearance, designed to perform a series of complex tasks. Robots are not only tireless; they are programmed to process tasks at a much faster pace than humans. Over the last few decades, the use of robots has revolutionized several industries, including medicine, food processing and automobiles.

A Google self-driving car at the intersection of Junction Ave and North Rengstorff Ave in Mountain View.

American writer Isaac Asimov is credited with introducing three laws that describe how robots should behave:

  • The first law says that a robot cannot harm a human being or let a human be harmed through its inaction.
  • The second law states that a robot must follow the directions of a human being unless they violate the first law.
  • The third law states that a robot must protect its own existence so long as its actions do not conflict with the first and second laws.
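Asimov's ordering can be read as a strict priority scheme. The following is a toy sketch in Python, not a real robotics API; the action fields are illustrative assumptions. It encodes the first law as a hard filter and the second and third laws as tie-breakers:

```python
def law_priority(action):
    # Lower tuples sort first: obeying humans (second law) outranks
    # self-preservation (third law).
    return (action["disobeys_human"], action["endangers_self"])

def choose_action(candidates):
    # First law is absolute: actions that harm a human are filtered
    # out entirely, never merely down-ranked.
    safe = [a for a in candidates if not a["harms_human"]]
    if not safe:
        return None  # no lawful action exists
    return min(safe, key=law_priority)
```

For example, given a choice between hitting a pedestrian, disobeying an order to swerve, and braking hard at some risk to the vehicle, the scheme picks braking: it harms no human, obeys the driver, and only then accepts risk to the machine itself.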

These three tenets have been the subject of debate, with many arguing that Asimov's laws lay out only a quasi-ethical guide for robots and might not hold true in the real world. For instance, cruise missiles and smart bombs violate the first and third principles. At the same time, ethics is itself a very subjective topic. If we go by the dictionary, ethics are a set of principles that dictate how a person or a society lives. They may evolve over time and often shape a person's moral philosophy. The implementation of automation in automobiles has been questioned on ethical grounds for years.

The use of automation in the automobile sector can be traced back to the 1920s, when a number of experiments were launched to test how mechanization could boost the performance of cars. However, it wasn't until the 1980s that the world was introduced to the first truly self-driven car, built by Carnegie Mellon University. The American university's School of Computer Science rolled out the autonomous car Navlab in 1984, and the model has undergone several modifications over the years. Other auto giants like Mercedes-Benz followed suit with the Eureka Prometheus project in 1987, often touted as one of the biggest research and development projects for self-driving cars. Over the last few decades, the self-driven car sector has garnered a lot of interest, with companies like Audi, Volvo and Bosch making inroads into the segment.

As car companies try to stoke consumer sentiment in favour of driverless cars, they are banking on self-driven automobiles' promise of lower emissions and fewer accidents to woo the world. The prospect of less traffic on the roads only strengthens their case. Even statistics tend to favour these companies' hypothesis. About 1.35 million people die in road accidents every year, which works out to roughly 3,700 deaths on average every day. If the status quo continues, road accidents could become the fifth biggest cause of death. According to the National Highway Traffic Safety Administration, human error is a leading cause of road accidents. As self-driving cars are pre-programmed, they could do a lot to make roads safer. On lower emissions, too, the numbers are striking: a study by the US Department of Energy predicts that automated cars could cut fuel consumption in transportation by around 90%.

If you are looking to decode a driverless car, you should know that it is just a few features short of a supercomputer. The data generated by sensors and cameras gives these vehicles an edge over their human-driven counterparts. The engine is usually fitted with sensors that alert the owner if any car parts need to be replaced, and a camera can locate a parking space for you. A self-driven car produces around 1 GB of data per second. Experts predict that these cars could save car manufacturers and insurance firms millions in claims by preventing road accidents.
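To put the cited 1 GB-per-second figure in perspective, a quick back-of-the-envelope calculation (assuming the rate holds continuously while driving) shows how fast the data piles up:

```python
# Rough data-volume arithmetic based on the ~1 GB/s figure cited above.
GB_PER_SECOND = 1

per_hour_tb = GB_PER_SECOND * 3600 / 1000   # seconds per hour, GB -> TB
per_day_tb = per_hour_tb * 8                # an assumed 8-hour driving day

print(f"{per_hour_tb:.1f} TB per hour, {per_day_tb:.1f} TB per 8-hour day")
```

That is roughly 3.6 TB for every hour on the road, which is why onboard processing, rather than streaming everything to the cloud, is central to these designs.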

Driverless cars run on algorithms that process a wide array of data compiled by sensors, radar and cameras, among other mechanisms, using machine learning. Even though car models differ, most driverless systems map their surroundings with sensors, cameras and lasers. The data they compile is processed by the software, which then sends signals to ‘actuators’ responsible for functions like steering, acceleration and braking. This code helps the automated car navigate and follow traffic rules.
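The sense-process-actuate loop described above can be sketched as follows. The types, field names and thresholds here are illustrative assumptions, not any manufacturer's actual API:

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    obstacle_distance_m: float   # fused from lidar/radar/camera
    lane_offset_m: float         # lateral offset from the lane centre
    speed_limit_kmh: float       # from maps or sign recognition

@dataclass
class ActuatorCommand:
    steering_deg: float
    throttle: float   # 0.0 to 1.0
    brake: float      # 0.0 to 1.0

def plan(reading: SensorReading, current_speed_kmh: float) -> ActuatorCommand:
    """Turn fused sensor data into commands for the actuators."""
    # Safety rule first: brake hard if an obstacle is close.
    if reading.obstacle_distance_m < 10.0:
        return ActuatorCommand(steering_deg=0.0, throttle=0.0, brake=1.0)
    # Simple proportional lane-keeping: steer back toward the lane centre.
    steering = -2.0 * reading.lane_offset_m
    # Traffic rules encoded in software: do not exceed the speed limit.
    throttle = 0.3 if current_speed_kmh < reading.speed_limit_kmh else 0.0
    return ActuatorCommand(steering_deg=steering, throttle=throttle, brake=0.0)
```

Real systems replace these hand-written rules with learned models and far richer state, but the data flow is the same: sensor fusion in, actuator commands out.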

As much of a boon as they are considered for the future of transportation, self-driven cars do have their drawbacks. For instance, experts have questioned whether the data processed by these automobiles' algorithms is safe from hacking and cyber theft.

Many other experts have questioned carmakers' claims about self-driving leading to lower fuel consumption and emissions. Moreover, the debate over who is to blame in case of an accident has put the ethical side of driverless operations under global scrutiny. In 2018, a woman in Arizona in the US was run over by a Volvo SUV that had been modified to run on self-driving technology. A few days later, a Tesla SUV running in autopilot mode rammed into a road divider in California. These incidents might be isolated, but they pose a dilemma: who is to be blamed in case of an accident? The automakers whose cars were involved have blamed both human error and technical glitches. Alphabet subsidiary Waymo, which is also working on driverless car solutions, has voiced its objection to systems where control over a vehicle is shared back and forth between an algorithm and a human driver.

Alternatively, experts have suggested a blockchain-based method that scans the data collected by a self-driving car's sensors to determine liability. The idea is shared liability, apportioned on the basis of the sensor data that monitor functions like acceleration and navigation. Under this blockchain framework, liability would fall on the manufacturer in case of a design fault and on the software provider in case of a bug; if the software hasn't been updated, liability would rest with the owner.
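One way to picture such a scheme: chain each telemetry record to the previous one with a hash, which is the tamper-evidence property a blockchain provides, then apply the liability rules to the verified log. This is a hypothetical sketch, not any proposed standard; the record fields and rule names are illustrative:

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    """Bind a telemetry record to its predecessor so tampering is detectable."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append(chain: list, record: dict) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"record": record, "hash": record_hash(record, prev)})

def verify(chain: list) -> bool:
    """Recompute every hash; any edited record breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        if entry["hash"] != record_hash(entry["record"], prev):
            return False
        prev = entry["hash"]
    return True

def assign_liability(incident: dict) -> str:
    """Map an incident's root cause to a liable party, per the scheme above."""
    if incident.get("design_fault"):
        return "manufacturer"
    if incident.get("software_bug"):
        return "software provider"
    if not incident.get("software_updated", True):
        return "owner"
    return "shared"
```

Because each hash depends on everything before it, no party can quietly rewrite the sensor log after a crash, which is exactly why a tamper-evident ledger is attractive for assigning blame.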

Puskar

Editor in chief @GreenCleanGuide.com
