
The Dilemma of Ethics in Autonomous Cars

For most of us, the word robot first brings to mind action movies in which robots save the day, and the earth, by fighting vicious villains. In real life, robots might not live up to their on-screen personas, but, aided by the advent of artificial intelligence, they have become an important part of our world. By definition, a robot is a machine, which at times might look like a human, designed to perform a series of complex tasks, and it is programmed to process those tasks at a much faster pace than humans can. Over the last few decades, the use of robots has revolutionized several industries, including medicine, food processing and automobiles.

A Google self-driving car at the intersection of Junction Ave and North Rengstorff Ave in Mountain View.

American writer Isaac Asimov is credited with introducing three laws that describe how robots should behave:

  • The first law says that a robot may not harm a human being or, through inaction, allow a human being to come to harm.
  • The second law states that a robot must follow the directions given to it by human beings, unless those directions conflict with the first law.
  • The third law states that a robot must protect its own existence, as long as doing so does not conflict with the first or second law.

These three tenets have been debated at length, with many arguing that Asimov’s laws lay out only a quasi-ethical guide for robots and might not hold true in the real world; cruise missiles and smart bombs, for instance, violate the first and third principles. At the same time, ethics is itself a subjective topic. By the dictionary definition, ethics are a set of principles that dictate how a person or a society lives; they evolve over time and shape a person’s moral philosophy. The implementation of automation in automobiles has been questioned on ethical grounds for years.

The use of automation in the automobile sector can be traced back to the 1920s, when a number of experiments were launched to test how mechanization could boost the performance of cars. However, it wasn’t until the 1980s that the world was introduced to the first truly self-driving car, built at Carnegie Mellon University. The American university’s School of Computer Science rolled out the autonomous car Navlab in 1984, and the model has undergone several modifications over the years. Auto giants like Mercedes-Benz followed suit with the Eureka Prometheus project in 1987, often touted as one of the biggest research and development projects for self-driving cars. Over the last few decades, the self-driving car sector has attracted considerable interest, with companies like Audi, Volvo and Bosch making inroads into the segment.

As car companies try to stoke consumer sentiment in favour of driverless cars, they are banking on the promise of lower emissions and fewer accidents to woo the world; the prospect of less traffic on the roads only strengthens the case. Even the statistics tend to favour these companies’ hypothesis. About 1.35 million people die in road accidents every year, an average of roughly 3,700 deaths a day. If the status quo continues, road accidents could become the fifth biggest cause of death. According to the National Highway Traffic Safety Administration, human error is a leading cause of road accidents; because self-driving cars are pre-programmed, they could do a lot to make roads safer. On lower emissions, the numbers are encouraging too: a study by the US Department of Energy predicts that automated cars could cut fuel consumption in transportation by around 90%.
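
As a quick back-of-the-envelope check, the daily figure follows directly from the annual one. The snippet below is a minimal sketch in Python; the numbers are simply the rounded figures quoted above.

```python
# Quick arithmetic check of the road-death figures quoted above (rounded values).
annual_road_deaths = 1_350_000             # ~1.35 million deaths per year
deaths_per_day = annual_road_deaths / 365  # average over a non-leap year
print(f"Average road deaths per day: {deaths_per_day:,.0f}")  # ~3,699, i.e. about 3,700
```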

If you were to decode a driverless car, you would find it is just a few features short of a supercomputer. The data generated by its sensors and cameras give these vehicles an edge over their human-driven counterparts. The engine is typically fitted with sensors that alert the owner when a part needs to be replaced, and a camera can locate a parking space for you. A self-driving car produces around 1 GB of data per second. Experts predict that, by preventing road accidents, these cars could save car manufacturers and insurance firms millions in claims.
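
As a rough illustration of the kind of on-board monitoring described above, here is a minimal Python sketch that flags parts for replacement when sensor readings cross wear thresholds. The part names, readings and threshold values are assumptions made up for illustration, not any carmaker’s actual diagnostics.

```python
# Minimal sketch of an on-board maintenance alert, assuming hypothetical
# sensor readings and wear thresholds (all names and numbers are illustrative).
WEAR_THRESHOLDS = {
    "brake_pad_mm": 3.0,         # flag if pad thickness drops below 3 mm
    "oil_life_pct": 15.0,        # flag if remaining oil life drops below 15%
    "battery_health_pct": 60.0,  # flag if battery health drops below 60%
}

def maintenance_alerts(readings):
    """Return the parts whose readings have fallen below their thresholds."""
    return [part for part, limit in WEAR_THRESHOLDS.items()
            if readings.get(part, float("inf")) < limit]

if __name__ == "__main__":
    current = {"brake_pad_mm": 2.4, "oil_life_pct": 40.0, "battery_health_pct": 55.0}
    print(maintenance_alerts(current))  # ['brake_pad_mm', 'battery_health_pct']
```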

Driverless cars run on algorithms that process a wide array of data compiled by sensors, radar and cameras, among a host of other mechanisms based on machine learning. Although car models differ, most driverless systems map their surroundings with sensors, cameras and lasers. The data they compile is processed by the software, which then sends signals to ‘actuators’ responsible for functions like steering, acceleration and braking. Code also helps the automated car navigate and follow traffic rules.
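
To make that pipeline concrete, the sketch below shows a highly simplified sense-process-actuate loop in Python. The data structures and the toy braking rule are illustrative assumptions, not any production driving stack.

```python
# Highly simplified sense -> process -> actuate loop for an automated vehicle.
# All class names, fields and the toy braking rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SensorFrame:
    obstacle_distance_m: float  # fused reading from radar/lidar/cameras
    speed_mps: float            # current vehicle speed
    lane_offset_m: float        # lateral offset from the lane centre

@dataclass
class ActuatorCommand:
    steering_rad: float
    throttle: float             # 0.0 .. 1.0
    brake: float                # 0.0 .. 1.0

def plan(frame: SensorFrame) -> ActuatorCommand:
    """Turn one fused sensor frame into actuator commands (toy logic only)."""
    # Brake hard if an obstacle is closer than a simple stopping-distance estimate.
    stopping_distance = frame.speed_mps ** 2 / (2 * 4.0)  # assume ~4 m/s^2 braking
    if frame.obstacle_distance_m < stopping_distance:
        return ActuatorCommand(steering_rad=0.0, throttle=0.0, brake=1.0)
    # Otherwise hold a gentle throttle and steer back towards the lane centre.
    return ActuatorCommand(steering_rad=-0.1 * frame.lane_offset_m,
                           throttle=0.3, brake=0.0)

if __name__ == "__main__":
    frame = SensorFrame(obstacle_distance_m=12.0, speed_mps=15.0, lane_offset_m=0.4)
    print(plan(frame))  # brakes, since 12 m is inside the ~28 m stopping distance
```

In a real vehicle this loop runs many times per second, and the planning step is far more sophisticated, but the shape of the flow, sensor fusion feeding software that drives the actuators, is the same.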

As much of a boon as they are considered for the future of transportation, self-driving cars do have their drawbacks. For instance, experts have questioned whether the data processed by these vehicles’ algorithms is safe from hacking and cyber theft.

Many other experts have questioned carmakers’ claims that self-driving leads to lower fuel consumption and emissions. Moreover, the debate over who is to blame in case of an accident has put the ethics of driverless operation under global scrutiny. In 2018, a woman in Arizona in the US was run over by a Volvo SUV that had been modified by Uber to run on self-driving technology. A few days later, a Tesla SUV running in Autopilot mode rammed into a road divider in California. These incidents might be isolated, but they pose a dilemma: who is to be blamed in case of an accident? The auto majors whose cars were involved have blamed both human error and technical glitches. Alphabet subsidiary Waymo, which is also working on driverless car solutions, has voiced its objection to systems in which control over a vehicle is shared back and forth between an algorithm and a human driver.

Alternatively, experts have suggested a blockchain-based method that scans the data collected by a self-driving car’s sensors to determine liability. The idea is shared liability based on the sensor data that monitor functions like acceleration and navigation: under such a framework, liability would fall on the manufacturer in case of a design fault, on the software provider in case of a bug, and on the owner if the software was not kept up to date.
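
Here is a minimal Python sketch of how such a scheme might look, assuming a hash-chained event log standing in for the blockchain and the three liability rules described above. The event fields, rule order and helper names are illustrative assumptions, not any proposed standard.

```python
# Minimal sketch of the liability scheme described above: sensor events are
# appended to a hash-chained (blockchain-style) log, and liability is assigned
# by simple rules. Event fields and rules are illustrative assumptions.
import hashlib
import json

def append_block(chain, event):
    """Append an event, linking it to the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    chain.append({"event": event, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})

def assign_liability(event):
    """Apply the shared-liability rules sketched above."""
    if event.get("design_fault"):
        return "manufacturer"
    if event.get("software_bug"):
        return "software provider"
    if not event.get("software_up_to_date", True):
        return "owner"
    return "undetermined"

if __name__ == "__main__":
    log = []
    append_block(log, {"type": "collision", "software_bug": True,
                       "software_up_to_date": True})
    print(assign_liability(log[-1]["event"]))  # software provider
```

The hash chaining matters because it makes the recorded sensor events tamper-evident, which is the property the blockchain proposal relies on when apportioning blame after the fact.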

Puskar Pande

Editor in chief @GreenCleanGuide.com
