Self-Driving Cars: Role and Methods of Accident Liability Diagnosis

The automotive sector is an area where technology is developing very rapidly. Over the past decade, the evolution of self-driving cars has accelerated, and they have achieved great popularity. Self-driving cars use a wide range of sensors to understand the environment around the vehicle and use this information to navigate the streets to their destination without human help. Most vehicles on the roads today already have some form of automation. Since self-driving cars can theoretically react faster than human drivers and do not get behind the wheel drunk, do not write text messages while driving, and do not get tired, they should significantly improve vehicle safety. However, the self-driving automotive industry's biggest problems are handling unforeseen situations and vulnerability to hacking.

Self-driving cars cannot always adequately assess a situation and, as a result, react correctly. The moral dilemma is based on the idea that the artificial intelligence in self-driving vehicles cannot weigh several possible outcomes against one another or, for that matter, identify the "least bad" of them. A classic example is an autonomous vehicle that decides to drive off the road, possibly killing the driver inside, to avoid a collision with a school bus full of children (Stilgoe, 2021). People use common sense to cope with unexpected driving phenomena: a deer runs onto the highway, flooding makes it difficult or impossible to move along the road, or cars slide as they try to climb an icy hill (Lobanova & Evtiukov, 2020). Unfortunately, no one knows how to embed common sense into cars or computers. In the absence of common-sense capabilities, automated driving system (ADS) developers must anticipate and code for every possible situation. Since self-driving cars obey all rules and regulations, an individual vehicle, and the larger traffic flow around it, can move more slowly and less organically. These cars have been described as driving like student drivers: slow, conservative, and timid (Stilgoe, 2021). Machine learning can only help if manufacturers anticipate every situation and provide training examples for each of them. Thus, self-driving cars are not always able to cope on their own. Compared with a person, they lack the human judgment that is sometimes very useful on the road.

Another problem for self-driving cars is that computer vision systems are prone to errors, since they can be deceived in ways that people cannot. For example, researchers have shown that minor changes to a speed limit sign can cause a machine learning system to think that the sign says 85 miles per hour rather than 35 miles per hour (Stilgoe, 2021). Similarly, hackers tricked the Tesla autopilot into changing lanes by using brightly colored stickers to create a fake lane (Stilgoe, 2021). In both cases, the changes deceived the cars but not people. Hackers who get into a car's software and take control of its operation, or otherwise influence how it functions, can become a severe problem. These are just a few of the ways an attacker can confuse cars or trucks, forcing them off the road or into obstacles. In addition to deliberate attempts to deceive autopilot systems, errors sometimes occur in the natural environment. The first fatal accident involving a Tesla electric car occurred in 2016 on a highway in Florida, when a Tesla Model S sedan drove under a semi-trailer truck. "Neither the autopilot nor the driver noticed the white side of the semi-trailer against the background of the brightly lit sky, so the brakes did not activate," Tesla commented on the tragedy (Stilgoe, 2018). Thus, self-driving cars cannot be entirely relied on, since their systems can be deceived.
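The kind of attack described above is often called an adversarial example: a change to the input that is small by design yet flips the classifier's decision. The sketch below, a minimal and purely illustrative toy in Python, shows the idea on an invented linear "sign reader"; the dimensions, weights, and the 35/85 mph labels are all assumptions made up for this example and do not correspond to any real vehicle's perception system.

```python
import numpy as np

# Toy illustration of an adversarial perturbation: a bounded, per-feature change
# to an input flips a classifier's decision. Everything here (the linear model,
# the feature dimension, the labels) is invented for illustration only.

rng = np.random.default_rng(0)
dim = 100                      # pretend each sign image is a 100-feature vector
w = rng.normal(size=dim)       # weights of a toy linear "35 vs 85 mph" classifier

def read_sign(x):
    """Label the sign from the sign of the linear score."""
    return "85 mph" if x @ w > 0 else "35 mph"

# Construct a clean input that the toy model reads as "35 mph" (score fixed at -3)
# while its features stay at an ordinary scale (about 0.5 in magnitude).
z = 0.5 * rng.normal(size=dim)
x_clean = z - ((z @ w + 3.0) / (w @ w)) * w
print("clean sign reads:    ", read_sign(x_clean))

# FGSM-style perturbation: move every feature by at most epsilon in the
# direction that raises the score (for a linear model that direction is sign(w)).
epsilon = 0.05
x_adv = x_clean + epsilon * np.sign(w)

print("perturbed sign reads:", read_sign(x_adv))
print("largest per-feature change:", np.max(np.abs(x_adv - x_clean)))
```

In this toy, no single feature changes by more than 0.05, yet the accumulated effect across all features is enough to push the score over the decision boundary, which is the same mechanism, in miniature, that lets a few carefully placed stickers mislead a real perception system.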

The most important aspect of autonomous cars is accident liability: who is responsible for an accident caused by an unmanned vehicle? In autonomous cars, the software is the main component that drives the vehicle and makes all the crucial decisions. While in early projects a person was physically behind the wheel, the newer designs demonstrated by Google have no dashboard or steering wheel (Stilgoe, 2021). In such designs, where the car has no controls such as a steering wheel, brake pedal, or accelerator pedal, a person in the vehicle cannot take over driving in the event of an adverse incident. In addition, given the nature of autonomous vehicles, passengers will mostly be relaxed and may not pay close attention to traffic conditions. Moreover, as drivers get used to not being behind the wheel, their skills and experience will decline; if they ever do need to drive under certain circumstances, problems will arise. Automakers have reported nearly 400 crashes involving partially automated driver assistance systems, including 273 involving Tesla vehicles (Stilgoe, 2021). Thus, there are legal gaps in the use of unmanned vehicles.

There is also the opinion that self-driving cars are safer than cars driven by people. The vast majority of road accidents are caused by human error, whether speeding, reckless driving, inattention, or, even worse, drunk driving. It is estimated that fully automated vehicles could reduce the number of road accidents by 90% (Lobanova & Evtiukov, 2020). On the other hand, self-driving cars are purely analytical, relying on cameras, radars, and other sensors for navigation, and those sensors can be misled. In actual driving, many Tesla owners report that shadows, for instance from tree branches, are often perceived by their cars as real objects (Nees, 2019). In the case of the Uber test car that killed a pedestrian, the car's object recognition software first classified the pedestrian as an unknown object, then as a vehicle, and finally as a bicycle (Nees, 2019). In 2018, a Tesla Model X crossover in California veered off the highway and crashed into a barrier while its driver-assistance and adaptive cruise control systems were engaged (Stilgoe, 2021). Thus, self-driving cars are still not more attentive on the road than a human driver.

There are good reasons to believe self-driving cars will eventually be safer than human drivers: they never get tired, do not text at the wheel, and do not drive after drinking. At the moment, however, there are more compelling reasons to assume that they are not yet safer. No one knows how to embed common sense into computers, and many incidents demonstrate that autonomous vehicles cannot always cope with unforeseen road situations. They are also more vulnerable to hacking and generally more prone to errors caused by incorrect data analysis. In addition, there is an unresolved question of liability for road accidents involving unmanned vehicles. Thus, it can be concluded that, at this point, self-driving cars are not safer than cars with a person at the wheel.

References

Lobanova, Y., & Evtiukov, S. (2020). . Transportation Research Procedia, 50, 363-372. Web.

Nees, M. A. (2019). . Journal of Safety Research, 69, 61-68. Web.

Stilgoe, J. (2018). . Social Studies of Science, 48(1), 25-56. Web.

Stilgoe, J. (2021). Ethics and Information Technology, 23(4), 635-647. Web.
