Abstract
The application of artificial intelligence technology will soon permit the large-scale deployment of self-driving cars in human daily life. Self-driving cars are assumed to be safer than manually driven cars, but car collisions are sometimes unavoidable. It is therefore necessary to consider the ethical algorithms for the different stages of a car accident, which correspond to forward-looking and backward-looking responsibilities. Along with the interests held by various stakeholders, a dilemma is produced; in other words, there is a tradeoff between technology and business. In general, this paper discusses the relationship between human beings and technology. Besides the dilemma faced by self-driving cars, possible solutions will also be discussed from both theoretical and practical aspects.
Introduction
Artificial intelligence technology has become widely known to the public along with the development of society. Meanwhile, drawn by its efficiency and convenience, governments and companies have also invested huge capital in this industry, which further increases the rate of improvement in artificial intelligence technology. Undoubtedly, the common attitude toward artificial intelligence held by society as a whole is something of a stereotype: most news outlets report on this industry in a positive light, with a vision for the future. People focus more on the technical merits of the technology, arguing that failures or problems are caused by immature technology rather than by ethical problems. Considering the current trend, artificial intelligence technology has entered human daily life, and many products of artificial intelligence have gradually started to replace roles played by humans in the past, such as robots and self-driving cars. Especially for cars, more and more famous companies such as Google and Uber have begun to participate in the development of self-driving cars. As announced by General Motors, autonomous vehicles will be ready for the market by 2020. Since the technology requires interaction with human beings, ethical problems are unavoidably produced, and in some ways a dilemma. Therefore, in this research paper, I will discuss the causes of the ethical dilemma by focusing on self-driving cars and provide possible solutions and suggestions.
The Significance of Artificial Intelligence Technology used in Self-driving Cars
Before analyzing the causes of the ethical dilemma faced by autonomous vehicles, it is necessary to briefly introduce artificial intelligence (also known as machine learning), which is the main technology used by autonomous vehicles. At present, there are three main objectives in the field of machine learning focused on by researchers: task-oriented studies, cognitive simulation, and theoretical analysis. This trichotomy of mutually challenging and supportive objectives is a reflection of the entire field of artificial intelligence, providing cross-fertilization of problems and ideas[footnoteRef:1]. With the application of these three fields, self-driving cars can make their own decisions according to different scenarios by imitating human beings instead of being manually driven. Some people might question why autonomous vehicles should replace manually driven cars: are self-driving cars really safer than normal cars? To answer why machines should learn, we should first understand the differences between human learning and machine learning. Firstly, we cannot deny the tediousness of human learning, which is a long and slow process. In most situations, even after people learn or understand principles, it is hard for them to apply those principles in other scenarios, because humans cannot compute as fast as machines do. On the other hand, there is no copying process in human learning; conversely, for machines, “when one computer has learned it, they’ve all learned it in principle”[footnoteRef:2]. These distinctions between machine learning and human learning lead to the purpose of artificial intelligence. As mentioned by Herbert A. Simon, an American social scientist at Carnegie Mellon University, artificial intelligence has two goals. [1: Carbonell, Jaime G., Ryszard S. Michalski, and Tom M. Mitchell. ‘An Overview of Machine Learning.’ Machine Learning, 1983, 3-23.] [2: Simon, Herbert A. ‘Why Should Machines Learn?’ Machine Learning, 1983, 25-37.]
“First, AI is directed toward getting computers to be smart and do smart things so that human beings don’t have to do them. And second, AI (sometimes called cognitive simulation, or information processing psychology) is also directed at using computers to simulate human beings, so that we can find out how humans work and perhaps can help them to be a little better in their work.”[footnoteRef:3] [3: Simon, Herbert A. ‘Why Should Machines Learn?’ Machine Learning, 1983, 25-37.]
As stated by most autonomous vehicle companies, improving safety by avoiding crashes is the main reason for introducing self-driving cars in place of human drivers. For example, Waymo, Google’s self-driving car project, states clearly on its website, “We aim to bring fully self-driving technology to the world that can improve mobility by giving people the freedom to get around, and save thousands of lives now lost to traffic crashes”[footnoteRef:4]. Other than this, saving commuting time, removing barriers for disabled people, and reducing the environmental impact of driving are also benefits brought by self-driving cars, motivating more research on this technology. [4: ‘Waymo – Waymo.’ Waymo. Accessed April 22, 2019. https://waymo.com/.]
The Causes of the Ethical Dilemma
Although autonomous vehicles have good intentions, accidents are still possible. For example, on the night of Sunday, March 18, 2018, a woman was struck and killed by an autonomous car operated by Uber in Tempe, Arizona. It was believed to be the first pedestrian death associated with self-driving technology. The occurrence of this real accident caused by a self-driving car reminds society that it is time to consider the relationship between human beings and artificial intelligence from an ethical perspective, and that is the topic to which I now return: the causes of the ethical dilemma.
Self-driving cars hold the promise to the public that they are safer than manually driven cars. Yet they cannot be perfectly safe; car collisions are sometimes unavoidable. As said by Elon Musk, the founder of Tesla, “Perfect safety is an impossible goal”[footnoteRef:5]. Given the risk of unavoidable car accidents, there is a need to consider how self-driving cars should be programmed to react to various ethical dilemmas in different scenarios, leading to the ethical dilemma of accidents for autonomous vehicles. In the research paper “The Ethics of Accident-Algorithms for Self-Driving Cars: An Applied Trolley Problem?”, Sven Nyholm claims that “Some philosophers have recently likened accident-management in autonomous vehicles to the so-called trolley problem. Several journalists and opinion piece writers have also done so”[footnoteRef:6]. So from a philosophical aspect, the ethical dilemma faced by self-driving cars can be seen as an applied Trolley Problem. The Trolley Problem is a hypothetical scenario raised by Philippa Foot in 1967: a runaway trolley is heading toward five workers who are repairing the track, and the driver cannot stop in time to avoid a collision. The track runs through a valley with steep sides, so the only alternative is to turn the trolley onto another track, where one workman will be killed. This leads to the tradeoff between causing one death and preventing several more deaths. [5: Wilkins, Alasdair. ‘Elon Musk Explains Why Radar Is Future of Tesla’s Autopilot.’ Inverse. Accessed April 22, 2019. ] [6: Nyholm, Sven, and Jilles Smids. ‘The Ethics of Accident-Algorithms for Self-Driving Cars: An Applied Trolley Problem?’ Ethical Theory and Moral Practice 19, no. 5 (2016)]
The real meaning of the trolley dilemma is reflected in self-driving. Whether the car is controlled by a human or a machine, a choice must be made. In reality, when a human driver experiences an extreme situation, such as a person suddenly rushing in front of the car, there is no time to consider the choice rationally. So the reaction of the driver is based on subconscious instinct, which leads to unpredictable results: either no one is hurt, or the sharp turn causes a rollover or a rear-end collision. In most situations, the human driver will not be morally condemned by the public. However, self-driving is completely different from human control. All the reactions of self-driving cars are set in advance by people. Once an extreme situation occurs, the machine gives a reaction based on the designed computer program rather than making a choice in the moment. Hence, all decisions made by self-driving cars are completely rational, leading to an ethical dilemma.
There are three main stages of ethical algorithms in car accidents: before, during, and after the accident. These three stages generate two very different decision-making situations, which can be divided into forward-looking responsibilities and backward-looking responsibilities. Forward-looking responsibilities concern how cars are to be programmed to deal with the various kinds of risky situations they might encounter in traffic[footnoteRef:7]. Google gives the official response that the main objective of self-driving is to reduce traffic accidents caused by human negligence; in other words, the main purpose is to keep people safe. But when the “trolley dilemma” occurs, whose safety should the self-driving car prioritize: the people inside the car or those outside it? For example, suppose a self-driving car is driving on a rough mountain path and there are a few children in front of it. To avoid hitting the children, the car has to choose between turning left and turning right. However, the car cannot turn left, because it would drop off the mountain and kill the driver; if it turns right, it will collide with an oncoming car and violate the traffic rules. As a result, a dilemma is formed. If the car’s priority is the safety of people outside the car, people are less likely to purchase this kind of car because of the high possibility of life-threatening circumstances for the drivers. [7: Nyholm, Sven, and Jilles Smids. ‘The Ethics of Accident-Algorithms for Self-Driving Cars: An Applied Trolley Problem?’ Ethical Theory and Moral Practice 19, no. 5 (2016)]
This scenario can also be viewed from another perspective by focusing on the traffic rules. If following the traffic rules would only cause harm, is it permissible for self-driving cars to break the rules in some special situations? Even from this changed perspective, the dilemma still cannot be avoided, because breaking the rules contradicts their purpose. In society, following traffic rules can be regarded as an agreed-upon expectation that decreases the possibility of traffic accidents. Therefore, all of these considerations contribute to the ethical dilemma of self-driving cars in the stages before and during the accident.
Other than these two stages, an ethical dilemma also exists after the occurrence of a traffic accident. Backward-looking responsibilities concern who should be held morally and legally responsible, and what they should be held responsible for, if and when accidents occur[footnoteRef:8]. In the extreme scenarios discussed above, the distribution of decision-making power and responsibility is also a very important aspect that needs to be thought about carefully. Once an emergency happens and someone is hurt or killed, who should take responsibility: the car owner or the car producer? For this series of problems, the law so far provides no useful information for the public to reference. Beyond the absence of self-driving laws, the security risks brought by the internet should also be a focus. With the development of digital technology, future cars will depend more on the internet to update and upload traffic data in a timely manner, especially self-driving cars. Hence, cyber hackers have more chances to find flaws and exploit them for high-tech crimes; they can even control the brakes and steering remotely, bringing great security risks to the public. [8: Nyholm, Sven, and Jilles Smids. ‘The Ethics of Accident-Algorithms for Self-Driving Cars: An Applied Trolley Problem?’ Ethical Theory and Moral Practice 19, no. 5 (2016)]
Possible Solutions for Ethical Dilemma
Considering the causes of the ethical dilemma faced by self-driving cars, it seems impossible to eliminate car accidents at present. Tesla’s founder Elon Musk said, “It’s really about improving the probability of safety – that’s the only thing possible”[footnoteRef:9]. Some possible solutions to the ethical dilemma are therefore needed for the public to think about and discuss. To some extent, the ethical dilemma of autonomous car accidents can be regarded as an applied Trolley Problem, so we can use more theoretical or abstract ways to solve it. In the essay “Solving the Trolley Problem,” Joshua D. Greene, an American experimental psychologist, offers perspectives on the Trolley dilemma from a psychological aspect, which can serve as a good reference for self-driving cars. Greene claims that “The normative and descriptive Trolley Problems are closely related”[footnoteRef:10]. Based on this theory, any attempt to solve the normative ethical dilemma should begin with an attempt to solve the descriptive problem. The descriptive problem refers to introducing the content of an ethical dilemma without any value judgments; it requires people to identify the features of actions that elicit their moral approval or disapproval[footnoteRef:11]. At this stage, it is easy for the public to recognize that during a car accident, any choice faced by an autonomous vehicle elicits moral disapproval, causing the ethical dilemma. Once such moral disapproval has been identified, the descriptive problem turns into a normative question. [9: Wilkins, Alasdair. ‘Elon Musk Explains Why Radar Is Future of Tesla’s Autopilot.’ Inverse. Accessed April 22, 2019. ] [10: Greene, Joshua D. ‘Solving the Trolley Problem.’ A Companion to Experimental Philosophy, 2016, 173-89.] [11: Greene, Joshua D. ‘Solving the Trolley Problem.’ A Companion to Experimental Philosophy, 2016, 173-89.]
Different from descriptive problems, normative questions combine people’s value judgments, leading to the choices the public makes. In this situation, different stakeholders hold various values in pursuit of their interests. For example, consumers would prefer cars that prioritize protecting the people inside them, while some insurance companies are more inclined to protect people who have not purchased insurance. There are two general solutions for normative problems. The first holds that people’s judgments are sensitive reflections of moral values. On this view, people are more likely to make choices that maximize their interests, which is similar to the concept of utilitarianism. However, since various parties pursue distinct purposes, this theoretical solution can inevitably satisfy only a few groups of people rather than the whole society. This leads to the second solution, which emphasizes the influence of the personal force factor: in ethical dilemmas, people disapprove of intended harmful actions used as a means to achieve an agent’s goal. The normative solution therefore suggests that the public remove personal factors so as to avoid making such ethical choices as often as possible.
Besides psychologists, philosophers also provide possible abstract solutions by introducing two distinct theories: consequentialism and deontology. Consequentialism, in ethics, is the doctrine that actions should be judged right or wrong based on their consequences (Brian Duignan, 2009). Under this theory, self-driving cars are programmed to make the best decision based on the various consequences. The other approach is based on deontological ethics, which emphasizes the relationship between duty and the morality of human actions; under it, self-driving cars are required to follow the rules. However, as mentioned before, the law is not completely established and cannot prevent the dilemma from arising. So the problem can only be solved at a superficial level rather than a deeper one.
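To make the contrast between the two theories concrete, the following is a minimal, purely illustrative sketch of how they might be encoded as collision-choice policies. The `Option` class, the harm scores, and the rule flags are invented for this example; they are not drawn from any real vehicle software.

```python
# Toy model: two ethical theories expressed as collision-choice policies.
# All names, scores, and scenarios here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    expected_harm: float   # estimated severity of the outcome (0 = none)
    breaks_rule: bool      # whether the maneuver violates a traffic rule

def consequentialist_choice(options):
    # Judge actions purely by outcomes: pick the least harmful result,
    # even if reaching it means breaking a traffic rule.
    return min(options, key=lambda o: o.expected_harm)

def deontological_choice(options):
    # Judge actions by conformity to duty: prefer rule-abiding maneuvers;
    # among those, pick the least harmful one.
    permitted = [o for o in options if not o.breaks_rule]
    candidates = permitted if permitted else options
    return min(candidates, key=lambda o: o.expected_harm)

options = [
    Option("brake in lane", expected_harm=0.8, breaks_rule=False),
    Option("swerve into oncoming lane", expected_harm=0.3, breaks_rule=True),
]
print(consequentialist_choice(options).name)  # the two policies can disagree
print(deontological_choice(options).name)
```

The point of the sketch is that the two policies can disagree on the same scenario, which is exactly the dilemma discussed above: neither answer is obviously right, yet the program must commit to one in advance.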
Nevertheless, as self-driving cars are planned for use in real society in the future, it is also necessary to come up with practical solutions in addition to abstract ones. Since the ethical dilemma arises from the occurrence of car accidents, one possible way to reduce it is to focus on technical issues. In Chris Urmson and William ‘Red’ Whittaker’s research paper “Self-Driving Cars and the Urban Challenge,” Boss, a modified 2007 Chevy Tahoe, won the Urban Challenge using a combination of laser, radar, and GPS data to safely navigate a 53-mile test course among 60 other vehicles (10 autonomous and 50 human-driven)[footnoteRef:12]. To some extent, Boss’s success in the competition reflects a desirable possible future for autonomous technology; conversely, if we examine the rules of the challenge, they are problematic in some ways. Firstly, the guidelines stipulated that only midsized or larger vehicles could remain on the course; interference factors such as pedestrians and bicycles were therefore removed from the scope. Besides, stop signs were the only traffic control on the course, and the location of the corresponding stop line was provided to the autonomous cars. These regulations eliminated other important traffic signs, such as traffic lights and yield signs, and with them the need for self-driving cars to read or detect them. Furthermore, the guidelines allowed that the roads a vehicle could drive on would be at least partially defined by highly accurate GPS waypoints[footnoteRef:13]. Hence, this became the key rule that participants could use to reduce the required complexity of their self-driving cars while improving system performance. All of these regulations show that the environment set by the Urban Challenge is far from actual driving conditions, which points to the existence of many technical issues. [12: C. Urmson and W. ‘Red’ Whittaker, ‘Self-Driving Cars and the Urban Challenge,’ in IEEE Intelligent Systems, vol. 23, no. 2, pp. 66-68, March-April 2008.] [13: C. Urmson and W. ‘Red’ Whittaker, ‘Self-Driving Cars and the Urban Challenge,’ in IEEE Intelligent Systems, vol. 23, no. 2, pp. 66-68, March-April 2008.]
Currently, the autonomous technologies that allow self-driving cars to react to the environment around them rely on sensors that are too expensive and unwieldy for consumers. As a result, self-driving cars do not react well to traffic lights, and they operate poorly around pedestrians. Beyond these common problems, there is an interesting point about self-driving cars, using Boss as an example. As mentioned by Chris Urmson, “One of the key tenets of Boss’s software system is to never give up”[footnoteRef:14]. One specific system used by Boss is its error recovery system, which enables the car always to attempt some maneuver. The common feature of all error recovery systems is that they permit autonomous vehicles to attempt riskier maneuvers as time progresses and to generate a nonrepeating series of motion goals. This error recovery system can be considered key to Boss’s success in the challenge, but to some degree it largely increases the risk when such autonomous technologies are applied in real society. All these technical issues create the possibility of car accidents, leading to the ethical dilemma. Therefore, it is reasonable to believe that once the possibility of a car accident declines, the ethical dilemma will be less likely to occur. [14: C. Urmson and W. ‘Red’ Whittaker, ‘Self-Driving Cars and the Urban Challenge,’ in IEEE Intelligent Systems, vol. 23, no. 2, pp. 66-68, March-April 2008.]
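The “never give up” recovery idea described above can be sketched as a simple loop: each failed attempt produces a new, never-repeated motion goal while the allowed risk tolerance grows. This is only a conceptual illustration of the escalation pattern; the function names, goal representation, and thresholds are assumptions for this example, not Boss’s actual implementation.

```python
# Illustrative sketch of an escalating error-recovery loop: propose a
# nonrepeating series of motion goals, growing more permissive (riskier)
# as earlier attempts fail. All details here are invented for the example.
import random

def recovery_goals(seed=0, max_attempts=5):
    """Yield (goal, risk_tolerance) pairs; goals never repeat, and the
    permitted risk doubles after each failed attempt, capped at 1.0."""
    rng = random.Random(seed)      # deterministic for reproducibility
    tried = set()
    risk = 0.1
    for _ in range(max_attempts):
        goal = None
        while goal is None or goal in tried:   # never repeat a goal
            goal = (round(rng.uniform(-5, 5), 1),   # lateral offset (m)
                    round(rng.uniform(0, 10), 1))   # forward distance (m)
        tried.add(goal)
        yield goal, risk
        risk = min(1.0, risk * 2)              # escalate allowed risk

for goal, risk in recovery_goals():
    print(goal, risk)
```

Seen this way, the safety concern raised above is easy to state: the same escalation that guarantees progress in a closed competition course means the vehicle will eventually authorize maneuvers it initially judged too risky.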
The research report “Public Perceptions of Self-driving Cars: The Case of Berkeley, California,” published by Daniel Howard and Danielle Dai, surveys a group of people and analyzes the collected data. The authors present this case study to inform those creating the technology about how self-driving cars are likely to be perceived by the public. It is undeniable that public attitudes toward self-driving cars become increasingly important as the public shapes the demand and market for the cars, the policies that govern them, and future investments and infrastructure. Based on the results of the survey, different solutions are given for the two distinct decision-making situations of the ethical dilemma. For the first two stages of a car accident, which involve forward-looking responsibilities, people should determine and communicate the amount of control a human has in the context of the self-driving car. Besides, regulations also need to decide the level of freedom to make choices, while encouraging the active involvement of stakeholders in the process of design and requirements specification. For example, Howard and Dai asked the public to envision the inclusion of this technology in today’s network. Based on the statistics from the report, “38% believe that self-driving cars should operate with normal traffic, 46% in separate lanes and 11% had no opinion,”[footnoteRef:15] the government is required to take actions such as separating autonomous vehicles from other modes of transportation through dedicated lanes in some areas. Considering the backward-looking responsibilities, the gap in legislation needs to be filled by government action; more specifically, legislative support and contribution to global frameworks would ensure smooth ratification of the emerging technology.
In addition, car producers should support and collaborate with legislators in their task of keeping up to date with the current level of automated driving. The autonomous vehicle industry needs to include ethics in the overall process of designing, developing, and implementing self-driving cars, and to implement ethics training for the engineers involved. Finally, establishing and maintaining a functioning socio-technological system, in addition to functional safety standards, would be very significant as well. [15: Howard, Daniel, and Danielle Dai. ‘Public Perceptions of Self-driving Cars: The Case of Berkeley, California.’ August 1, 2013.]
Conclusion
As this new technology is tested and gradually allowed on the roads under controlled conditions, the focus should be on practical technological solutions and their social consequences, rather than on idealized, unsolvable problems such as the much-discussed trolley problem. What is more, the border between what is technically possible and what is ethically justifiable exists in real society, so companies and governments need to consider carefully the tradeoff between business needs and ethics. In other words, ethical aspects should be considered in every phase of the software development process by enforcing transparency in those processes. Finally, the public should engage in a serious discussion of ethics and should defend its interests, to make sure that freedom of choice does not disappear in the new era of artificial intelligence technology.
Current self-driving technology is still in the experimental stage and faces many problems. However, with the development of technology, many of these problems will be solved, and driverless cars will become an inevitable trend. Until now, the ethical dilemma of self-driving cars has not been well solved by ideal theories alone. So, from my perspective, as suggested before, people should pay more attention to the actual situation, focusing first on the distribution of responsibility and trying their best to reach a consensus among the public. That is the best solution to the dilemma at the present time.
Bibliography
- Carbonell, Jaime G., Ryszard S. Michalski, and Tom M. Mitchell. ‘An Overview of Machine Learning.’ Machine Learning, 1983, 3-23. doi:10.1007/978-3-662-12405-5_1.
- Simon, Herbert A. ‘Why Should Machines Learn?’ Machine Learning, 1983, 25-37. doi:10.1007/978-3-662-12405-5_2.
- Howard, Daniel, and Danielle Dai. ‘Public Perceptions of Self-driving Cars: The Case of Berkeley, California.’ August 1, 2013. https://www.ocf.berkeley.edu/~djhoward/reports/Report – Public Perceptions of Self-Driving Cars.pdf.
- Urmson, Chris, and William ‘Red’ Whittaker. ‘Self-Driving Cars and the Urban Challenge.’ IEEE Intelligent Systems 23, no. 2 (March-April 2008): 66-68. doi:10.1109/MIS.2008.34.
- Nyholm, Sven, and Jilles Smids. ‘The Ethics of Accident-Algorithms for Self-Driving Cars: An Applied Trolley Problem?’ Ethical Theory and Moral Practice 19, no. 5 (2016): 1275-289. doi:10.1007/s10677-016-9745-2.
- Greene, Joshua D. ‘Solving the Trolley Problem.’ A Companion to Experimental Philosophy, 2016, 173-89. doi:10.1002/9781118661666.ch11.
- Mladenovic, Milos N., and Tristram Mcpherson. ‘Engineering Social Justice into Traffic Control for Self-Driving Vehicles?’ Science and Engineering Ethics 22, no. 4 (2015): 1131-149. doi:10.1007/s11948-015-9690-9.
- Stilgoe, Jack. ‘Machine Learning, Social Learning and the Governance of Self-Driving Cars.’ SSRN Electronic Journal, 2017. doi:10.2139/ssrn.2937316.
- Bimbraw, Keshav. ‘Autonomous Cars: Past, Present, and Future – A Review of the Developments in the Last Century, the Present Scenario and the Expected Future of Autonomous Vehicle Technology.’ Proceedings of the 12th International Conference on Informatics in Control, Automation and Robotics, 2015. doi:10.5220/0005540501910198.
- Wilkins, Alasdair. ‘Elon Musk Explains Why Radar Is Future of Tesla’s Autopilot.’ Inverse. Accessed April 22, 2019. https://www.inverse.com/article/20833-elon-musk-radar-autopilot.
- ‘Waymo – Waymo.’ Waymo. Accessed April 22, 2019. https://waymo.com/.