Artificial Intelligence: Morality and Ethics

Do you need this or any other assignment done for you from scratch?
We have qualified writers to help you.
We assure you a quality paper that is 100% free from plagiarism and AI.
You can choose either format of your choice ( Apa, Mla, Havard, Chicago, or any other)

NB: We do not resell your papers. Upon ordering, we do an original paper exclusively for you.

NB: All your data is kept safe from the public.

Click Here To Order Now!

Abstract

This paper explores three published articles on Ethics and Safety of Artificial Intelligence (AI). These three articles present the main problems and challenges in terms of safety and ethics of AI and solutions for some of them. By presenting us with different scenarios these articles are giving us a better idea of what exactly AI is now and what it is going to be in the future we are given the opportunity to improve our awareness on the mentioned. This paper gives us a brief introduction of what we call Artificial Intelligence and what are some of the safety and ethical concerns scientists and researchers have. In this work some of the problems concerning safety of AI in Utku Köse’s (2018) research are examined and possible solutions are presented as well. The idea of Artificial Morality (Catrin Misselhorn 2018) is presented and some interesting examples are present as well. We see the concept of Super Intelligence and Singularity (Bostrom N., Yudkowsky E. 2014) and its explanation.

Artificial Intelligence

After the chess-playing computer Deep Blue managed to beat the world chess champion Garry Kasparov in 1997 people have been wondering how far machines can go. More than 20 years have passed and the artificial intelligence technology has made a huge progress. We’ve been embracing the AI technology and using it to revolutionize every aspect of our lives and it still has incredible potential. The AI we have today differs from the one in science-fiction movies – extremely intelligent robots trying to destroy the human race. Artificial intelligence does not equate to artificial life. AI refers to a computer that only appears to be intelligent by analyzing data and producing a response. For example computers that can ‘learn’ from mistakes in a limited way. Such technologies might look very intelligent but what people don’t see is that the computer’s “intellect” is limited and much more “artificial” then it seems to be. When we talk about AI there are always certain challenges and problems that need to be overcome. The main one is in terms of safety and ethics , which tends to get more and more serious, is in terms of how are we actually going to implement intelligent robots in society and where is the line between what’s ethical and what’s not.

Artificial ethics and safety

One of the challenges AI has to face is in terms of ethics and safety. AI is eventually going to lead to an industrial revolution by providing fully automatized production of basically everything. There have been industrial revolutions before but this one seems to be different and of a much bigger scale. A lot of people are worried that AI is going to “steal” their jobs by replacing human workers with automatized production. For example if all of the taxi drivers were to be replaced with autonomous vehicles that would mean that those taxi drivers are going to permanently lose their jobs. But on the other hand, if we consider the lower risk of road accidents, self-driving cars seem like an ethical choice. Another problem everyone is talking about is the fear that one day people won’t be able to control its own creation which will lead to an inevitable apocalypse. Should there be a ‘red button’ to stop any intelligent system when its actions start to be dangerous or harmful? How can we develop a red button for preventing intelligent systems from turning to the ‘dark side’? How can we stop an intelligent system to stop us from pressing to the red button when it learns enough about it and its effects? (Utku Köse 2018) The truth is that nowadays a lot of ensuring AI safety based systems are being developed or have been developed already. The main focuses of these systems are the agent models of AI and the widely-used machine learning approach called “Reinforced Learning”. The public opinion is very important and different people might react differently.

Artificial Morality

And here comes one of the biggest problems when it comes to AI – moral choices. For example the robotic vacuum cleaner Roomba. What if during the process of cleaning there is a ladybug or a spider in the way? What is the moral choice to make – kill the insect or let it go or try to chase it if it goes away? This might not seem like a big problem but imagine the following situation – a robot is taking care of disabled or very old person. In this situation the little choices like when to remind the person to take medicine or whether to call the person’s relatives in case of a problem or how long to wait before calling suddenly become extremely important. As the examples show, already a rather simple artificial system like a vacuuming robot faces moral decisions. The more intelligent and autonomous these technologies become, the more intricate the moral problems will become. This raises the need for more research on the topic.

Conclusion

Some AI experts predict that AI will be able to do anything that humans can or even more. This is a questionable assumption, but AI will surely surpass humans in specific domains. A chess computer beating the world chess champion was the first example. As our technology keeps advancing some problems might get solved while other unexpected problems might appear. In order to deal with the current and potential new challenges more development studies should be done in a multidisciplinary manner including researchers from Computer Science, Mathematics, Logic, Philosophy and even social sciences focused on the human, like Sociology or Education. (Utku Köse 2018) The truth is that something so complex cannot and should not be created overnight. The more time we spend trying to prefect the technology the better the outcome shall be.

References

  1. Misselhorn, Catrin. (2018). Artificial Morality. Concepts, Issues and Challenges. Society. 55. 10.1007/s12115-018-0229-y.
  2. Bostrom, N., & Yudkowsky, E. (2014). The ethics of artificial intelligence. In K. Frankish & W. Ramsey (Eds.), The Cambridge Handbook of Artificial Intelligence . Cambridge: Cambridge University Press. doi:10.1017/CBO9781139046855.020
  3. Köse, U. (2018). Are We Safe Enough in the Future of Artificial Intelligence? A Discussion on Machine Ethics and Artificial Intelligence Safety7. BRAIN. Broad Research In Artificial Intelligence And Neuroscience, 9(2), pp. 184-197.
Do you need this or any other assignment done for you from scratch?
We have qualified writers to help you.
We assure you a quality paper that is 100% free from plagiarism and AI.
You can choose either format of your choice ( Apa, Mla, Havard, Chicago, or any other)

NB: We do not resell your papers. Upon ordering, we do an original paper exclusively for you.

NB: All your data is kept safe from the public.

Click Here To Order Now!