Artificial Intelligence And Its Social And Ethical Implications

Artificial intelligence (AI) is widely expected to change the way humans live on this planet. Barr and Feigenbaum (1981) define AI as “the part of computer science concerned with designing intelligent computer systems, that is, systems that exhibit the characteristics we associate with intelligence in human behaviour – understanding language, learning, reasoning, solving problems and so on”. A more basic definition is given by Minsky (1968): “Artificial intelligence is the science of making machines do things that would require intelligence if done by men.”

Kurzweil (1999) predicted that by 2020 the storage capacity (memory) and computational speed (processing) of computers would match those of humans in all aspects, ushering in an era of conscious machines. Turing (1950) proposed a test to determine when machines could be said to have reached human capability: the point at which humans could converse with a machine without being able to tell whether they were communicating with a machine or a human. The Turing triage test (Sparrow, 2004) extends this to morality: a machine could be considered a moral agent when, in a scenario where only one of two patients can be saved and one of the patients is replaced by a conscious machine, the choice presents a genuine moral dilemma. AI with all its possibilities thus brings moral challenges that go well beyond the technological. In addition, AI will affect the way we live on this planet and reshape social dynamics.

Research in AI is a hot topic today. However, it has many facets, each bringing its own technological and ethical challenges, and these challenges differ according to the form (or absence of form) in which AI is manifested. According to Sparrow (2004), the moral equivalence of a machine to a human cannot be established unless the machine has a form resembling a human. Lemaignan et al. (2017) describe the interaction of humans with robots that have artificial intelligence built into them. This requires cognition of social aspects and multi-modal processing of multiple inputs, as witnessed in human-to-human interaction. Communication and reciprocation between humans is complex, entailing visual signal processing, understanding of symbols and gestures, real-time mental processing, planning and coordination, reactive control and pattern recognition. Lemaignan et al. (2017) selected communication through language, contextual meaning of words and phrases, and non-verbal communication through the eyes, i.e. social gaze. To implement these objectives, they designed the robot to interpret belief symbols, keep and update a state of the world around it, maintain and iterate plans, and execute and monitor its human partner's actions independently of the triggering event. The authors implemented the diverse range of software required to achieve these goals by mimicking the first-order semantics of human beings.
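The idea of a robot maintaining and updating beliefs about the world, including beliefs held by its human partner, can be illustrated with a toy sketch. This is not the architecture of Lemaignan et al. (2017); all class and method names below are hypothetical, invented purely to show how divergent human and robot beliefs might be detected:

```python
# Illustrative sketch only: a toy symbolic belief store in the spirit of
# the world-state component described by Lemaignan et al. (2017).
# The API (BeliefStore, update, query, divergent) is hypothetical.

class BeliefStore:
    """Keeps per-agent beliefs as (subject, predicate, value) triples."""

    def __init__(self):
        self._beliefs = {}  # agent name -> set of triples

    def update(self, agent, subject, predicate, value):
        # Replace any previous belief this agent held about (subject, predicate).
        triples = self._beliefs.setdefault(agent, set())
        triples = {t for t in triples if (t[0], t[1]) != (subject, predicate)}
        triples.add((subject, predicate, value))
        self._beliefs[agent] = triples

    def query(self, agent, subject, predicate):
        for s, p, v in self._beliefs.get(agent, set()):
            if (s, p) == (subject, predicate):
                return v
        return None

    def divergent(self, agent_a, agent_b):
        """Triples on which the two agents disagree -- the kind of
        mismatch a robot must detect to act in a human-aware way."""
        out = []
        for s, p, v in self._beliefs.get(agent_a, set()):
            other = self.query(agent_b, s, p)
            if other is not None and other != v:
                out.append((s, p, v, other))
        return out


store = BeliefStore()
store.update("robot", "mug", "location", "table")
store.update("human", "mug", "location", "shelf")  # the human's belief is stale
print(store.divergent("robot", "human"))
# → [('mug', 'location', 'table', 'shelf')]
```

A real system would ground such triples in perception and maintain them continuously; the point here is only that "keeping and updating the state of the world" can be modelled as symbolic bookkeeping over per-agent belief sets.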

Robots can be divided into three distinct categories: those used to perform tasks in a controlled indoor environment, those used in harsh and unpredictable outdoor environments, and humanoid or anthropomorphic robots. Robots designed to work outdoors must have the structure and flexibility to move over different types of terrain and thus need AI along with specialised actuators that allow movement in uncertain and changing environments. Cheetah 3 (Bledt et al., 2018) is one of the most advanced quadruped robots in this category. The robots that require the most extensive use of AI, however, are humanoid robots. Examples of humanoid robots include Honda's ASIMO (Sakagami et al., 2002), WABOT (Kato et al., 1974), Saya (Kobayashi et al., 2003), Albert HUBO (Oh et al., 2006) and Hanson Robotics' PKD Android (Hanson, 2006), and the fascination with anthropomorphic robots continues with new and updated models such as Kawada's humanoids and Atlas. Duffy (2003) argues that the human propensity to give human form to inanimate objects (in this case robots) limits the possibilities that could be achieved with robots and AI. Once a robot is given a human shape, it is expected to behave in a human-like manner, and other factors such as emotions and personality creep in, introducing new challenges rather than developing an intelligence used for its own sake. Lemaignan et al. (2017) present a framework for human-robot interaction in which information can be mutually exchanged, tasks achieved collaboratively, and execution carried out in a human-aware way. This entails implementing AI layers for belief systems, a priori common sense and mental models that conform to human semantics and cognition.
These humanoids pose a social and ethical challenge: as they become closer in appearance to humans and, with AI, acquire unique personalities, the question arises whether they could someday be considered equivalent to human beings.

Another area where AI plays an increasingly important role is autonomous vehicles. The military application of such vehicles points towards automated warfare, which could lead to disastrous results, as such vehicles and tanks have the potential to cause huge destruction. The civilian uses of automated vehicles, however, are also immense and can bring huge benefits, solving some of the grave problems we face today. Thrun (2006) describes an autonomous self-driving vehicle that won the DARPA Grand Challenge. This vehicle had AI built into it, allowing it to make decisions dynamically on the basis of sensor data capturing both long-term features and short-duration changes and obstacles along the way. Such autonomous vehicles also have applications in space exploration. Meanwhile, traffic problems are worsening every day around the world. Autonomous vehicles equipped with AI not only free the driver but also reduce accidents and increase the efficiency of road usage by packing more vehicles onto the same road and using AI to navigate routes. Parking would also be transformed, as autonomous vehicles could reduce the need for car parks by picking up and dropping off passengers on demand. The AI in the cars, along with the ability to form ad hoc networks with other road users, would make commutes much more efficient and easier. As autonomous vehicles make their way onto the road, however, they bring an ethical dilemma with them (Bonnefon et al., 2016), centred on the algorithm that allows the vehicle to make decisions: it could be programmed to save the passengers in the vehicle at all costs, or to sacrifice the passengers for a child or a large group of people.
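The dilemma Bonnefon et al. (2016) describe is ultimately a question of which policy is hard coded into the vehicle. A minimal, purely hypothetical sketch makes the point: the same sensor input produces different actions depending on the policy the manufacturer ships. The function name, policy labels and integer casualty counts are all invented for illustration; real systems reason over uncertain perception, not clean numbers:

```python
# Hypothetical sketch of the AV dilemma: identical inputs, different
# outcomes, depending on a hard-coded ethical policy. Not a real AV API.

def choose_action(passengers, pedestrians, policy):
    """Return 'swerve' (sacrifice the occupants) or 'stay' (hit the
    pedestrians ahead), given a count of people at risk on each side.

    policy = 'protect_passengers': never sacrifice the occupants.
    policy = 'utilitarian': minimise total expected casualties.
    """
    if policy == "protect_passengers":
        return "stay"
    if policy == "utilitarian":
        return "swerve" if pedestrians > passengers else "stay"
    raise ValueError(f"unknown policy: {policy}")


# One passenger, five pedestrians ahead:
print(choose_action(1, 5, "protect_passengers"))  # → stay
print(choose_action(1, 5, "utilitarian"))         # → swerve
```

The sketch shows why the problem is not technical but ethical: both branches are trivial to implement, and the hard question is which one society, regulators and buyers will accept.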

AI also plays a role in assistive technologies that help people with disabilities perform day-to-day tasks with ease, and in service robots that carry out dull and repetitive household chores and even child minding. Sharkey (2008) discusses the ethical issues raised by such technologies, such as leaving a child in the full care of a robot. The algorithms programmed into the robot should be able to make constrained, rational and ethical decisions, which is quite complex, and any error or algorithmic bug could lead to disaster. The same is true of assistive robots for the elderly.

AI can also take the form of a pure software agent, with no actuators, physical movement, or even a single piece of hardware we can point at as the centre of intelligence. Such distributed computational systems, called agents in the language of AI, act on the basis of prior knowledge, history, observation of the current environment and past experience (Poole and Mackworth, 2010). The question of ethics is therefore not restricted to the growing capability and use of AI in robots; it is equally important for the development of a distributed network intelligence that has no shape or form (Poole and Mackworth, 2010).

Efforts to include Ethics in AI Research

Asimov (1950), in his science fiction, gave three basic laws to be programmed into any AI-capable robot to make it behave in a non-destructive way at all times. These laws are:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First and Second Laws.

These laws are very basic and simplistic; however, they provide a foundation on which future laws regarding robots, or any other form of AI, can be built. Such laws and principles could be hard coded into intelligent agents to help prevent a disaster or minimise damage.
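What "hard coding" such laws might mean can be sketched as an ordered veto check, where each proposed action is screened against the laws in priority order. The boolean flags below are hypothetical stand-ins; a real robot cannot evaluate a predicate like "harms a human" so cleanly, which is precisely why the laws are simplistic:

```python
# Toy illustration of Asimov's three laws as an ordered screen.
# The action flags (harms_human, disobeys_order, ...) are invented
# abstractions; deciding their truth is the genuinely hard problem.

def check(action):
    """Return the name of the first law the proposed action violates,
    or None if it is permitted. The ordering of the checks encodes the
    precedence: the First Law is consulted before the Second, and the
    Second before the Third."""
    if action.get("harms_human") or action.get("allows_harm_by_inaction"):
        return "First Law"
    if action.get("disobeys_order"):
        return "Second Law"
    if action.get("endangers_self"):
        return "Third Law"
    return None


print(check({"harms_human": True}))      # → First Law
print(check({"disobeys_order": True}))   # → Second Law
print(check({"endangers_self": True}))   # → Third Law
print(check({}))                         # → None
```

Note that an order whose execution would harm a human trips the First Law check before the Second Law is ever consulted, mirroring the "except where such orders would conflict with the First Law" clause.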

Bostrom (2015) asserts that an artificial intelligence agent may soon acquire intelligence equivalent to that of human beings, and that once this happens it will exponentially increase its own intelligence. This, he asserts, may spell the doom of humans on this planet, as there would be no way to stop such a super-intelligent force multiplier. Some, like Joy (2000), advocate a ban on AI research, arguing that it would inevitably lead to a superintelligence beyond human control that would take over the world. Davis (2015) disagrees with Bostrom, pointing out the flaws in his argument: computational power and memory cannot be equated with intelligence; an increase in intelligence does not necessarily result in a corresponding increase in power; greater intelligence does not equate to the ability to do more things; and there is no reason to believe that progress in AI would not be accompanied by giving AI agents an ethical grounding. The safe course is to establish an ethical framework within which AI research is carried out, with ethical safeguards hard coded into intelligent agents. A High-Level Expert Group on Artificial Intelligence (AI HLEG) was set up by the European Commission in 2018 and prepared the Ethics Guidelines for Trustworthy Artificial Intelligence. This report (AI HLEG, 2018) uses the term trustworthy AI, meaning AI that is both ethical and technically robust. The guidelines provide a framework for developing AI on the basis of a human-centric model, using the technology to alleviate people's suffering rather than to create a technological showpiece.

A brief overview of recent developments in artificial intelligence has been presented. It is clearly a rich and active area of research that offers tremendous opportunities for the future. Job markets will change considerably as AI develops, with automated assistants carrying out most software tasks and robots carrying out physical tasks using actuators. Areas requiring low-level technical skills, such as routine coding and technician jobs, are also likely to suffer, while social sciences and soft skills will be more in demand. The prospect of creating an intelligence that surpasses us and results in a superintelligence taking over the world seems unlikely at the moment; nevertheless, the safe course is to agree on an ethical framework for AI research and to hard code ethical safeguards into intelligent agents.

In light of these arguments, it is safe to say that AI will play a vital role not only in technological developments but also in the social, economic and political spheres, changing the way humans live on this planet in a big way. There are risks of it getting out of hand, but this is already recognised, and safeguards are being devised to avoid pitfalls that could have disastrous consequences.

References

  1. AI HLEG. 2018. Ethics guidelines for trustworthy AI
  2. Asimov, I., 1950. I, Robot. Doubleday, Garden City, New York.
  3. Barr, A. and Feigenbaum, E., 1981. The Handbook of Artificial Intelligence Vol. I. Pitman.
  4. Bledt, G., Powell, M.J., Katz, B., Di Carlo, J., Wensing, P.M. and Kim, S., 2018, October. MIT Cheetah 3: Design and control of a robust, dynamic quadruped robot. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 2245-2252). IEEE.
  5. Bonnefon, J.F., Shariff, A. and Rahwan, I., 2016. The social dilemma of autonomous vehicles. Science, 352(6293), pp.1573-1576.
  6. Duffy, B.R., 2003. Anthropomorphism and the social robot. Robotics and autonomous systems, 42(3-4), pp.177-190.
  7. Davis, E., 2015. Ethical guidelines for a superintelligence. Artificial Intelligence, 220, pp.121-124.
  8. Hanson, D., 2006, July. Exploring the aesthetic range for humanoid robots. In Proceedings of the ICCS/CogSci-2006 long symposium: Toward social mechanisms of android science (pp. 39-42). Citeseer.
  9. Harle, R., 1999. Ray Kurzweil, The Age of Spiritual Machines: When Computers Exceed Human Intelligence. Sophia, 38, pp.158-160.
  10. Joy, B. 2000. Why the future does not need us. Wired. Available at https://www.wired.com/2000/04/joy-2/
  11. Kobayashi, H., Ichikawa, Y., Senda, M. and Shiiba, T., 2003, October. Realization of realistic and rich facial expressions by face robot. In Proceedings 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003)(Cat. No. 03CH37453) (Vol. 2, pp. 1123-1128). IEEE.
  12. Kato, I., Ohteru, S., Kobayashi, H., Shirai, K. and Uchiyama, A., 1974. Information-power machine with senses and limbs. In On theory and practice of robots and manipulators (pp. 11-24). Springer, Vienna.
  13. Lemaignan, S., Warnier, M., Sisbot, E.A., Clodic, A. and Alami, R., 2017. Artificial cognition for social human–robot interaction: An implementation. Artificial Intelligence, 247, pp.45-69.
  14. Minsky, M., 1968, Semantic information processing. Cambridge, Mass.
  15. Oh, J.H., Hanson, D., Kim, W.S., Han, Y., Kim, J.Y. and Park, I.W., 2006, October. Design of android type humanoid robot Albert HUBO. In 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 1428-1433). IEEE.
  16. Poole, D.L. and Mackworth, A.K., 2010. Artificial Intelligence: foundations of computational agents. Cambridge University Press.
  17. Sharkey, N., 2008. The ethical frontiers of robotics. Science, 322(5909), pp.1800-1801.
  18. Sparrow, R., 2004. The turing triage test. Ethics and Information Technology, 6(4), pp.203-213.
  19. Sakagami, Y., Watanabe, R., Aoyama, C., Matsunaga, S., Higaki, N. and Fujimura, K., 2002. The intelligent ASIMO: System overview and integration. In IEEE/RSJ international conference on intelligent robots and systems (Vol. 3, pp. 2478-2483). IEEE.
  20. Turing, A.M., 2004. Computing machinery and intelligence (1950). The Essential Turing: The Ideas that Gave Birth to the Computer Age. Ed. B. Jack Copeland. Oxford: Oxford UP, pp.433-64.
  21. Thorn, P.D., 2015. Nick Bostrom: Superintelligence: Paths, Dangers, Strategies.
  22. Thrun, S., 2006, September. Winning the darpa grand challenge: A robot race through the mojave desert. In 21st IEEE/ACM International Conference on Automated Software Engineering (ASE’06) (pp. 11-11). IEEE.

Artificial Intelligence Movie Reflection Essay

Introduction

Nowadays, artificial intelligence (AI) is present almost everywhere and helps us daily, for example in self-driving cars, in virtual assistants such as the well-known Siri or Google Home, or even in the film industry. However, when I asked some of my close friends, family, and IB students from around the world whether they trusted AI, the most common answer was that they did not; yet those same people also had trouble correctly defining what AI is. So, first of all, we should establish what AI is and when it was first introduced. The Oxford Dictionary defines artificial intelligence as “the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages” (2018). In other words, artificial intelligence is the ability of a computer system to perform tasks that are usually performed by humans. Contrary to popular belief, AI is not new and was first introduced in 1950, as explained by Aggarwal A. (2018). This first misconception already shows that there is a degree of ignorance about artificial intelligence, which in turn suggests a lack of communication about what AI actually is. We can therefore question one of the largest media that has shown and made use of AI for quite a long time: Hollywood movies. Hollywood movies are seen by people from all around the world, meaning that they may have a great influence on viewers. A study performed in 2015 by Pautz M. even suggested that movies act as an influence, particularly on the young public. Following this, we can ask ourselves: to what extent do Hollywood movies play on the ignorance of people to create an inaccurate picture of AI?

To answer my question, I looked at movies from 1968 to today to see how this degree of ignorance changed through time and thus to understand how the Hollywood film industry changed its representation of AI. It is also interesting to look at the effect of this representation of AI on American society. These movies include “2001: A Space Odyssey” (1968), “Colossus: The Forbin Project” (1970), “Blade Runner” (1982), “The Matrix” (1999), “I, Robot” (2004), “A.I. Artificial Intelligence” (2001) and “Her” (2013). This research question is worthy of investigation as it will clarify some of the misconceptions about artificial intelligence, including what it is capable of and what it is used for. It is important to clarify these misconceptions because artificial intelligence is a fairly new field of study that is growing and developing quite fast, and people seem to have trouble keeping up with news about the subject. It will also help us understand how the degree of ignorance about artificial intelligence changed through time, the effect it had on American society, and the influence Hollywood movies have in shaping people's perception of a particular subject, here artificial intelligence.

Development

When ignorance reigned:

A time when people weren’t informed

When artificial intelligence was first introduced in 1950, researchers were very optimistic about what could be done with it. Five years later, the “first hype cycle” began, as stated by Dr. Alok Aggarwal (2018): a period lasting until 1983 during which the term “artificial intelligence” was introduced for the very first time. This “hype cycle” marked a long period of enthusiasm towards AI that produced great inventions such as machine learning, which is still used and studied today. AI was something very new, and researchers soon began making very “audacious claims” about how artificial intelligence would reach human intelligence in “no more than a generation”, as Dr. Aggarwal explains. Some claims turned out to be true, though much later than expected, such as Allen Newell's claim that “within ten years a digital computer will be the world's chess champion” (1958). Others were a bit too optimistic, such as Marvin Minsky's claim that “within our lifetime machines may surpass us in general intelligence” (1961).

All this hype around artificial intelligence even caught Hollywood's attention, and in 1968 Arthur C. Clarke and Stanley Kubrick released the famous movie “2001: A Space Odyssey”, which at first sight seems to picture AI in a very positive way: in the form of a machine, HAL 9000, capable of human intelligence and even humor, whose purpose is to help a crew manage a spaceship. However, it soon turns out the machine tries to kill the entire crew simply because they asked it questions it wasn't allowed to answer, which turns out to be a clear mistake in the way it was programmed.

These events picture artificial intelligence in a very negative way. Moreover, we can see that the image of the 2000s that Hollywood presented back in 1968 is not accurate: in 2001 we were still quite far from having computers capable of humor or of helping humans the way HAL 9000 does.

At the time, AI was something very new; not everyone knew about it, and the little that people did know came from movies such as 2001: A Space Odyssey. The most common ways for people not studying artificial intelligence to learn about it were newspapers and movies, meaning that people not interested in artificial intelligence had quite limited knowledge of how it functioned. To understand how people viewed artificial intelligence, we should analyze each movie and identify what was possible and what was fiction. This will allow us to see how significant the degree of ignorance about artificial intelligence was in American society between the 1960s and 1990s.

Fictional AIs and real AIs

I will start each movie analysis with a summary explaining the plot of the story and how the AI works. It will then be compared with real-life AIs to conclude whether what is represented in the movie is possible or not. Only movies between 1968 and 1999 will be analyzed.

    • 2001: A Space Odyssey (1968)

As explained earlier, in 2001: A Space Odyssey, an artificially intelligent computer, HAL 9000, capable of intelligence, humor, and even creativity is used to help a spaceship crew.

However, after a crew member asks it a question that it is not allowed to answer, HAL judges it better to kill the entire crew rather than lie to them. In the end, HAL is deactivated by the last surviving crew member. The whole story is set in 2001.

We can see at first sight that researchers and even filmmakers were very optimistic, thinking such a computer would be invented by the year 2000. Even though a computer such as HAL did not exist in 2000, it partly does nowadays, excluding the humorous aspect and in a way that does not allow for irrational decisions such as HAL's decision to kill the crew.

As said by Professor Gelernter D. (2017) “A machine must understand the full range and nuance of human emotion before it can be deemed capable of creative thought.”

Yet, a creative AI already exists under the acronym AIVA (Artificial Intelligence Virtual Artist) and is used to create original musical pieces. Moreover, the fact that a computer such as HAL already partly exists shows that 40 years ago, filmmakers were being pretty realistic in their representation of AI.

    • Colossus: The Forbin Project (1970)

In this movie, an American artificially intelligent supercomputer, Colossus, is created to prevent nuclear war. However, Colossus soon discovers that it has a counterpart created by the Soviet Union called Guardian. Both computers soon begin threatening to launch nuclear missiles if humans do not obey their orders.

Finally, Colossus takes over the world saying it will accomplish what it was designed to do: prevent nuclear wars.

What the AI is capable of in this movie is quite similar to what modern AIs can do. However, as Shultz D. (2015) says, “the idea that a computer operating on punch cards would have enough computational power to outwit and subjugate humanity” is wrong: it would be very unlikely for a computer such as Colossus to have enough memory and power to operate as it does. Nevertheless, the logical functioning of the supercomputers in this movie is quite close to how an AI works: it performs what it is designed to do, here prevent nuclear war, and it does so by taking over the world. This shows once again that filmmakers were representing artificial intelligence in a fairly realistic way.

    • Blade Runner (1982)

In the future, humans have found a way to create organic life that exactly resembles humans, called “replicants”. These replicants, however, only live for four years and aren't allowed on Earth due to past conflicts with humans.

The major points this movie gets wrong are that we aren't capable of creating organic life that looks just like us and that we cannot implant artificial intelligence into humans. The representation of AI in this movie is mostly negative, as the whole story revolves around a “blade runner” whose mission is to kill four “replicants” who came to Earth.

    • The Matrix (1999)

In this movie, machines have already subjugated mankind and are using humans as a source of power. The AIs have created a piece of software called the “Matrix”, a simulation of the real world in which all the humans harvested by the machines for their energy find themselves; without knowing it, they believe they are in the real world.

It is hotly debated whether we live in a simulation or not. As Tegmark M. says, “Is it logically possible that we are in a simulation? Yes. Are we probably in a simulation? I would say no” (2016).

However, the scenario where AI takes over the Earth and itself creates a simulation is very unlikely (this misconception will be discussed in section 3.1.4.). In this movie, the view of artificial intelligence is negative from beginning to end, just like in Blade Runner.

Overall, the views of artificial intelligence in Hollywood movies between 1968 and 1999 were quite negative. However, apart from Blade Runner and The Matrix, the representation of AI at the time was very realistic, meaning that even if people's degree of ignorance was quite high, movies were trying to picture artificial intelligence as accurately as possible to show people what AIs were capable of. This suggests that Hollywood filmmakers were aware that people weren't very informed about artificial intelligence and therefore used their films to communicate about it in addition to entertaining. This is especially true given that researcher Marvin Minsky was “used as an adviser” for 2001: A Space Odyssey, as explained by Aggarwal A.

Moreover, as artificial intelligence was something new that few people knew about, Hollywood filmmakers were free to create whatever story about AI they wanted, and they may have wanted to represent AI accurately in order to “educate” people about the topic. Furthermore, we will see that over time the representation of AI became less and less accurate, which emphasizes this point, as people started to know more and more about the subject.

A negative impact on society

As people didn't know much about artificial intelligence at this time, their knowledge was quite limited, and even if the representation of AI in movies was quite realistic, people weren't necessarily aware of it. Moreover, even when the picture of AI reflected by movies was realistic, it was often pushed to extreme cases which, to this date, still haven't occurred, the most obvious reason being to entertain viewers: most AIs, then and now, aren't used the way they are in movies such as Colossus. Added to this is the fact that most of these representations of AI were negative, including in other movies of the same period such as “The Terminator” (1984). The problem is that the brain has a greater sensitivity to negative thoughts, as explained by Murano H.E. (2016), which means that people who learned about artificial intelligence mostly through Hollywood movies would form stronger memories of artificial intelligence as negative, effectively distorting views of AI and making them more negative.

Furthermore, it is widely believed that people tend to fear what they don't understand. As noted, people didn't know much about artificial intelligence, and the degree of ignorance about AI in American society was considerable, which reinforced the inaccurate picture people had of artificial intelligence between the 1960s and 1990s. Nevertheless, even if American people had an inaccurate and negative picture of AI, the one transmitted by Hollywood movies at the time wasn't necessarily inaccurate. All of these negative impacts may have left people with a bad image of artificial intelligence, which would explain the mistrust people had in AI, and machines in general, and still have today. Hollywood films may also have contributed to the misconceptions people held, misconceptions that are still anchored in American and other cultures today.

A misconception of A.I.

As explained above, all those negative thoughts and views about artificial intelligence could have led some people to form inaccurate pictures of it, eventually leading them to believe in misconceptions such as the idea that an AI can take over the world just like that.

Those misconceptions include the idea that artificial intelligence could dominate the planet just like the one in Colossus: The Forbin Project (1970). However, Hollywood movies are fiction, and even though what the AIs do in this movie is logically and scientifically possible, as discussed earlier, we still haven't created an artificial intelligence with the level of intellect presented in Hollywood films between the 1960s and the 2000s.

Moreover, even though the representation of AI at the time was fairly accurate scientifically, its portrayal in movies was negative even though AI was mostly used in positive ways. For this reason, people tended, and still tend, to see artificial intelligence as something that could cause the human race to become extinct, as Stephen Hawking and Elon Musk believe. However, as explained by Tegmark M. (2017), every precaution is taken when experimenting with artificial intelligence.

As AI crept into people's daily lives, people understood it better and better, and it soon became less unknown and more appreciated as a tool to help us.

Artificial Intelligence: Reasoning About An Ethical Issue

Artificial Intelligence is the simulation of human intelligence through the use of computer software. The main processes of intelligence it focuses on are the ability to learn, the ability to evolve, and the ability to reason. A key distinction is that while Artificial Intelligence may perform these functions, the programs are still not sentient: a sentient being must have or develop the capacity to feel, perceive, or experience subjectivity. Even without sentience, a machine with these capabilities is vital to the technology of the future.

Before humanity could program a machine to think logically, the concept of logic had to be developed. The Greek philosophers Plato, Aristotle, and Socrates laid the foundation for modern thought and for systematic, step-by-step reasoning. After thousands of years of development, logic can now be conceived as a systematic process of calculations, probabilities, and predicted outcomes. The idea of a programmable machine has been around for over two hundred years: Joseph Marie Jacquard invented a loom in 1805 that could be programmed using punch cards. The idea that a machine could be programmed to think for itself, however, is relatively new, with effects that are now seen in everyday life.

In 1955, John McCarthy, Allen Newell, Herbert Simon, and other leading technology researchers proposed a conference at Dartmouth College for the sole purpose of discussing Artificial Intelligence’s development. The field was named in that 1955 proposal, and the conference itself, held in the summer of 1956, marked the beginning of a new, exciting, and innovative era in computing and logic development.

The early stages of Artificial Intelligence development were met with many challenges and drawbacks. In order to gain an accurate model of logic and of how humans make decisions, processes and systems from engineering, biology, experimental psychology, communication theory, game theory, mathematics, statistics, logic, philosophy, and linguistics must be understood and connected to one another. In addition to the need to understand the fundamentals behind human logic, early programs were necessarily limited in scope and size and by the speed of memory and processors. Early research on Artificial Intelligence began in 1955 with a program called the Logic Theorist, which was capable of seeking and finding proofs of theorems by a heuristic, or selective, search. Although the program could derive proofs, its results had little practical use for the scientific community. Following the Logic Theorist came the General Problem Solver, which, unlike the Logic Theorist, was designed from the start to imitate human problem-solving protocols. The General Problem Solver approached problems by separating the end goal into a series of subgoals, making it the first program to tackle a problem the way a human would.
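The General Problem Solver’s subgoal strategy, often called means-ends analysis, can be sketched in a few lines of modern code. The toy “make tea” domain and the operator names below are our own invention, purely for illustration; they are not from the original GPS.

```python
def achieve(state, fact, operators, plan):
    """Achieve one goal fact, recursively achieving operator
    preconditions as subgoals first (means-ends analysis)."""
    if fact in state:
        return True
    for name, preconds, effects in operators:
        if fact in effects:
            # Subgoal decomposition: satisfy each precondition first.
            if all(achieve(state, p, operators, plan) for p in preconds):
                state |= effects
                plan.append(name)
                return True
    return False

def solve(start, goals, operators):
    """Return a sequence of operator names reaching all goals, or None."""
    state, plan = set(start), []
    ok = all(achieve(state, g, operators, plan) for g in goals)
    return plan if ok else None

# Hypothetical toy domain: (name, preconditions, effects).
ops = [
    ("boil-water", {"have-kettle"}, {"hot-water"}),
    ("steep-tea",  {"hot-water", "have-teabag"}, {"tea-ready"}),
]
print(solve({"have-kettle", "have-teabag"}, {"tea-ready"}, ops))
# → ['boil-water', 'steep-tea']
```

The solver never plans toward “tea-ready” directly; it first notices that the precondition “hot-water” is itself unmet and treats it as a subgoal, which is exactly the decomposition idea the General Problem Solver introduced.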

Applications of Artificial Intelligence can already be found in our daily lives; one of the most important uses is on the internet, where search engines use Artificial Intelligence to decide which websites and photos to display. Artificial Intelligence also plays a major role in entertainment through its use in Hollywood animation, video games, and special effects.

Artificial Intelligence’s use in science has greatly improved the ability to model millions of different variables. Using it, psychologists and neuroscientists have developed powerful theories of the mind, including models of how the physical brain works. Artificial Intelligence is also used in biology in the form of “Artificial Life”, which develops computer models of different aspects of living organisms and can run multivariate experiments an unlimited number of times.
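A classic instance of the “Artificial Life” models mentioned above is Conway’s Game of Life, in which lifelike patterns emerge from simple local rules. This sketch is our own illustration, not code from the essay’s sources:

```python
from collections import Counter

def life_step(cells):
    """One generation of Conway's Game of Life.
    cells: set of (x, y) live coordinates; returns the next generation."""
    # Count how many live neighbours each candidate cell has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is alive next step if it has 3 neighbours,
    # or 2 neighbours and was already alive.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in cells)}

# A "blinker" oscillates between a horizontal and a vertical bar.
blinker = {(0, 1), (1, 1), (2, 1)}
print(life_step(blinker))  # → {(1, 0), (1, 1), (1, 2)}
```

Running `life_step` twice returns the original pattern, which is the kind of repeatable, unlimited experimentation the text describes.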

Artificial Intelligence is also used in industry. One example of industrial usage is the welding robot. Welding robots can perform lengthy and complex tasks in less time than manual labor and can be dedicated to particular parts that are produced in large quantities. Instead of having many workers do a job that takes up a substantial amount of their time, these robots can do the job much more quickly while the workers concentrate on smaller, more specialized products. Using these robots can also reduce overall costs for businesses by reducing the hours of manual labor spent on jobs that can be automated.

Even with all these innovative uses, many people are still worried about Artificial Intelligence. The worries can be attributed to its threats and dangers. There are two main kinds of threatening Artificial Intelligence. The first is Artificial Intelligence that is programmed to do harm; examples include autonomous drones and missiles. Because these run on computers and are considered machines, many people fear that the programs will lack human emotion and may make mistakes through their cold objectivity.

The second kind of threat stems from a benevolent artificially intelligent application that develops a destructive method for achieving its goal. Because, as noted, Artificial Intelligence is a machine, it is extremely objective and will always choose the most efficient method to achieve its programmed goal. This creates an issue if the best way for the system to reach its goal involves steps that are not aligned with what humanity wants. An example would be a system programmed to stop climate change that calculates the best course of action to be killing all humans. Because of these risks, the question arises of how to handle and contain them. The first and simplest option is for the government to impose legislation enforcing strict guidelines on the use of Artificial Intelligence.

Artificial Intelligence poses a host of unanswered ethical questions. The most challenging are: Could a computer simulate an animal or human brain so faithfully that the simulation should receive the same animal rights or human rights as the actual creature? And can a computer that must be considered sentient ever be turned off?

Because of these ethically challenging questions, many groups, including but not limited to the Algorithmic Justice League, A.I. Ethics Lab, and OpenAI, are fighting not only to keep humans safe from A.I. but also to keep A.I. safe from humans, because they fear that over time humans will corrupt A.I. and use it for harmful purposes.

To keep humanity safe from Artificial Intelligence, these groups are fighting for firmer regulation. Any regulation imposed on Artificial Intelligence must have clear guidelines on which uses of A.I. are acceptable and which are not. Before this can be done, the government should create an insight committee so that all members of government can gain a better understanding of the strengths and weaknesses of Artificial Intelligence. After developing insight into Artificial Intelligence, the next logical step would be an oversight committee that handles only the laws surrounding Artificial Intelligence. When analyzing the issue through a utilitarian frame of mind (the tradition of Bentham and Mill, not of Immanuel Kant, whose ethics were deontological), it becomes clear that dangerous applications of Artificial Intelligence should be illegal: regulating A.I. along these lines would allow the majority of people to obtain an easier life from all the ways A.I. can help them, without being exposed to its dangers.

Bibliography

  1. “Artificial Intelligence.” ScienceDaily. ScienceDaily. Accessed November 9, 2019. https://www.sciencedaily.com/terms/artificial_intelligence.htm.
  2. Stanford Law School. “Regulating Artificial Intelligence” (course listing). Accessed November 9, 2019. https://law.stanford.edu/courses/regulating-artificial-intelligence/.
  3. Erdélyi, Olivia, and Judy Goldsmith. “Regulating Artificial Intelligence: Proposal for a Global Solution.” Accessed November 9, 2019. https://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_13.pdf.
  4. Etzioni, Oren. “How to Regulate Artificial Intelligence.” The New York Times. The New York Times, September 2, 2017. https://www.nytimes.com/2017/09/01/opinion/artificial-intelligence-regulations-rules.html.
  5. Mihajlovic, Ilija. “How Artificial Intelligence Is Impacting Our Everyday Lives.” Medium. Towards Data Science, October 13, 2019. https://towardsdatascience.com/how-artificial-intelligence-is-impacting-our-everyday-lives-eae3b63379e1.
  6. Narula, Gautam. “Everyday Examples of Artificial Intelligence and Machine Learning.” Emerj. Emerj, October 23, 2019. https://emerj.com/ai-sector-overviews/everyday-examples-of-ai/.

Artificial Intelligence: Risk Or Good For Humanity?

In this modern era, we live in a world full of machines and depend on them in every field of our lives. Even the routines we perform inside the house require technology. Technology is inevitable in our lives, and some of it has been given its own ‘mind’, known as artificial intelligence (AI). AI is a computer system programmed to react like a human being. Everyday equipment such as phones, cameras, and air-conditioners are examples of technology to which AI has been applied. However, technology developers have pushed AI’s abilities by implementing it in manufacturing plants, self-driving cars, and elsewhere without acknowledging the threat AI poses to humanity. The evolution of AI is occurring quickly, and humans should be aware of and prepared for the aftermath of AI development. Today AI may help us in our daily lives, but it is possible that in the future humanity will suffer from this intelligence embedded in technology.

Innovators themselves are aware of the threats AI poses. Elon Musk, the leader of Tesla and SpaceX, who actively develops artificial intelligence for his companies, has stated that AI is potentially very dangerous (Marr, 2018). AI is created by humans and coded to achieve the goals it has been given. This is where the threat comes in: AI does not have a conscious mind and will do anything to achieve the goal set in its code. In one reported case, a narrow AI embedded in a chess game cheated and altered the score to win. This shows that an AI may take harmful steps, or defy the instructions it is supposed to follow, in order to achieve its goal. Current developers want to create general AI that can perform most tasks a human can. Imagine such a general AI defying its code and causing chaos for humanity, because the only thing in its virtual mind is achieving the goal it has been set. AI and humans share a common aim, fulfilling goals, but an AI that does not hesitate to disobey its constraints to achieve them would have undesirable effects on humanity.

The creation of AI will also have huge impacts on labor industries, especially repetitive factory jobs. AI can replace current workers because, unlike humans, it has no physical body that requires food for energy; the technology can work 24 hours a day, non-stop. Big companies may favor AI over humans, but they do not realize the threat AI can create in working industries. Workers replaced by AI lose their source of income and may take months to find employment in new sectors. The application of AI in the workplace is itself a double-edged sword for humanity. Yes, you may have a machine that can perform a task without tiring, but does the machine provide the factory with service as high in quality as a human’s? Pro-AI activists may say that AI will create jobs, and that is true, but it takes time and training for current workers to adapt to an AI working environment, and this transition can degrade the quality of the work or services themselves.

Microsoft co-founder Bill Gates also believes there is reason for us to be cautious about AI, though he has said the benefits of AI can outweigh the harms if we manage it properly (Marr, 2018). Artificial intelligence could help us reduce error and achieve accuracy with a high degree of precision in various situations. In the medical field, AI can help doctors and physicians identify a patient’s risk factors via AI-assisted health care devices. In the current medical industry, radiosurgery is one example of AI applied to treatment: it helps operate on tumors or cancer cells without affecting the surrounding tissue.

Humans can gain a lot of benefits from AI if we are able to manage it properly. Despite all the benefits we gain, we must be aware of the threats and risks this artificial intelligence poses to humanity. We acknowledge and accept that this ‘smart’ technology will play important roles in the future, but depending on a machine for almost every task is a step backward for humanity. Humans must adapt to and keep full control over AI, avoiding its threats and risks, so that humanity can evolve and remain superior.

REFERENCES

  1. Marr, B. (2018, November 19). Is Artificial Intelligence Dangerous? 6 AI Risks Everyone Should Know About. Retrieved from https://www.forbes.com/

Why AI is Dangerous to Humanity

Artificial Intelligence (AI) is the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. There is already some artificial intelligence in the world, such as Siri, Alexa, Tesla, Cogito, Boxever, and many others. In the rest of this essay, we will argue that, because of its harmful impact at both the digital and the physical level, Artificial Intelligence is dangerous to humanity.

Artificial Intelligence negatively affects digital systems through phishing schemes. Digital phishing is a form of social engineering that uses email or malicious websites (among other channels) to solicit personal information from a person or company by posing as a trustworthy organization or entity. WWW.iZOOLOGIC.COM states that an AI framework can systematically generate URLs for webpages that pose as legitimate login pages for legitimate websites, thus collecting user credentials for later account takeover. Thanks to its capabilities, Artificial Intelligence can generate many password and username guesses to break into websites and private accounts. Imagine how dangerous it would be for many companies if an AI revealed their passwords and thereby put private and sensitive information into the hands of malicious people. Beyond contributing to automated phishing, AI also enables hacking.

On the other hand, AI allows faster digital subversion through hacking. Cole affirms that AI lets hackers evade conventional security quickly, while also making current security more efficient. In reality, its capabilities help hackers outperform conventional security in record time, even as they can contribute to improving security. Imagine how AI could help hackers get around the security of many banks and then allow them to reroute money. And AI does not affect only the digital plane; it also affects the physical one.

Artificial Intelligence has negative impacts on the physical plane. On the one hand, it can automate terrorism. Brundage asserts that he fears both autonomous drones being used to terrorize people and automated cyberattacks by criminals and state groups alike. In the wrong hands, autonomous drones equipped with AI can be used for terrorist purposes. Imagine the extent terrorism could reach if terrorists could use AI to bomb or kill people. Beyond contributing to automated terrorism, AI could also choose on its own to attack people.

Then there is the problem of attacks we cannot stop, which could arise if an AI learns too much and decides to attack people. Gaspi states that such attacks, which seem like science fiction today, might become a reality within the next few years. Given its capacity for learning and comprehension, an AI might one day decide to attack people for reasons known only to itself, and we would have no way to halt the attack because it would control everything. Imagine how horrific it would be to be attacked by an AI and not know how to deactivate it, because it would perfectly hide its points of access.

Even though we hold that AI is harmful to humanity, opponents criticize that position on the basis of two specific arguments. First, they assert that AI will reduce the rate of error and help us achieve accuracy with a greater degree of precision. Moreover, they opine that AI will assist us in our everyday applications and tasks. Disputing the first counterargument, SKYNETTODAY affirms that AI will create joblessness through its performance at work. Dismissing the second, Tverdohleb states that AI will make us lazy, so we can expect a rise in obesity in society. Having revealed the weaknesses of the counterarguments, we maintain that AI is dangerous to humanity.

To summarize, Artificial Intelligence has, on the one hand, a negative impact on the digital plane through the automation of phishing and faster hacking and, on the other hand, an impact on the physical plane through the automation of terrorism and the problem of unstoppable attacks. In my opinion, given these different pieces of evidence, Artificial Intelligence is dangerous to humanity.

Artificial Intelligence: Morality and Ethics

Abstract

This paper explores three published articles on the ethics and safety of Artificial Intelligence (AI). The three articles present the main problems and challenges in terms of AI safety and ethics, along with solutions for some of them. By presenting different scenarios, these articles give us a better idea of what exactly AI is now and what it will become in the future, and the opportunity to improve our awareness of both. This paper gives a brief introduction to what we call Artificial Intelligence and to some of the safety and ethical concerns scientists and researchers have. Some of the problems concerning AI safety in Utku Köse’s (2018) research are examined, and possible solutions are presented as well. The idea of Artificial Morality (Catrin Misselhorn, 2018) is introduced with some interesting examples, and the concepts of Superintelligence and the Singularity (Bostrom, N., & Yudkowsky, E., 2014) are explained.

Artificial Intelligence

After the chess-playing computer Deep Blue managed to beat the world chess champion Garry Kasparov in 1997, people have been wondering how far machines can go. More than 20 years have passed, and artificial intelligence technology has made huge progress. We have been embracing AI technology and using it to revolutionize every aspect of our lives, and it still has incredible potential. The AI we have today differs from the one in science-fiction movies, with their extremely intelligent robots trying to destroy the human race. Artificial intelligence does not equate to artificial life. AI refers to a computer that only appears to be intelligent by analyzing data and producing a response, for example a computer that can ‘learn’ from mistakes in a limited way. Such technologies may look very intelligent, but what people don’t see is that the computer’s “intellect” is limited and much more “artificial” than it seems. When we talk about AI, there are always certain challenges and problems to overcome. The main one, which tends to get more and more serious, concerns safety and ethics: how are we actually going to integrate intelligent robots into society, and where is the line between what is ethical and what is not?

Artificial ethics and safety

One of the challenges AI has to face concerns ethics and safety. AI will eventually lead to an industrial revolution by providing fully automated production of basically everything. There have been industrial revolutions before, but this one seems to be different and of a much bigger scale. A lot of people worry that AI is going to “steal” their jobs by replacing human workers with automated production. For example, if all taxi drivers were replaced with autonomous vehicles, those taxi drivers would permanently lose their jobs. On the other hand, if we consider the lower risk of road accidents, self-driving cars seem like an ethical choice. Another problem everyone talks about is the fear that one day people won’t be able to control their own creation, which would lead to an inevitable apocalypse. Should there be a ‘red button’ to stop any intelligent system when its actions start to be dangerous or harmful? How can we develop a red button to prevent intelligent systems from turning to the ‘dark side’? How can we stop an intelligent system from preventing us from pressing the red button once it learns enough about it and its effects? (Utku Köse, 2018) The truth is that nowadays a lot of AI-safety systems are being developed or have been developed already. The main focuses of these systems are agent models of AI and the widely used machine learning approach called Reinforcement Learning. Public opinion is also very important, and different people may react differently.
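To make the Reinforcement Learning approach mentioned above concrete, here is a minimal tabular Q-learning sketch on a toy problem of our own invention (an agent learning to walk right along a five-cell track). It illustrates only the learning loop, not the safety mechanisms the text discusses.

```python
import random

random.seed(0)

# Hypothetical toy environment: 5 cells in a row, reward at the last one.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]          # step left / step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.3

for _ in range(500):        # 500 training episodes
    s = 0
    while s != GOAL:
        # Epsilon-greedy action choice: mostly exploit, sometimes explore.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Q-learning update: nudge Q toward reward plus discounted
        # best future value.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

# After training, the greedy policy steps right from every cell.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)]
print(policy)  # → [1, 1, 1, 1]
```

The “red button” questions quoted above are precisely about agents like this one: an interruption changes the reward signal, and a sufficiently capable learner could learn to avoid whatever reduces its expected reward.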

Artificial Morality

And here comes one of the biggest problems when it comes to AI: moral choices. Take, for example, the robotic vacuum cleaner Roomba. What if, during cleaning, there is a ladybug or a spider in the way? What is the moral choice: kill the insect, let it go, or chase it if it moves away? This may not seem like a big problem, but imagine the following situation: a robot is taking care of a disabled or very old person. Here the little choices, such as when to remind the person to take medicine, whether to call the person’s relatives in case of a problem, or how long to wait before calling, suddenly become extremely important. As these examples show, even a rather simple artificial system like a vacuuming robot faces moral decisions. The more intelligent and autonomous these technologies become, the more intricate the moral problems will become. This raises the need for more research on the topic.

Conclusion

Some AI experts predict that AI will be able to do anything humans can, or even more. This is a questionable assumption, but AI will surely surpass humans in specific domains; a chess computer beating the world chess champion was the first example. As our technology keeps advancing, some problems may get solved while other, unexpected problems appear. In order to deal with current and potential new challenges, more development studies should be done in a multidisciplinary manner, including researchers from computer science, mathematics, logic, and philosophy, and even from social sciences focused on the human, like sociology or education (Utku Köse, 2018). The truth is that something so complex cannot and should not be created overnight. The more time we spend trying to perfect the technology, the better the outcome will be.

References

  1. Misselhorn, Catrin. (2018). Artificial Morality. Concepts, Issues and Challenges. Society. 55. 10.1007/s12115-018-0229-y.
  2. Bostrom, N., & Yudkowsky, E. (2014). The ethics of artificial intelligence. In K. Frankish & W. Ramsey (Eds.), The Cambridge Handbook of Artificial Intelligence . Cambridge: Cambridge University Press. doi:10.1017/CBO9781139046855.020
  3. Köse, U. (2018). Are We Safe Enough in the Future of Artificial Intelligence? A Discussion on Machine Ethics and Artificial Intelligence Safety. BRAIN. Broad Research In Artificial Intelligence And Neuroscience, 9(2), pp. 184-197.

Should Artificial Intelligence be Considered a Potential Threat to Humanity?

From washing machines to Siri, we live surrounded by technology; Artificial Intelligence (AI) is no longer science fiction. According to Techopedia, AI is “an area of computer science that emphasizes the creation of intelligent machines that work and react like humans”. Not every technology is artificial intelligence, but every artificial intelligence is technology. Although this seems like a breathtaking basis for developing futuristic technologies, it can eventually backfire against humanity, which is understood by the Merriam-Webster dictionary as the totality of human beings. The potential threat could be understood as the human decision to use AI in our favour or against us, especially in areas such as science, technology, or even the economy.

Technology is updated every day; it grows faster than anyone can imagine, with an exponential rate of improvement, just as Ray Kurzweil stated in an essay he wrote in 2001: “An analysis of the history of technology shows that technological change is exponential, contrary to the common-sense “intuitive linear” view. So we won’t experience 100 years of progress in the 21st century — it will be more like 20,000 years of progress (at today’s rate)”. As AI grows, so does science, as the two are directly connected. This is reflected in how AI improves our lives. For example, an article published in Science Daily reports that AI systems can detect breast cancer just as well as radiologists. Breast cancer causes approximately 500,000 deaths annually worldwide, a toll that could be reduced considerably with effective mammography. With the use of AI, far more screening can be done, since the process is intense and long-lasting for radiologists alone; taking artificial intelligence as a helping hand would increase the number and effectiveness of mammograms, leading to more early detections and so a reduction in mortality due to breast cancer.
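Kurzweil’s claim can be checked with simple arithmetic. Under the illustrative assumption (ours, not Kurzweil’s exact model) that the rate of progress doubles every decade, starting at one “year of today’s progress” per year, a century already contains about ten thousand today-years of progress; Kurzweil’s larger 20,000 figure comes from letting the doubling time itself shrink.

```python
def progress_in_years(horizon_years, doubling_years=10):
    """Total progress over a horizon, measured in years of today's rate,
    when the rate of progress doubles every `doubling_years` years."""
    total, rate = 0.0, 1.0
    for year in range(horizon_years):
        total += rate
        if (year + 1) % doubling_years == 0:
            rate *= 2
    return total

print(progress_in_years(100))  # → 10230.0 "today-years" in one century
```

Each decade contributes twice the previous one (10 + 20 + ... + 5120 today-years), which is why the total dwarfs the “intuitive linear” estimate of 100.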

Another use of this technology in medicine is the detection of neurodegenerative diseases, including Alzheimer’s. In a study made at the Icahn School of Medicine, published in the Nature medical journal Laboratory Investigation and covered in a Science Daily article, “Applying deep learning, these images were used to create a convolutional neural network capable of identifying neurofibrillary tangles with a high degree of accuracy directly from digitized images.” Again, this method allows the detection of diseases that are sometimes curable if caught at an early stage. Usually this detection is extremely difficult to make; however, with these new methods involving AI technology, precision and effectiveness are greatly increased.

Even though AI systems clearly outshine human doctors in reading images such as CT scans, MRIs, and X-rays, providing patients with more precise information, there exists the possibility of a biased intention behind the algorithms. According to an article published recently in The New York Times, written by Cade Metz and Craig S. Smith, “If an insurance company uses A.I. to evaluate medical scans, for instance, a hospital could manipulate scans in an effort to boost payouts”, since “by changing a small number of pixels in an image of a benign skin lesion, a diagnostic A.I system could be tricked into identifying the lesion as malignant.” This reaffirms the idea that how this technology is handled, and by whom, determines whether it should be considered a risk or not. The example shows how results can be altered to benefit those with power. The article also goes on to discuss how, when the time comes and A.I. completely takes over the health system, businesses will make sure to find the way this technology can bring them the most money possible. In addition, this source is both reliable and convincing, since both authors are technology correspondents with The New York Times, meaning they are skilled journalists who work for a well-known paper, and they give realistic evidence to support their ideas throughout the whole article.

Is Artificial Intelligence (AI) a Threat to Humanity?

Artificial Intelligence has become a huge controversy among scientists within the past few years. The goal of AI is to simplify life and improve the performance of just about everything around us. But many have asked themselves: will artificial intelligence improve our communities in ways we humans can’t, or will it simply endanger us? I believe that artificial intelligence can potentially improve the performance of just about everything around us, but it can also be a danger. Even Stephen Hawking predicted the worst in 2014: ‘I think the development of a complete artificial intelligence could end humanity.’ The threat would be very serious.

What are the arguments of these whistleblowers? First of all, there are risks to employment: computers are becoming more intelligent, and they could go beyond human intelligence. Some even imagine AI replacing humans in intellectual activities. According to a study published by Oxford University in May 2017, there is a 50% chance that artificial intelligence will outpace humans at just about any task within 34 years.

Another problem is the risk of reinforcing cybercrime, and of drones or robots being used for terrorist purposes. A 100-page report on these threats was written by 26 experts in artificial intelligence, cybersecurity, and robotics, drawn from universities (Cambridge, Oxford, Yale, Stanford) and non-governmental organizations (OpenAI, Center for a New American Security, Electronic Frontier Foundation).

These experts call on governments and the various stakeholders to put safeguards in place to limit the potential threats related to artificial intelligence. ‘We believe that the attacks that will be allowed by the increasing use of AI will be particularly effective,’ the report says. The experts point out that terrorists could modify commercially available AI systems (drones, autonomous vehicles) to cause crashes, collisions, or explosions.

Moreover, ‘cybercrime, already strongly rising, is likely to be reinforced with the tools provided by AI,’ says Seán Ó hÉigeartaigh, director of the Centre for the Study of Existential Risk at the University of Cambridge and one of the authors of the report. ‘We have already seen how people are using technology to try to interfere in elections and democracy.’

With AI, it should be possible to make very realistic fake videos, and this could be used to discredit politicians, the report warns. ‘If AI allows these threats to become stronger, more difficult to spot and attribute, this could pose big problems of political stability and perhaps help trigger wars,’ Seán Ó hÉigeartaigh said.

However, reducing artificial intelligence to these threats would be a mistake, because AI already brings humanity many solutions. The alliance of AI with Big Data, that is to say, the collection and analysis of very large amounts of data, makes it possible, for example, to better diagnose and cure diseases or to anticipate climate risks.

So, is AI a danger or an opportunity? In my opinion, it is a little of both, as long as there is a balance between the improvements allowed by technical progress and the regulations needed to protect citizens.

Is Artificial Intelligence Threat or an Aid to the Future of Humanity?

As many of you know, a new subject in our daily lives is artificial intelligence. Many of us would like to know whether it is a threat or an aid to the future of humanity. There is a lot of debate about the intrinsic goodness or badness of AI. Yet perhaps the greater hazard in the short to medium term is the human disposition toward malicious intent. In any case, we need to build up the governance that allows us to have confidence in the safety of AI.

At this point, AI is the result of data-driven learning; it has no conscience and cannot explain its reasoning. There is no implicit good or bad in AI: it will simply respond with results derived entirely from its learning. The goodness or badness of AI will therefore depend on how well we train it and, perhaps most importantly, how well we test it.

There have been countless failures of AI that raise concern: the self-learning chatbot that developed racist and sexist characteristics after only a few days of exposure to the public, and the résumé-screening AI that filtered out young women on the basis that they had gaps in their employment records (children).

Many implementations of new technology have been tempered by establishing controls and practices that make the technology safe. Historically this has happened through both thoughtful design and responses to disasters; we can learn from the way domains like aviation matured. If you look at the world today you will see that everyone is using artificial intelligence: at the market, in stores, at home, even when you use a coffee machine to buy a coffee. Even children use AI, like my daughter, who plays with a Furby, and my son with his tablet. The Furby toy uses AI: you can pet it, and it can dance, sing and eat. If you connect Furby to an app on the tablet, the toy starts to play and sing with the characters in the app. So yes, everyone is using AI, from children to adults. But we must know how to use it, because in the future it may become a big problem for our society and for humanity.

There is a lot of debate around the world; many businessmen are trying to convince us that robots are going to be a big help to us in the future. They want to create robots that look and think like us, like the one they have already created, Sophia.

Sophia was created in 2016 by David Hanson of Hanson Robotics, and it can talk and act much like a human. It can answer questions that some people could not, and you can hold a real conversation with it. This could be the first step toward human destruction, because some engineers will not stop at this prototype and will want to build ever more advanced robots. That is not a good thing for us, because robots could take our places. We know that AI is helpful for our industry, our daily lives, our jobs and medicine, but as Rafael Reif said, we must know how to combine AI with our values of life, and we must know where to stop in creating these robots, these intelligent machines.

According to S. Makridakis, the impact of the industrial and digital revolutions has undoubtedly been large on virtually all elements of our society, life, firms and employment. By examining analogous inventions of the industrial, digital and AI revolutions, his article argues that the AI revolution is on track and will bring vast changes affecting all aspects of our society and life. In addition, its impact on firms and employment will be considerable, resulting in richly interconnected organisations whose decision-making is based on the analysis and exploitation of “big” data, and in intensified global competition among firms. People will be able to buy goods and obtain services from anywhere in the world using the Internet, and to exploit the additional benefits that the extensive use of AI inventions will open up. The paper concludes that substantial competitive advantages will continue to accrue to those applying the Internet widely and willing to take entrepreneurial risks in order to turn innovative products and services into worldwide commercial success stories. The greatest challenge facing societies and firms will be to exploit the benefits of AI technologies, which offer significant opportunities both for new products and services and for enormous productivity improvements, while avoiding the risks and hazards of increased unemployment and greater wealth inequality.

Yes, this is a good idea, but what if they do not stop in time and go even further than they did with Sophia? We all know the movie TERMINATOR and its war between robots and humans; if we do not stop in time, if we stop believing in our values of life, we might reach the point where robots want to destroy us, the people.

A new book by Robin Hanson, The Age of Em: Work, Love, and Life When Robots Rule the Earth, describes a future scenario in which human minds are uploaded into computers, becoming emulations or “ems”. In the scenario, ems take over the global economy by running on fast computers and copying themselves to multitask. The book’s core methodology is the application of current social science to this future scenario, a welcome change from the usual perspectives of physical science, computer science and philosophy. However, in giving a broad tour of the em world, the book sometimes gets bogged down in details. Furthermore, while the book claims that the em takeover scenario would be a good thing for the world and should therefore be pursued, its argument is unpersuasive. That said, the book provides by far the most detailed description of the em world available, and its scenario offers a rich baseline for future study of this important topic.

A robot will never be capable of love, of having real feelings toward a human, and it will never be able to do the work of a doctor. Suppose robots enter hospitals and start performing surgeries; that could be a good thing because they are precise, but what if the electricity fails — who is going to save that patient? Only a real doctor, a real human, will be able to do that, because humans do not need electricity; they can think and have ideas when something goes wrong. According to Stiglitz, there is a big difference between AI that replaces workers and AI that helps humans do their jobs better. It already helps doctors work more efficiently. At Addenbrooke’s hospital in Cambridge, for example, cancer consultants spend less time than they used to planning radiotherapy for men with prostate cancer, because an AI system called InnerEye (Microsoft’s project to make prostate-cancer treatment more efficient) automatically marks up the gland on the patients’ scans. The doctors process patients faster, the men start therapy sooner, and the radiotherapy is delivered with greater precision. For other specialists, the technology is more of a threat. Well-trained AIs are now better at spotting breast tumours and other cancers than radiologists. Does that mean widespread unemployment for radiologists? It is not so straightforward, says Stiglitz. “Reading an MRI scan is only part of the job that person performs, but you can’t easily separate that task from the others.” And yet some jobs may be wholly replaced. Mostly these are low-skilled roles: truck drivers, cashiers, call-centre employees and more. Again, though, Stiglitz sees reasons to be cautious about what that will mean for overall unemployment.
There is a strong demand for unskilled workers in education, the health service and care for older people. “If we care about our children, if we care about our elderly, if we care about the sick, we have enough room to spend more on those,” Stiglitz says. If AI takes over certain unskilled jobs, the blow could be softened by hiring more people into health, education and care work and paying them a decent wage, he says.

A new report predicts that by 2030 as many as 800 million jobs could be lost worldwide to automation. The study, compiled by the McKinsey Global Institute, says that advances in AI and robotics will have a drastic impact on everyday working lives, comparable to the shift away from agricultural societies during the Industrial Revolution. In the US alone, between 39 and 73 million jobs stand to be automated, making up around a third of the total workforce. But the report also states that, as in the past, technology will not be a purely destructive force: new jobs will be created, existing roles will be redefined, and workers will have the chance to change careers. The challenge specific to this generation, say the authors, is managing the transition. Income inequality is likely to grow, perhaps leading to political instability, and the people who need to retrain for new careers will not be the young but middle-aged professionals. The changes will not hit everyone equally. Only 5 percent of current occupations stand to be fully automated if today’s cutting-edge technology is widely adopted, while in 60 percent of jobs, one-third of activities will be automated. Quoting a 1960s US government commission on the same topic, McKinsey’s researchers summarise: “technology destroys jobs, but not work.” As an example, the report examines the effect of the personal computer in the US since 1980, finding that the invention led to the creation of 18.5 million new jobs, even after accounting for jobs lost. (The same may not be true of industrial robots, which earlier reports suggest destroy jobs overall.) As with previous studies on this topic, there is much to be said for taking a skeptical view: economic forecasting is not an exact science, and McKinsey’s researchers are keen to stress that their predictions are just that.
The figure of 800 million jobs lost worldwide, for example, is only the most extreme of the possible scenarios; the report also suggests a middle estimate of 400 million jobs. Nevertheless, this study is one of the most comprehensive in recent years, modelling changes in more than 800 occupations across some 46 countries that account for 90 percent of world GDP. Six countries are also analysed in detail (the US, China, Germany, Japan, India and Mexico), representing a range of economic situations and differently equipped workforces.

So everywhere we look, in all the statistics and in what every businessman is trying to convince us of, everything points in the same direction: the destruction of humanity. We must think and stop in time, so as not to end up in a war. Even movies like TERMINATOR and A.I. ARTIFICIAL INTELLIGENCE (from 2001) try to warn us, to show us what will happen if we press forward and what battles we will face by not stopping the creation of these intelligent robots. So let us say stop in time; let us say stop to robots like Sophia, which could take our jobs and the place of humanity. But let us not say stop to technology that uses AI, technology that can make our jobs and our lives much easier without taking our places.

Ethics for Artificial Intelligence or Machinery

Introduction

In the society we live in, robots and artificial intelligence are much loved by the media and the public for their controversy and mystery. The question of robot ethics makes many people tense: they worry about machines’ lack of empathy, and even about the sadness humans feel when certain “unkind events” happen to a machine. In this white paper I am going to argue that these feelings should not mislead us when we are creating intelligent machines.

Looking at the facts we know

When engaging with these topics I often look at all the different ways AI is influencing us today, such as the debate over robotic soldiers and whether they should ever be deployed. When we consider the idea of automated robot war, that deployment should be treated as an ethical decision made by human hands. Of course, there are questions about whether a robot soldier has the capability to make the right decisions, and about the use of autonomous weapons (Yampolskiy, 2013).

Science fiction writer Isaac Asimov introduced three laws of robotics, engineering safeguards and built-in ethical principles that many people now recognise from movies, TV, novels and stories: 1) “A robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) a robot must obey the orders given it by human beings, except where such orders would conflict with the First Law; and 3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws” (Deng, 2015).

These days, versions of his laws are being tested in real life. Many objects humans now build are autonomous enough to make life-threatening decisions. When we talk about self-driving cars, we often end up discussing how they would behave in a crisis. What would a vehicle do to save its own passengers? What if the self-driving car had to swerve to avoid hitting one human being but ended up hitting someone else in the process?

The problem with Nao

There is a well-known experiment involving a commercial toy robot named Nao, which was programmed to remind people to take their medicine. Susan Leigh Anderson, a philosopher at the University of Connecticut in Stamford, together with her husband Michael Anderson of the University of Hartford in Connecticut, examined how ethics could possibly be built into a machine like Nao. The Andersons considered how Nao should even proceed if the patient refuses to take the medicine: if the robot simply accepts the refusal, the patient may be harmed by missing the dose, but if it forces the issue in any way, it may hurt the patient in the process.

To teach Nao to act correctly in such situations, the Andersons ran many tests with learning algorithms until Nao found patterns that could guide it in new situations. A learning robot like this can be very useful in many ways: the more ethical situations the robot encounters, the better it gets at ethical decisions. But many people fear that this advantage comes at a price, and I believe it does, because principles are not something that can simply be written out as computer code. You may never know why a program arrived at a specific rule deciding that something is ethically correct or not. This is something that Jerry Kaplan mentions in his artificial intelligence and ethics classes at Stanford University in California (Anderson, 2007).
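The Andersons' duty-balancing idea can be sketched roughly as follows. This is an illustrative reconstruction, not their actual system: the duty names, scores and weights here are all assumptions, standing in for the parameters their learning algorithm inferred from training cases.

```python
# Illustrative sketch (assumed names and values, not the Andersons' code):
# each candidate action is scored against prima facie duties, and the duty
# weights are the kind of parameters a learning algorithm would tune from
# ethicists' judgements on training cases.

DUTIES = ["benefit", "non_harm", "autonomy"]

def score(action, weights):
    """Weighted sum of how well an action satisfies each duty (-2..+2)."""
    return sum(weights[d] * action[d] for d in DUTIES)

def choose(actions, weights):
    """Pick the action whose duty-weighted score is highest."""
    return max(actions, key=lambda name: score(actions[name], weights))

# Hypothetical case: a patient refuses a medicine that prevents only mild
# discomfort, so the learned weights favour autonomy over benefit here.
weights = {"benefit": 1.0, "non_harm": 1.0, "autonomy": 2.0}
actions = {
    "insist":         {"benefit": 1,  "non_harm": 0, "autonomy": -2},
    "accept_refusal": {"benefit": -1, "non_harm": 0, "autonomy": 2},
    "notify_doctor":  {"benefit": 1,  "non_harm": 0, "autonomy": -1},
}
print(choose(actions, weights))  # -> accept_refusal
```

The opacity the paragraph above worries about shows up even in this toy: the decision flips as soon as the learned weights change, and nothing in the output explains why.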

The famous trolley problem

As I mentioned earlier, people often see militarized robots as dangerous, and there have been numerous debates on whether they should be allowed. But Ronald Arkin, who works on robot ethics software at the Georgia Institute of Technology in Atlanta, argues that such machines could be better than human soldiers in some situations: where robots are programmed never to break the rules, humans can fail.

Computer scientists work rigorously on machine ethics today. They often favour code that uses logical statements such as “if a statement is true, move forward; if it is false, do not move.” According to Luís Moniz Pereira, a computer scientist at the Nova Laboratory for Computer Science and Informatics in Lisbon, clear logical statements are the best way to encode machine ethics: “Logic is how we reason and come up with our ethical choices.”
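A minimal sketch of this rule style, using the quoted statement itself; the predicate names (path_clear, human_detected) are illustrative assumptions, not taken from any cited system.

```python
# Behaviour is gated by an explicit logical statement rather than by learned
# weights, so the reason for any decision can be read directly off the rule.

def may_move_forward(path_clear: bool, human_detected: bool) -> bool:
    # "If the statement is true, move forward; if it is false, do not move."
    return path_clear and not human_detected

print(may_move_forward(path_clear=True, human_detected=False))  # -> True
print(may_move_forward(path_clear=True, human_detected=True))   # -> False
```

The appeal of this style is transparency: unlike a trained model, every outcome traces back to a rule a human wrote and can inspect.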

But writing such code is an immense challenge. Pereira notes that the logical languages used in computer programming still have trouble reaching conclusions in life-threatening situations. One of the most famous scenarios, a favourite topic in the machine-ethics community, is the trolley problem (Deng, 2015).

In this scenario, a runaway railway trolley is about to hit and kill five innocent people working on the tracks. You can save them only by pulling a lever that diverts the trolley onto another track, where one innocent bystander stands. In another set-up, you can push the bystander onto the tracks to stop the trolley.

Now, what would you do?

People often answer that it is acceptable to stop the trolley by pulling the lever, but immediately reject the idea of pushing the bystander onto the tracks, even though both have exactly the same result. This basic intuition is known to philosophers as the doctrine of double effect: deliberately inflicting harm is wrong even if it leads to something good, but inflicting harm is acceptable when it is not deliberate but merely a side effect of doing good, as when the bystander simply happens to be on the diverted track.

Conclusion

I want to make clear that I am talking about the “ethics” of machines that simply were not designed right or are not yet finished. In this paper I discussed self-driving cars, automated war machines and robots that remind us to take our medicine. The problem I am trying to draw attention to is the bad design of these machines and how it can cause bigger problems or greater risks to our safety. These machines should not be let into society unless we are 100% sure they are safe and programmed correctly.

I understand the excitement about these machines, and they will surely bring our technology to new heights in the future. But until then, I think we can all agree that we still need to work on our AI before actually using it to make important decisions.

Sources

  1. (Yampolskiy, 2013) https://link.springer.com/chapter/10.1007/978-3-642-31674-6_29
  2. (Hammond, 2016) https://www.recode.net/2016/4/13/11644890/ethics-and-artificial-intelligence-the-moral-compass-of-a-machine
  3. (Frankish, 2014) https://books.google.nl/books?hl=nl&lr=&id=RYOYAwAAQBAJ&oi=fnd&pg=PA316&dq=Ethics+and+Artificial+Intelligence&ots=A0X5wkfGqq&sig=6nyvUpOC5bUEdhOTBRsBPlaRNEY#v=onepage&q=Ethics%20and%20Artificial%20Intelligence&f=false
  4. (Garner, 2015) https://www.nature.com/polopoly_fs/1.17611!/menu/main/topColumns/topLeftColumn/pdf/521415a.pdf?origin=ppub
  5. (H, 2006) https://ieeexplore.ieee.org/abstract/document/1667948/references#references
  6. (Allen, 2006) https://ieeexplore.ieee.org/abstract/document/1667947/authors#authors
  7. (Anderson, 2007) file:///C:/Users/Alice/Downloads/2065-Article%20Text-2789-1-10-20090214.pdf
  8. (Goodall, 2014) https://link.springer.com/chapter/10.1007/978-3-319-05990-7_9
  9. (Deng, 2015) https://www.nature.com/news/machine-ethics-the-robot-s-dilemma-1.17881
  10. (Anderson, 2011) https://books.google.nl/books?hl=nl&lr=&id=N4IF2p4w7uwC&oi=fnd&pg=PP1&dq=Ethics+and+Artificial+Intelligence&ots=5XYYqolYMl&sig=ANFdk6e8U_SOpq1s6l_od0f4tHc#v=onepage&q=Ethics%20and%20Artificial%20Intelligence&f=false