Ever since World Chess Champion Garry Kasparov lost to IBM's Deep Blue computer in 1997, the possibility of creating a self-learning Artificial Intelligence (AI) has effectively ceased to be associated solely with the science-fiction genre in literature and cinema; instead, it has become a subject of scientific futurology. What this means is that it is only a matter of time before genuinely thinking intelligent machines are built on an industrial scale.
Nevertheless, as practice shows, there are still many people whose intellectual inflexibility prevents them from recognizing the full validity of this suggestion, which in turn manifests itself in their strongly negative attitude towards the very idea that such machines could be built at all. For example, according to Dreyfus (1992), the reason a machine endowed with AI cannot engage in genuine thinking is that, due to its essentially mechanical nature, such a machine would be unable to interact actively with surrounding realities, which in turn would prevent it from gaining an experiential, common-sense understanding of the dialectical relationship between causes and effects.
Dreyfus's suggestion is based upon his belief that human cognition is non-computational in nature. According to him, it is by being exposed to the situational context of objects and events that we become aware of their qualitative essence: "Our global familiarity... enables us to respond to what is relevant and ignore what is irrelevant without planning based on purpose-free representations of context-free facts" (p. xxix). Nevertheless, recent discoveries in the fields of psychology, information technology, neuro-medicine and genetics, as well as the application of plain common-sense logic, render his line of argumentation conceptually fallacious.
The reason for this is simple: an analysis of the metaphysical and structural subtleties of human reasoning reveals that the manner in which people address existential challenges is essentially similar to the manner in which neuro-computers (perceptrons) address a variety of cognitive tasks. In order to substantiate the validity of this statement, we will have to explore the issue at length.
Let us say we have the function Y = (8X + 10)/9. What would Y be when X equals 5? To come up with the answer, we multiply 8 by 5, add 10, and then divide the result by 9. The sequence of steps by which we solved this function is called an algorithm. And the utilization of mathematical algorithms is the fundamental principle upon which the functioning of a Turing machine is based: "Of course, a Turing machine cannot boil an egg, or unlock a door. But the algorithm... is a description of how to boil an egg. And these descriptions can be coded into a Turing machine, given the right notation" (Crane, 2003, p. 100). To be compatible with the principle of a Turing machine's functioning, every task must be algorithmically formalized.
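To make the notion of an algorithm concrete, the procedure just described can be written down as a short program. The following Python sketch is purely illustrative (the function name solve and the test value X = 5 are our own assumptions, not taken from any cited source):

    def solve(x):
        # Algorithm for Y = (8X + 10) / 9, carried out step by step,
        # exactly as a Turing-style machine would require it to be specified.
        step1 = 8 * x        # multiply 8 by X
        step2 = step1 + 10   # add 10
        y = step2 / 9        # divide the result by 9
        return y

    print(solve(5))  # prints 5.555..., the value of Y when X equals 5

Every step is spelled out in advance; nothing is left to be filled in from context.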
For example, in order for such a machine to draw the silhouette of a person's head, it would have to be provided with formulas for each consecutive phase of the process (the drawing of a straight nose would be described by a linear function, the drawing of a rounded forehead by a hyperbola, etc.). And if an error occurs while the machine executes even one formalized task, the eventual outcome will be wrong: in a Turing machine, even the slightest error is fatal.
Nevertheless, there is also another way to solve the earlier mentioned function: by constructing a graph within the two-dimensional system of X and Y coordinates. One might wonder what constitutes the fundamental difference between these two methods of solving the same function; after all, both are concerned with the application of abstract mathematics. But let us imagine a situation in which we have the graph of a silhouette but do not have a formula describing it. It would be a substantial challenge to work out a mathematical function for every situational variable of the graph, which is what would be required before a Turing machine could algorithmically process the drawing of the silhouette.
Moreover, upon encountering the absence of even an utterly insignificant piece of algorithmic data regarding the process of drawing the silhouette, a Turing machine would come to a stall. And yet just about anybody would be able to reconstruct a missing part of the silhouette with ease if, for example, some maliciously minded individual erased it. The reason for this is simple: unlike a Turing machine, people are endowed with associative memory, which, according to Dreyfus, allows them to gain propositional knowledge.
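For illustration only, here is a minimal sketch of such a reconstruction, assuming nothing more than simple linear interpolation over the surviving points of a graph (the sample points and the "erased" region are hypothetical):

    import numpy as np

    # Known points of the silhouette graph; the segment between x = 4 and x = 7
    # has been erased, so no formula for it is available.
    x_known = np.array([0, 1, 2, 3, 4, 7, 8, 9, 10])
    y_known = np.array([0.0, 0.8, 1.5, 1.9, 2.0, 1.6, 1.1, 0.5, 0.0])

    # The missing region is filled in from its surroundings rather than from an
    # explicit algebraic formula, roughly in the spirit of associative completion.
    x_missing = np.array([5, 6])
    y_guessed = np.interp(x_missing, x_known, y_known)
    print(dict(zip(x_missing.tolist(), y_guessed.round(2).tolist())))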
In turn, this brings us to the question: is people's associative memory (propositional knowledge) computational? Yes, just as the procedure of constructing a graph on the X and Y axes is. For example, the process of a child's upbringing is nothing but the process of the child's parents prompting him or her to memorize a number of different behavioral modes (graphs), meant to be deployed in accordance with the qualitative essence of the existential challenges such a child is expected to face later in life. After having memorized these behavioral stereotypes, the child is able to choose a proper behavioral strategy when dealing with formally unfamiliar but qualitatively similar situations, that is, situations to which the memorized stereotypes apply. And, most importantly, when addressing life's challenges while continuing to observe the earlier memorized behavioral stereotypes, the child also gains propositional knowledge.
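The same point can be sketched computationally: a new situation, described by a few numeric features, is matched against memorized "behavioral stereotypes" by similarity, and the closest one is deployed. The features, patterns, and distance measure below are illustrative assumptions only:

    # Memorized behavioral stereotypes: (feature vector, behavior to deploy).
    # Hypothetical features: perceived danger, time pressure, familiarity.
    memory = [
        ((0.9, 0.8, 0.1), "retreat and ask for help"),
        ((0.2, 0.1, 0.9), "proceed as usual"),
        ((0.5, 0.9, 0.4), "act quickly but cautiously"),
    ]

    def distance(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

    def choose_behavior(situation):
        # A formally unfamiliar situation is mapped onto the qualitatively
        # most similar memorized pattern; no explicit rule for it is needed.
        return min(memory, key=lambda m: distance(m[0], situation))[1]

    print(choose_behavior((0.8, 0.7, 0.2)))  # -> retreat and ask for help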
The following is how Changeux (1994, p. 193) outlines the functional subtleties of people's artistic cognition: "Experimental psychology teaches us... that the memorized configuration is integrated into a highly organized, hierarchical ensemble, a taxonomic chart, a system of classification already in existence." What this means is that, in order for an individual to deal effectively with a particular situation, he or she does not have to possess the actual experience of having dealt with the same situation in the past. The realization of this fact represents the conceptual foundation upon which theorizing about the principle of AI's functioning is based.
As it was prophesied by one of the most prominent theoreticians of AI, Marshall Yovits (1960, p. viii): "It appears that certain types of problems, mostly those involving inherently non-numerical types of information, can be solved efficiently only with the use of machines exhibiting a high degree of learning or self-organizing capability. Examples of problems of this type include automatic print reading, speech recognition, pattern recognition, automatic language translation, information retrieval, and control of large and complex systems." Apparently, Yovits realized a simple fact: in order for computational systems to attain the full extent of operational efficiency, they should not be programmed (as is the case with a Turing machine), but rather allowed to engage in self-learning.
The validity of Yovits's suggestion was illustrated in the late 1950s, when Frank Rosenblatt built the first perceptron, which was able to recognize letters in typed text. It therefore comes as no particular surprise that Rosenblatt's invention is now commonly referred to as AI's starting point:
"Rosenblatt's schemes quickly took root, and soon there were perhaps as many as a hundred groups, large and small, experimenting with the model either as a learning machine or in the guise of adaptive or self-organizing networks or automatic control systems" (Minsky & Papert, 1986, p. 19). As of today, properly functioning neuro-computers are no longer mentioned as an element of futuristic living, but as an integral part of today's highly technological, post-industrial realities. And it is precisely the fact that the operating subtleties of neuro-computers are attuned to the workings of the biological brain which explains the phenomenon of their exponentially increasing popularity.
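Returning to Rosenblatt's starting point, the following is a minimal sketch of the classic perceptron learning rule applied to a toy letter-recognition task (two 3x3 pixel patterns standing in for the letters T and L); the patterns, learning rate, and number of passes are our own illustrative assumptions, not a description of Rosenblatt's actual hardware:

    import numpy as np

    # Toy 3x3 pixel patterns: a crude "T" (label +1) and a crude "L" (label -1).
    T = np.array([1, 1, 1,  0, 1, 0,  0, 1, 0])
    L = np.array([1, 0, 0,  1, 0, 0,  1, 1, 1])
    samples = [(T, 1), (L, -1)]

    w, b, lr = np.zeros(9), 0.0, 0.1      # weights, bias, learning rate

    for _ in range(20):                   # repeated presentation of the examples
        for x, target in samples:
            prediction = 1 if w @ x + b > 0 else -1
            if prediction != target:      # weights are adjusted only on mistakes
                w += lr * target * x      # the classic perceptron update
                b += lr * target

    for x, target in samples:
        print(target, 1 if w @ x + b > 0 else -1)  # the two columns should agree

The machine is never given a rule for telling the two letters apart; it arrives at one by being corrected on its mistakes.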
Nowadays, these computers are able not only to detect consistent patterns in processed data, but also to form their own associative memory. Just as is the case with people, neuro-computers organize semiotic signifiers within semantically structured memory clusters, which in turn allows such a computer to generate associations in the course of performing a particular computational task. It is needless to mention, of course, that this represents another important step towards creating a genuinely thinking machine endowed with AI.
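One simple way such associative recall can be realized is a small Hopfield-style network that stores a few patterns and completes a corrupted cue; the patterns and update schedule below are assumptions chosen only to illustrate the principle:

    import numpy as np

    # Two stored patterns (+1/-1 coded); think of them as memory clusters.
    patterns = np.array([[1, -1, 1, -1, 1, -1],
                         [1, 1, 1, -1, -1, -1]])

    # Hebbian weight matrix: units that are active together reinforce each other.
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)

    cue = np.array([1, -1, 1, 1, 1, -1])   # first pattern with one unit flipped

    state = cue.copy()
    for _ in range(5):                     # repeated synchronous updates
        state = np.where(W @ state >= 0, 1, -1)

    print(state)                           # settles back to [ 1 -1  1 -1  1 -1]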
The context of what has been said earlier helps us to define the argumentative fallaciousness of another ardent opponent of the idea that AI can engage in genuine (human) thinking: John Searle. According to this author's line of reasoning, sublimated in his famous Chinese room argument, computer programs can never possess a genuine understanding of the implications of the data they process, which means that the building of a genuinely thinking intelligent machine is impossible. Searle (1991, p. 47) defines the quintessence of his argument with perfect clarity: "I believe that it is a profound mistake to try to describe and explain mental phenomena without reference to consciousness. The attribution of any intentional phenomena to a system, whether computational or otherwise, is dependent on a prior acceptance of our ordinary notion of the mind, the conscious phenomenological mind." Apparently, it was Searle's clearly defined emotional discomfort with the idea that one can be fully capable of rationalistic reasoning, while remaining unconscious of a number of that reasoning's implications, which prompted him to come up with the statement quoted above.
And yet even a brief analysis of the qualitative essence of the mind's neurological workings refutes the soundness of Searle's assumption. After all, history features many examples of the mind proving its ability to address cognitive tasks effectively without consciousness playing any role in the process. The most notable is Dmitri Mendeleev's discovery of the Periodic Table of the Chemical Elements, which took place while the famous Russian chemist was asleep. According to Atkins (1995, p. 86): "It is said that during a brief nap in the course of writing a textbook of chemistry, for which he (Mendeleev) was struggling with the problem of the order in which to introduce the elements, he had a dream. When he awoke, he set out his chart, in virtually its final form. The date was February 17, 1869." Apparently, it is no accident that the sleeping brain remains nearly as metabolically active as the waking one. The reason for this is simple: during the nighttime, the brain processes the information accumulated in the course of its daytime functioning.
Therefore, it is methodologically inappropriate to refer to AI's lack of consciousness, in the traditional sense of the word, as proof of its lack of intelligence. Quite the contrary: given that the workings of people's consciousness are defined biologically rather than cognitively, the absence of human consciousness on the part of computers should be thought of as an indication of their cognitive impartiality, a quality that has always been considered a psychological trait of the world's most prominent intellectuals. And, as history indicates, these intellectuals have always been considered the best part of humanity.
This brings us to another objection, raised by moralistically minded individuals, to the idea that genuinely thinking intelligent machines can be built: namely, their belief that computers will never be able to experience the whole range of human emotions. As Dewhurst and Gwinnett (1990, p. 695) put it: "Given that human intelligence is so emotionally complex that it cannot be fully replicated, all that AI research can actually achieve is to model particular aspects of human intelligence in relation to specific domains." But what are emotions, both positive and negative (love, hate, fear, anger, joy, sadness, etc.)? Emotions are nothing but the agents that ensure our biological well-being as representatives of the species Homo sapiens. To put it allegorically, they are the sticks and carrots that induce environmentally appropriate behavior not only in people but in animals as well.
An individual, as an energetically open system, enjoys a certain freedom in decision-making, and emotions are there to make sure that these decisions do not undermine the individual's biological survivability. When, for example, we make love, this activity has the objective of ensuring the spread of our genes, and for it we are rewarded with a whole range of positive, pleasure-inducing emotions. Alternatively, when we are injured, we experience pain, simply because pain is nothing but a warning sign that something is wrong with our body's physical state.
Even though it was long ago that our distant ancestors climbed down from the trees in search of additional sources of food, thereby creating the objective preconditions for the eventual emergence of the species Homo sapiens, the biochemical workings of our bodies have never ceased to be essentially the same as those of the apes. Just as is the case with all primates, people strive to love and to be loved, to attain social prominence, to enjoy good-tasting food, and to spend as much time as possible relaxing and as little time as possible working; all of these emotion-induced activities are of a clearly animalistic nature. This is exactly why it is possible to define the essence of an emotion experienced by an individual at a particular point in time by assessing that emotion's physiological emanations.
As DeLancey (2002, p. 10) rightly noted: "Affects, especially some emotions, have noticeable and measurable physiological correlates... For emotions, many more measurable physiological changes occur. Depending upon the intensity of the emotion, these can include changes in autonomic functions, such as heart rate, blood pressure, respiration, sweating, trembling, and other features; hormonal changes; changes in body temperature; and of course changes in neural function." Therefore, under no circumstances should human emotions be referred to as the mark of people's higher humanity. Instead, they should be referred to as what they really are: an indication of the fact that, while dealing with life's challenges, people never cease to remain utterly constrained in the biological sense of the word.
What has been said earlier relates directly to the subject matter discussed in this brief. The genuinely thinking intelligent machine that we propose to be built will not, of course, be able to experience human feelings. This, however, should not be thought of as proof of its cognitive inferiority, simply because, unlike humans, our machine will utilize not a biochemical but an electronic mechanism of interacting with surrounding realities. And yet it is precisely this mechanism which appears to be perfectly attuned to the actual workings of the human brain.
After all, one's mind does not operate with digits and formulas while assessing the emanations of the surrounding environment. In a similar manner, a computer does not operate with digits and formulas per se, but with electronic signals. The only difference between the human brain and computer-based AI in this respect is that, whereas the human brain generates electricity from within, the computerized brain requires an outside source of electricity, pure and simple.
Yet the human brain's energetic portability comes at the expense of severely constrained computational power. After all, people spend a good half of their lives taking care of their bodies' biological needs. Our genuinely thinking intelligent machine, however, will not need to engage in purely physiological pursuits just to ensure its continued existence, which in turn will not only increase its computational power but will also result in a dramatic increase in the validity of its computational insights.
Even today, the cognitive outdatedness of the human brain appears to be a well-established fact. The brain contains on the order of tens of billions of neurons, which construct one's memory and function as the logical elements of perception. Due to the electrochemical nature of these elements' functioning, the brain's computational performance cannot be described as utterly effective. For example, within one's brain, nerve impulses are typically transmitted at speeds of around 30 metres per second. This, of course, cannot even be compared with the speed at which electrical signals propagate within a microchip, which is a substantial fraction of the speed of light of 300,000 km per second.
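Taking these rough figures at face value, the gap in raw signal speed is easy to quantify (a back-of-the-envelope comparison, not a measurement):

    nerve_speed = 30        # m/s, a typical nerve conduction velocity
    chip_speed = 2e8        # m/s, a rough fraction of light speed in a conductor
    print(f"signal-speed ratio: roughly {chip_speed / nerve_speed:,.0f} to 1")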
Therefore, it appears to be a matter of foremost importance to adopt a proper perspective on our machine's metaphysical significance. We do not perceive such a machine as simply a robot endowed with AI, to be utilized as a life-enhancing asset for people, but as something that might very well bring about the next evolutionary jump: from biological intelligence, represented by the species Homo sapiens, to a purely trans-human intelligence, which will not be biologically constrained.
Even today, there are many indications that the trans-human revolution is just around the corner. For example, within a matter of another decade or two, it will become practically possible to install microchips into people's brains, which would allow them to learn new languages instantly, to upgrade their memory, and even to go as far as saving their consciousness (individuality) onto computer hard drives. Therefore, our willingness to apply extra effort to creating a genuinely thinking intelligent machine should not be thought of simply as proof of our intellectual open-mindedness, but as an indication that our existential status is nothing less than that of demi-gods, because by establishing the objective preconditions for such a machine's creation, we intentionally facilitate the course of evolution.
As Kurzweil (2005, p. 476) pointed out: "Evolution moves toward greater complexity, greater elegance, greater knowledge, greater intelligence... Evolution does not achieve an infinite level, but as it explodes exponentially, it certainly moves in that direction; therefore, the freeing of our thinking from the severe limitations of its biological form may be regarded as an essentially spiritual undertaking." Our intention to create a genuinely thinking intelligent machine should therefore indeed be regarded as an essentially spiritual enterprise, even though it has nothing to do with notions of conventional spirituality, however ironic that might sound.
Before we conclude this brief, let us restate its foremost theses:
- There are no good reasons to believe that, due to the non-biochemical principle of its functioning, AI's perception of surrounding realities must be cognitively deficient. On the contrary, because it is freed of a number of biological constraints, AI will be able to attain a qualitatively new level of understanding of these realities.
- The suggestion that there is a fundamental difference between the cognitive functioning of the human mind and that of artificial neural networks is conceptually fallacious, simply because in both cases it is the flow of electrical signals that serves as the informational medium. Just as is the case with people, neuro-computers have proven their ability to engage in associative reasoning, and such reasoning has always been considered an attribute of higher intelligence.
- In order for the proposed machine to engage in genuine thinking, it does not have to be conscious of the process in the conventional sense of the word. After all, people are rarely conscious of what prompts them to decide to cross the street, or to wait until there are no oncoming cars nearby before doing so; their intuition simply allows them to gauge their chances of not being hit by a car while crossing. And people's intuition is nothing but their ability to unconsciously reconstruct the missing parts of a graph without having to apply mathematical functions, just as neuro-computers are able to do. It is precisely this ability which accounts for their intelligence per se, and not their tendency to assess the essence of surrounding reality through the lens of their emotions, as Dreyfus and Searle would have us believe.
- The building of a genuinely thinking intelligent machine may very well trigger the initial phase of the trans-human revolution. Given those aspects of today's living which derive from the growing inconsistency between people's ability to push scientific progress forward, on the one hand, and their biological imperfection, on the other, the beneficial effects of such a revolution can hardly be overestimated.
References:
Atkins, PW 1995, The periodic kingdom: A journey into the land of the chemical elements, Basic Books, New York.
Changeux, JP 1994, 'Art and neuroscience', Leonardo, vol. 27, no. 3, pp. 189-201.
Crane, T 2003 [1995], The mechanical mind: A philosophical introduction to minds, machines and mental representation, 2nd edn, Routledge, New York.
DeLancey, C 2002, Passionate engines: What emotions reveal about mind and artificial intelligence, Oxford University Press, New York.
Dewhurst, FW & Gwinnett, EA 1990, 'Artificial intelligence and decision analysis', The Journal of the Operational Research Society, vol. 41, no. 8, pp. 693-701.
Dreyfus, HL 1992 [1972], What computers still can't do: A critique of artificial reason, The MIT Press, Cambridge.
Kurzweil, R 2005, The singularity is near: When humans transcend biology, Viking, New York.
Minsky, ML & Papert, SA 1986, Perceptrons: An introduction to computational geometry, The MIT Press, Cambridge.
Searle, JR 1991, 'Consciousness, unconsciousness and intentionality', Philosophical Issues, vol. 1, pp. 45-66.
Yovits, MC & Cameron, S (eds) 1960, Self-organizing systems: Proceedings of an interdisciplinary conference, 5 and 6 May 1959, Pergamon Press, London.