Introduction
In the current era, interaction between humans and computers has increased dramatically. Humans are surrounded by computers, so the question arises: how well are humans and computers mixing together? Are they like oil and water, a difficult match, or are they like peanut butter and jelly, a satisfying match indeed?
Imagine that while you are interacting with a bank ATM you become confused and inadvertently transfer a large sum to someone else’s account. Your money might be hard to get back.
Or, imagine that you are buying on Amazon that pair of shoes you wanted, and while making the purchase on your smartphone you mistakenly order 10 pairs rather than 1. You will end up with 10 pairs of the same shoes.
Our daily lives are increasingly dependent upon our interaction with computers, whether those be smartphones, tablets, PCs, or the emerging Internet of Things (IoT), which will make computing ubiquitous and immerse us in computing everywhere.
Just as it is important to study how humans interact with animals, or even with each other, it is equally crucial to understand how humans and computers interact. The field of Human-Computer Interaction (HCI) focuses on understanding the ways in which humans and computers interact, and attempts to advance our understanding so that computers and humans can work more closely in harmony with each other.
When human-computer interaction occurs, the gap between what a human expects or instructs the computer to do and what the computer actually does can at times be quite serious and have terrible consequences.
Perhaps the most famous examples of awful consequences are instances wherein an HCI mismatch between a human pilot and a computer-based autopilot led to fatal airplane crashes. One such example will be carefully examined to illustrate the nature of the cognitive dissonance that can occur between a human and a computer when the two are interacting with each other.
First, let us have a look at the role of cognitive dissonance in human interactions.
Cognitive Dissonance
In the study of human-computer interaction, “Cognitive Dissonance” is an important topic. Let’s try to visualize the cognitive activity that takes place when two people are interacting with each other. Suppose two people are involved in the purchase of a car: one of them is the potential buyer and the other is the potential seller. The buyer has in their mind what kind of car they want, how much they are willing to pay, and other facets of making the purchase. The seller has in their mind the worth of the car and is trying to identify how to sell the car to the buyer. So, the buyer has a mental state about wanting to buy the car, and the seller has a mental state about wanting to sell the car.
Both the buyer and the seller also have mental models about each other. Namely, the buyer has a mental model of the seller, trying to figure out what is in the seller’s mind as to how much they are willing to come down on the price and how desperate they are to sell the car. Equally, the seller has a mental model of the buyer, trying to figure out how much the buyer is willing to pay and whether the buyer is serious about making the purchase or just playing around and not a true buyer.
Suppose that the buyer has in mind that she is willing to pay ₹150,000 for the car, and the seller has in mind that he is willing to sell the car for ₹150,000. The two briefly discuss the car and then amicably agree to the transaction. This is an example of very little cognitive dissonance, since both perceived the purchase and sale of the car in quite similar ways.
Suppose instead that the buyer has in mind that she is willing to pay only ₹100,000 for the car and that the seller has in mind that the lowest price for the car is ₹150,000. The two are now at loggerheads in that they have a large disparity between their mental models.
Exacerbating the matter, suppose further that the buyer is the type of person who likes to have protracted negotiations and relishes the game of bargaining, while the seller is the type of person who hates to negotiate and wants to just get to the point and move on. You can see that this attempt to have the two interact is going to be challenging, since they not only have different views about the car purchase but also divergent views of how to approach the interaction.
We can make this even worse by adding to the interaction that the buyer believes that the seller does not want to sell the car at any price and will be resistant to the interaction, while we can add that the seller believes that the buyer is a hot head and will react adversely to the slightest provocation during the car purchase interaction.
As is perhaps evident, the cognitive mental states of the buyer and the seller are radically at odds, and each has their own views not only about what they intend but also about what they believe the other party intends. This cognitive dissonance makes it unlikely that the interaction will occur smoothly, and there will be an awkward and ultimately possibly dissatisfying conclusion to the interaction.
Large gaps in cognitive dissonance can cause a human-to-human interaction to break down and lead to adverse consequences.
Now, substitute for the human-to-human interaction instead a human-to-computer interaction. The human in the human-to-computer interaction also has in their mind a mental model about the task, and also a mental model about what the computer “believes” about the task. Likewise, the computer has a “mental model” about the task, and a mental model of what the human “believes” about the task.
Computers do not have anything like the mental capabilities of humans. Currently, the computer has been programmed by a human or humans who have incorporated into it various assumptions about the way in which it should “mentally” process tasks, and also about what assumptions are to be made about the human interacting with the computer.
Anyone who says “the computer did this or that” is falsely ascribing to the computer a human-like quality, which is not the case today. The computer, as programmed by a human or humans, did this or that, and thus it is not the computer per se that bears some particular responsibility but instead those that programmed it. I do not want to digress into the whole topic of whether computers can or will have their own sense of consciousness, and so will just for the moment alert you to be careful when anthropomorphizing computers today.
Famous Disaster Due to Cognitive Dissonance
In 1994, Aeroflot Flight 593 was flying from Moscow to Hong Kong when a sad and frightening example of cognitive dissonance occurred that led to a fatal crash of the plane, killing all 75 on board. The cognitive dissonance involved a series of cognitive mismatches between what the pilot and co-pilot thought was happening and what the computer auto-pilot “thought” was happening. Do not assume that this is a uniquely odd occurrence, as there are many documented instances of cognitive dissonance incidents that have led to planes faltering and on occasion crashing.
The Aeroflot flight was flying along smoothly and the auto-pilot was on. The pilot opted to have his children visit him in the cockpit, and his son sat in the co-pilot seat to pretend that he was helping to fly the plane. It turns out that the son applied significant force to the flight control column, and, regrettably, given how the auto-pilot worked, this exertion of force was a signal to the auto-pilot to allow the “pilot” to override it and switch the ailerons into manual control. Notice that at this moment of the flight the auto-pilot was still overall in control of the plane, but had relinquished the aileron control to the human pilot (as per how the auto-pilot had been programmed to operate).
Though a silent indicator light came on at the flight dashboard, intended to signal to the pilots that the ailerons were now under their control, there was no audible alert (which was common in other planes), and unfortunately, the pilot and co-pilot did not notice that the indicator light had come on.
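To make the mismatch concrete, the handover behavior just described can be sketched as a tiny simulation. Everything here is an illustrative assumption on my part (the class, the threshold value, the units), not the actual avionics implementation:

```python
# Hypothetical, highly simplified model of the autopilot handover described
# above. The threshold, names, and units are illustrative assumptions, not
# the real avionics logic.

OVERRIDE_FORCE = 10.0  # assumed force level treated as a deliberate override

class Autopilot:
    def __init__(self):
        self.aileron_control = "autopilot"  # who currently controls the ailerons
        self.indicator_light = False        # silent, visual-only cue to the crew

    def sense_column_force(self, force):
        # Sustained force on the control column is interpreted as the pilot
        # deliberately overriding the autopilot, so the aileron channel is
        # released to manual control and only a silent light comes on.
        if force > OVERRIDE_FORCE and self.aileron_control == "autopilot":
            self.aileron_control = "manual"
            self.indicator_light = True  # no audible alert accompanies this

ap = Autopilot()
ap.sense_column_force(15.0)   # the son leaning on the control column
print(ap.aileron_control)     # now "manual", though nobody intended the switch
```

The point of the sketch is the last line: the state change is real, but the only feedback is a light that nobody is watching.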
With the flight control having been adjusted accidentally and unknowingly by the son, the plane started to bank into a 180-degree turn. The pilot realized that for some unknown reason the plane was banking, but for about nine seconds the pilot and co-pilot were baffled by the turn, not being able to figure out why the plane was banking and what the auto-pilot was trying to do. The plane started to lose altitude due to the manner in which the turn was taking place. Nine seconds might seem like a short amount of time, but not when you have a plane flying through the sky at full speed.
The auto-pilot detected that the plane was losing altitude and tried to use the other non-aileron controls to compensate for the problem arising. It pitched the nose of the plane up and tried to do a steep climb, but this led to the plane stalling in air, and another automatic system then pitched the plane downward to get out of the stall.
The co-pilot then took over from the auto-pilot and tried to push the plane upward to get out of the nosedive, but this again caused the plane to stall. Heading into a corkscrew dive downward, the pilot and co-pilot were unable to sufficiently recover the plane and it crashed, killing all 63 passengers and 12 crew members.
Experts who analyzed the incident indicated afterward that had the co-pilot let the plane’s auto-pilot try to get out of the final nosedive, it probably would have been able to do so. Thus, the co-pilot inadvertently seemed to have contributed to the plane crashing by, ironically, switching off the auto-pilot and trying to take over control of the plane.
I hope that you can vividly see how the human and computer interaction in this case is a showcase of cognitive dissonance.
The pilot and co-pilot had a mental model of flying and of the flight status, and a mental model of what the auto-pilot can do and what the auto-pilot was doing during the flight. When the son inadvertently turned off the aileron control of the auto-pilot, he was not aware that he had done so, and the pilot and co-pilot were not aware that it had occurred (in spite of the indicator light that came on). Notice too that the auto-pilot did as it was programmed to do, namely allowing the “pilot” to take over the aileron controls, even though in this case the pilot did not actually want to take over the ailerons.
The pilot realized that the banking turn did not make sense for his mental model of the flight – the flight should have been proceeding on a level course straight ahead. He could not imagine why the plane was suddenly taking a banking turn. We can guess that he probably searched his own mind trying to think about what would cause such a banking turn. It seems unlikely that he would have guessed that it was due to his son exerting force on the control column. The pilot might have thought it was a mechanical failure, but if that were the case then he was probably wondering why he did not see other indicators alerting him to the plane’s condition. He probably assumed that the auto-pilot would not have initiated the banking turn because the auto-pilot was supposed to be flying straight ahead.
The auto-pilot was programmed to try to overcome the initial diving action of the plane and was presumably not aware that the pilot was now trying to take action. Back and forth the mental gaps occurred, and we can see that the mental model of the pilot and co-pilot was disparate from the “mental model” of the auto-pilot. This cognitive dissonance created a severe and catastrophic gap over the control of the plane.
One reaction to this incident might be to declare that the pilots were wrong to have allowed the auto-pilot to have control of the plane and they should have never engaged the auto-pilot.
This is an extreme perspective in that it assumes that only the human should do the task and that the computer cannot sufficiently provide assistance.
Another reaction to this incident is that the auto-pilot should be given complete control of the plane and thereby presumably avoid the frailties of the human pilots. Some would say that had the auto-pilot been fully in control, the son could not have caused the switch to human control, and so the incident would never have occurred.
This is another extreme perspective in that it assumes that only the computer should do the task and that the human cannot sufficiently provide assistance.
This is a false dichotomy.
It is a simplistic and myopic viewpoint to assume that in this Human-Computer Interaction the “solution” would be to push everything onto the human or everything onto the auto-pilot. Auto-pilots provide a valuable contribution to the flying of modern-day airplanes, and likewise, so does the human. Having the human alone fly the plane is not a reasonable approach in today’s world of flight complexities, and having the computer alone fly the plane is not a reasonable approach given the limits of today’s computer capabilities.
We must be more mindful about the HCI dissonance and how humans and computers interact.
Framework for Examining HCI Dissonance Gap
To illustrate the HCI dissonance gap, I provide in Figure 1 a four-square diagram that I believe helps to illuminate crucial aspects about how humans and computers interact.
On the left side of the four square, there is an indication of the potential risk to humans, ranging from a high risk (such as leading to death, akin to the Aeroflot flight) to a low risk (imagine that your spell checker mistakenly corrects a word that it thinks you misspelled but that you had purposely spelled the way that you intended – this is a dissonance gap, but a likely minor one!).
At the bottom of the four square, there is an indication of the gap distance, ranging from low (when the human and computer are relatively closely aligned) to high (when the gap between what the human is thinking and what the computer is “thinking” are at dramatic opposites).
There are four squares, consisting of High-Low on the risk factor, and High-Low on the dissonance gap.
Let’s take a look at each of the squares.
If the dissonance gap is Low-Low, this means that what the human thinks and what the computer “thinks” are relatively well aligned. Though we could try to push the two toward each other to ensure that there is no gap at all, the economics of making that push probably do not offer a sufficient cost/benefit ROI (Return on Investment) to make it worthwhile. Assuming that the Low-Low gap is indeed minimal, we could say that the Human-Computer Interaction can remain “as is” and does not need to be adjusted.
If the dissonance gap is High-High, this means that what the human thinks and what the computer “thinks” are quite disparate. The likelihood of problems arising is heightened, and given that the risk to humans is high, we should look carefully at what can be done to close the gap. It is probably the case that we would want to push both parties closer toward each other: we would want to adjust the computer so that it is better aligned, and we would want to somehow “adjust” the human so that they are better aligned. The economics of doing this are probably worthwhile, especially when you consider incidents like the Aeroflot flight (in other words, circumstances where the gap can lead to death and destruction).
If the dissonance gap is High for risk to humans and Low as a gap, the High-Low square, typically the economically viable solution is to push the human toward the computer in terms of alignment. This might involve added training for the human or taking some other steps to ensure that the human is more mentally aware of and engaged in the task with the computer.
If the dissonance gap is Low for the risk to humans and yet High for the gap, typically it is more economically viable to push the computer toward the human in terms of alignment. This might involve some kind of reprogramming of the computer or otherwise altering the nature and involvement of the computer in this task with the human.
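The four squares can be summarized as a simple lookup, pairing each combination of risk-to-humans and dissonance gap with the economically sensible response discussed above (a sketch; the action phrasings are my paraphrases, not formal definitions):

```python
# A minimal sketch of the four-square framework: each (risk, gap) pairing
# maps to the economically sensible response discussed above.

ACTIONS = {
    ("low",  "low"):  "leave as is; closing the gap is likely not worth the cost",
    ("high", "high"): "push both: adjust the computer and train the human",
    ("high", "low"):  "push the human toward the computer (e.g., added training)",
    ("low",  "high"): "push the computer toward the human (e.g., reprogramming)",
}

def recommended_action(risk_to_humans, dissonance_gap):
    """Return the suggested response for a given risk/gap combination."""
    return ACTIONS[(risk_to_humans, dissonance_gap)]

# The Aeroflot incident sits in the High-High square:
print(recommended_action("high", "high"))
```

Encoding the framework this way makes the later point about the false dichotomy easy to see: fixating on only one square ignores the other three combinations that real systems occupy.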
Remember the earlier mention of the false dichotomy, whereby some believe that only the computer should do the task or only the human should do the task?
In Figure 2, I illustrate this notion of a false dichotomy. I show that such a viewpoint is thinking about only the High-Low square or the Low-High square, but is not also thinking about the High-High and the Low-Low squares.
It is crucial to avoid the trap of falling for the false dichotomy.
In Figure 3, I show the same four-square framework and have put “As Is” into each of the squares. This is indicative of circumstances whereby there is misalignment in the HCI and yet no action is taken to deal with the misalignment. The chances are that this head-in-the-sand approach is ultimately going to lead to something disastrous, since for the High-High, High-Low, and Low-High squares we are sitting on a razor’s edge, and at some point something bad is probably going to occur. This is the proverbial “don’t know what you don’t know” that sometimes happens when a proper analysis of a human-computer interaction has not been done.
In Figure 4, I show the four-square framework and now have put that in each square we will try to push the human more toward the computer, and the computer more toward the human. Though this is an ideal approach, it is economically often not viable. The costs involved to push the human toward the computer, or to push the computer toward the human, might not be ROI attractive to do, and usually, the risk to the humans will help to ascertain what this economic trade-off is like.
When HCI is slanted or skewed toward the computer, we always need to be aware of and generally suspicious to make sure that the human is not overly far out of the loop. Shown in Figure 5 is a diagram that depicts a skewing toward the computer. The reason to be suspicious is that we need to ask whether the computer can really handle the full range of circumstances that might be faced in the task and whether it can sufficiently handle those circumstances.
Let’s take the example of self-driving cars, an exciting and emerging use of computers. The intent is ultimately to relieve the human of having to take any driving action. During our efforts to reach that pot of gold at the end of the rainbow, we need to be careful that we don’t falsely believe the computer can do more than it really can, and we must ask, if we carve out the human entirely, to what degree we are creating risks for the human. This is not to suggest that we will not eventually reach the point of having no human involvement in a self-driving car, but we need to be careful not to get ahead of ourselves and omit the human entirely before doing so makes reasonable sense.
Figure 6 shows the circumstance of HCI involving the human being the dominant performer of the task and the computer playing only a minor role. Humans are not infallible, and so having the computer be more involved might be beneficial as it can possibly mitigate the human foibles of the task. Or, it could even be simply that we want to relieve humans of performing the task for purposes of letting the human do something else or not be concerned about the task.
Self-driving cars are trying to relieve the human of having to drive a car. This can be beneficial because the human could use the driving time to instead focus their cognitive efforts on something else: perhaps on work as they head into the office, or on entertainment if joined in the car by fellow passengers, acting as though they are in a cab with a driver taking care of the driving. In addition, self-driving cars are being justified on the basis of the number of car accidents that occur when humans drive, and the potential for reducing such incidents, saving lives, and reducing the costs associated with our driving of cars.
Figure 7 shows a diagram of having the human and computer relatively closely aligned. The amount of gap is minimal between the two, and each has its own contributions toward the task, and the two are well aligned.
Relieving The Dissonance Gap
At the CDT forum that I attended, Nicholas Carr brought up the example of radiologists, medical doctors, and related medical professionals examining X-rays and MRIs to diagnose diseases, such as when looking for indications of, say, cancer.
Prior to the advent of computer analyses of X-ray images and MRIs, the human radiologist would need to look at the images and try to on-their-own figure out what maladies might be indicated. Computers have been increasingly used to also undertake these same kinds of analyses.
Some studies indicate that radiologists are reluctant to either use or even trust the computer analyses, and so will at times ignore or discount whatever the computer analysis shows.
One solution voiced to this misalignment was to not allow the radiologist to at first see the computer analysis, and thus have the radiologist do their own image analysis first. Presumably, after doing so, the radiologist could then take a look at the computer analysis and use it as a kind of “collegial” second opinion.
This approach tends to border on the false dichotomy that was discussed earlier in this piece.
This idea of having the radiologist do their own analysis first, and then distinctly and separately use the computer analysis, will have other potential adverse consequences. Studies show that an expert will often anchor to their own opinion, and so the radiologist, upon seeing an image, might form an opinion and then ignore or discount whatever the computer analysis subsequently indicates.
Furthermore, it is already well known that radiologists are often faced with mind-numbing case loads, along with urgency for doing the analysis, and that they often suffer from radiologist fatigue.
By shifting the computer to the back seat of this task, we are not likely helping to overcome any of those factors of vast case loads, urgency to analyze, and fatigue. If anything, it would probably just make those factors worse, since the radiologist would essentially be doing the task twice: once on their own, and then again with the use of the computer.
A more satisfying approach would be to consider how to seamlessly align the human and computer in this Human-Computer Interaction. For example, we might have the computer overlay its analysis on the image in a subtle fashion, so that when the radiologist first sees the image the analysis does not dominate it; the radiologist perceives that they, as the human, are performing the task, while being augmented by the computer.
We could even add a dose of serious gamification by perhaps having the radiologist consider the computer as a type of “game” in terms of what the radiologist discovers in the image versus what the computer discovers. When I say the word “game,” be aware that I am not suggesting this is anything less than a very serious task, and I fully acknowledge that it has the potential of great risk to humans (imagine a misdiagnosis that fails to detect cancer, while the patient does have cancer and so is not aware to take action accordingly). The use of gaming techniques can be done in a serious way and can actually be quite beneficial, since it can increase the human engagement involved.
Besides the potential for a false dichotomy perspective when trying to solve HCI dissonance gaps, another approach that some are advocating is a forced engagement between the human and computer on a randomly activated basis.
For example, some might suggest that with the pilot and auto-pilot, we ought to have the auto-pilot periodically and randomly hand control of the flight back over to the pilot. This is being done solely to keep the pilot engaged, and not because the auto-pilot has reached a point wherein it cannot properly control the flight.
Though at first glance this may seem sensible, kind of like a wake-up call for the pilot, this approach will have adverse consequences. The pilot will be on edge as to when the next random hand-over is going to occur, so any reduced stress on the pilot during the auto-pilot’s efforts is unlikely. Also, imagine that the auto-pilot purposely wants to hand over the flight because of some anomaly that has occurred; the pilot might become momentarily confused, or even lulled into paying less attention, because they are expecting merely the next random handover (which is not an emergency situation).
Carefully consider too the cognitive load that we are placing onto the pilot. One moment their mental processes are not especially on the flight, and the next moment by surprise (due to the random prompting) they have to mentally engage. This on-again and off-again effort of mental exertion might actually produce heightened pilot fatigue, ultimately leading them to be even worse at piloting once the flight is entirely handed over to them such as perhaps when landing the plane.
We need to be watchful of seeking overly simplistic solutions to complex HCI arrangements.
HCI as The Silent Killer
The lack of general awareness about the importance of HCI in today’s computer systems is alarming because we are all increasingly becoming dependent on computers.
Often, the rush to get a new computer system out the door does not incorporate sufficient attention to Human-Computer Interaction. Even if there is some attention, it is frequently performed by programmers who are not necessarily trained in the HCI facets. They then believe that they have done a good job of encompassing HCI, but upon fielding the system are often horrified to discover misalignments that they never envisioned.
Economics comes into play in this HCI focus too, since there is “added” cost to being careful and thoughtful about the HCI aspects, though the benefits can often well exceed those added costs. Firms that find themselves being sued and paying out large monetary awards for systems that did not have thoughtfully prepared HCI discover afterwards that they underestimated the value of HCI. There are often programmers who wanted to do HCI deeply, but the budget for the system did not include the needed expenditure.
Often a new app or computer system will land like a dud in the marketplace, and a retrospective will show that this is due to poorly done HCI. In contrast, good and really good HCI is increasingly expected by consumers and businesses, and so poorly designed HCI won’t last. The tremendous success of Uber has been partly attributed to the HCI that they put in place, which allowed humans to more easily, and in a frictionless way, call for cab-like service in a manner that had not been widely available before.
Next time that you are engaged in an interaction with a computer, think about what the computer is “thinking” and what you are thinking and see if there is a dissonance gap in that HCI.
Then find a way to deal with the HCI dissonance gap, thoughtfully and with purpose.