Minimizing Bias and Increasing Objectivity in Decision Making

Decision making is a core business function, prevalent within every process at every level of an organization. Effective, rational decisions depend on accurate information and data: the results of analysis become the foundation for the decision being made. However, bias and errors within the data can skew the analysis and produce erroneous decisions. Data analysis rests on objective data and facts, but the analysis itself is sometimes biased and inaccurate due to heuristic errors (Gibbons, 2015). Heuristics are mental shortcuts, or rules of thumb, that people use to make faster decisions (Gibbons, 2015). This is a major risk of data-driven decision-making, rendering it inefficient unless the sensitivity of the data to these biases is identified and rectified accordingly.

Various forms of bias exist within every process. Bias is easily identified within marketing and political campaigns but may not always be so obvious: certain types are difficult to detect, and “multiple biases exist within a single data set” (Crawford, 2013). It is essential that business leaders educate themselves in depth about the prevalence and management of bias within data, rather than depending on data scientists for information and guidance, since ultimately “business leaders are responsible for the decisions they make on the basis of this data and need to face the repercussions for wrong decisions based upon biased data” (Crawford, 2013).

Confirmation bias is a cognitive bias resulting from personal assumptions, opinions, or hypotheses, combined with an intentional or unintentional desire to prove them true (Gibbons, 2015). Data becomes biased when confirmation bias is present during collection, or when conclusions rest on what feels right rather than on fact. All statistical data collection and analysis should be an objective process, free of personal opinion and emotion, with minimal flexibility in how data is collected and processed. However, the inherent scope for skewing the process, both in the choice of variables and in the method of calculation itself, creates room for confirmation bias. It can arise simply because the individual collecting and processing the data has been “processing it on a constant basis and is likely to have high levels of preconception with strong opinions and assumptions on the data processing and collection” (Morgan, 2019). For example, random sampling errors can be exploited to misrepresent facts and figures, or the output can be controlled by manipulating the input data set. Confirmation bias can also be transferred from business leaders to the data analysts who do the actual processing, when leaders expect reports to conform to their opinions and expectations. Such transfer can result in erroneous decisions whose damage becomes costly if not identified and controlled in the initial stages, by ensuring that all individuals responsible for data processing are free from preconceived assumptions and opinions and are not under the influence of leaders with the capacity to enforce confirmation bias transfer.

Selection bias results from input data being selected subjectively rather than objectively: the use of non-random data sets leads to statistical results that do not represent the entire population. Selection bias can arise when organizations fail to capture the full picture needed to accurately represent the segment or target group on which a specific decision depends. Data is critical for business decisions, and unless it is accurate and minimally contaminated, the result can be inaccurate analysis, inefficiency, and poor quality standards. To minimize the risk of selection bias, input data should be selected at random so that it properly represents the entire population.
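To make the mechanism concrete, here is a minimal sketch with invented numbers: a population of customer satisfaction scores is sampled once at random and once through a self-selected channel in which unhappy customers respond more often. The 5x response weight for dissatisfied customers is an arbitrary assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic population: satisfaction scores (1-10) for 100,000 customers.
population = rng.normal(loc=6.0, scale=2.0, size=100_000).clip(1, 10)

# Simple random sample: every customer is equally likely to be chosen.
random_sample = rng.choice(population, size=500, replace=False)

# Self-selected sample: assume unhappy customers are 5x more likely to
# respond -- an invented weight, but a common real-world pattern.
weights = np.where(population < 5, 5.0, 1.0)
biased_sample = rng.choice(population, size=500, replace=False,
                           p=weights / weights.sum())

print(f"population mean:    {population.mean():.2f}")
print(f"random sample mean: {random_sample.mean():.2f}")  # close to truth
print(f"biased sample mean: {biased_sample.mean():.2f}")  # skewed low
```

The random sample tracks the population mean closely, while the self-selected sample lands well below it, exactly the kind of skew described above.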

Outliers are extreme data values, measuring significantly above or below the normal range or deviating sharply from the normal distribution, whose presence can badly flaw a simple average. They can dangerously alter results when inexperienced data scientists or managers ignore the impact such values have on the mean and apply the basic formula out of habit or as a matter of procedure. Outliers can be extreme enough to completely change the results of an analysis; if their impact is not reduced, by removing them before calculating averages or by adjusting them into alignment with the rest of the distribution, decisions based on the output can cause serious damage. Large data sets may resist manual checking for outliers and may require specific measures to limit the influence of such extreme values. Each organization needs to adopt the most suitable procedure for handling outliers, since it is not always prudent to discard them; in insurance fraud detection, for example, the outliers are the signal. A minimal sketch of robust outlier handling appears after this passage.

This is one reason various types of research, including medical studies, report contrary results at different points in time, and why promotional campaigns that seem successful finally prove not to be. It is one of the most common biases in data analysis and results from descriptive statistics or data visualization leading to wrong decisions when working with big data. For example, a promotional campaign may deliver minimal return on investment when customer incentives are based on erroneous conclusions. Deeper analysis of all related factors is essential to validate any trend that appears in a marketing campaign. Ignoring this can leave a promotional campaign reducing margins and, in the long term, increasing losses instead of driving sales up and maximizing profit. Any initial trend requires adequate validation of the supporting analysis through control-group measurements to ensure that the visualized trend is real. Current marketing analytics tools provide multivariate testing so that adequate levels of aggregation can be achieved by slicing and dicing the available data; this can, however, produce averages that are erroneous or misleading, and it is tempting, “if contradictions are not identified on an immediate basis”, to follow instinct and personal opinion in believing the trend is true.
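Here is the promised sketch, using invented order values and a median-absolute-deviation (MAD) screen, one common robust technique; the 5-MAD cutoff is an arbitrary choice, and, as noted above, whether removal is appropriate at all depends on the domain.

```python
import numpy as np

# Invented daily order values in dollars; one data-entry error slipped in.
orders = np.array([120.0, 95, 130, 110, 105, 98, 125, 115, 99_000])

median = np.median(orders)
mad = np.median(np.abs(orders - median))   # robust estimate of spread

# Flag points more than 5 MADs from the median as outliers (cutoff assumed).
mask = np.abs(orders - median) <= 5 * mad

print(f"raw mean:     {orders.mean():,.2f}")        # ~11,100 -- misleading
print(f"median:       {median:,.2f}")               # 115 -- barely affected
print(f"cleaned mean: {orders[mask].mean():,.2f}")  # ~112 after screening
```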

Therefore, it is extremely important that all data and analysis used for critical business decisions are checked for accuracy, and that the entire process is controlled and conducted so as to minimize, and where possible eliminate, bias in decision-making.

The Ten Percent Myth

The human brain is intricate and still very strange, which is probably why so many myths about the mind and its functions come about. One of the most well-known of these legends is often alluded to as the 10% of the brain myth: the idea that an individual only uses an extremely small amount of their brain in everyday life. I am not sure where this myth derived from, but it probably emerged from neurological research in the late nineteenth or early twentieth century. The media has played a major role in keeping this myth alive. A lot of advertising refers to the 10% legend as certainty, probably to flatter potential clients who consider themselves to have transcended their brain’s constraints. You can also see the myth being used in Lucy, a movie about a person who gains abilities once she passes the ten percent threshold of her brain. The famous and broadly spread conviction that we only use, or have access to, 10 percent of our mental ability is regularly used to speculate about the degree of human capacities if we could use our brain’s full limit. In actuality, the 10 percent claim is 100 percent myth. We use most of our brains; the main cases in which regions of the brain go unused involve brain injuries.

I believe this myth has stayed alive for so long because people refuse to look at the evidence that goes against it. Confirmation bias is when people ignore the evidence against something they believe in; they look for evidence to support their beliefs instead of looking at the facts, and I believe many people show confirmation bias when it comes to this myth. Many people may also be biased because they take mental shortcuts and immediately believe what they see or hear instead of researching the topic. They use heuristics, which reduce the mental effort required to make decisions, and settle for the quick and easy conclusion that most people reach.

Critical thinking also plays a big role in myths. Critical thinking means making reasoned, logical judgments that you have thought out. You can apply critical thinking to this myth by asking relevant questions about it. Is the myth true? I do not believe it ever was; it seems to have begun with an assumption made by an early scientist. What facts support it? There are not many. These questions, along with many more, can be used to bring critical thinking to bear on the myth.

When the myth first emerged, everyone believed it, but in the past few years many more articles have appeared that debunk it. In an article by Barry L. Beyerstein, he states that losing far less than 90% of the brain in an accident usually has catastrophic consequences. In another article, Benjamin Radford says that through fMRI and PET scans we have been able to observe the brain’s activity, and it shows that we use most of our brain. Many other articles state that the 10% myth does not have enough evidence to support it and faces more evidence against it. Robynne Boyd’s article quotes John Henley saying that over the course of a whole day we use all 100 percent of our brain. This makes sense because throughout the day we perform many tasks that activate different parts of our brains: researching, drinking coffee, writing a paper, and walking outside all engage different regions.

The 10% myth has been around for many years because the media continuously brings it up, with different movies and books inspired by it. Through critical thinking, we can question the myth and choose the thought-out answer over the quick and easy one that confirmation bias would hand us. Critical thinking can also be used in many fields, not just psychology.

Discussing the Role of Clinical Biases in Diagnosis

The following essay will attempt to offer a considered and balanced review of the role of clinical biases in diagnosis. Clinical diagnosis refers to the process of matching an individual’s specific symptoms to those that define a particular mental disorder. Clinical biases refer to tendencies that psychologists unconsciously hold; these may be both beneficial and dangerous. Biases occur when clinicians hold preconceived ideas about the likelihood of a disorder based on the social and cultural background of the patient.

Most psychological disorders are diagnosed on the basis of clinical symptoms. Such diagnoses, though, are highly subject to the biases of the psychiatrist, which may create problems with the diagnosis. Clinical biases may arise for diverse reasons, cross-cultural variation being one example: different cultures may consider different behaviours normal or “abnormal”, so a psychologist may interpret behaviours differently depending on the cultural background they grew up with. Duncan Double (2006) argues that overdiagnosis occurs frequently in diagnosing psychological disorders. This occurs when patients suggest to themselves a disorder from which they might suffer; the doctor, lacking evidence for a diagnosis, ‘fulfils’ the patient’s want by over-diagnosing. This diagnostic bias tends to happen when the patient’s symptoms are unexplained. Another known clinical bias is confirmation bias, which occurs when a psychologist seeks, interprets, and remembers information in ways that confirm their preconceptions. This leads the clinician to give more weight to evidence that supports their hypothesis and to disregard evidence that contradicts it.

The study conducted by Mendel et al. in 2011 aimed to see whether psychiatrists would show confirmation bias when making a diagnosis. The researchers used purposive sampling and tested 75 psychiatrists and 75 medical students. All participants were given a summary of a case study focusing on an elderly man who suffered from Alzheimer’s disease; although Alzheimer’s was the true condition, the summary was written to make depression seem the most probable diagnosis. After reading the case study, all participants were allowed to conduct further research. Results showed that 13% of the psychiatrists and 25% of the medical students misdiagnosed the individual because of the preconceptions they held about the diagnosis. The study concluded that clinicians are subject to confirmation bias, so their diagnoses may be influenced by their preconceptions of an individual’s condition. Mendel’s study provides insightful knowledge regarding the study of social groups, and it was able to analyse participants’ behaviour without their knowing it was an investigation (reducing participant bias). On the other hand, the study shows some limitations, as confirmation bias is highly subjective, varying from person to person and affecting people in different ways; this limits the generalisability of the data gathered. Clinicians who diagnosed the elderly man with depression suggested and prescribed the wrong treatments, endangering the individual; the study therefore shows how confirmation bias can lead to harmful and dangerous diagnoses.

The study conducted by Rosenhan in 1973 aimed to challenge the validity and reliability of clinical diagnosis and to investigate the effects of patients labelling their own disorders. Again, purposive sampling was used, as the researcher specifically selected five men and three women. All eight participants tried to be admitted to hospital psychiatric wards: they called the hospitals, set up diagnostic appointments, and were all admitted with a diagnosis of schizophrenia. Once admitted to the ward, participants stopped ‘manifesting’ symptoms and acted ordinarily, taking notes on the way people treated them. The study found that no staff member ever suspected that they were sane, although other patients in the ward did; staff members even labelled the participants’ note-taking as “abnormal” behaviour. Rosenhan’s study demonstrates psychiatrists’ overdiagnosis. It has numerous strengths, as it clearly demonstrates how patients’ claims may influence clinicians and their diagnoses; however, it again shows limitations, as participants were not kept in a stable or safe environment. Indeed, after the experiment, participants had trouble convincing psychiatrists that it had been an investigation and that they were mentally “sane.” Rosenhan’s study shows that everyone has a different vision of what is to be considered sane and what insane. The participants’ behaviour was misunderstood and misinterpreted, leaving the hospitalised patients powerless, segregated, and self-labelled, which was counter-therapeutic. The study shows that clinical bias may be influenced by individuals and may lead to fallacies in diagnosis that cause dangerous treatment.

Overall, though both Mendel’s and Rosenhan’s studies show some limitations, they provide more strengths and insight into how clinical biases may affect diagnosis. Mendel’s study shows confirmation bias, where the clinician seeks to prove an original hypothesis in the diagnosis. Rosenhan’s study shows overdiagnosis, where the patient’s self-labelled disorder ‘manipulates’ the psychologist’s thinking, leading to a biased diagnosis. As shown through both studies, clinical biases are extremely dangerous, as they can lead to wrong treatments and prescriptions that can be fatal.

Reflective Essay on Biases in My Life: Analysis of Confirmation Bias

I chose two biases to investigate and will explain how they affect my life. People read articles daily, but they do not recognize that bias is being used to alter their opinions. “I’ll believe it when I see it” is what the public often says when they hold a different opinion on a topic. However, changing opinions is not as easy as it seems. Opinions and perspectives change only through a person’s own thinking, and changing a person’s opinion takes a considerable amount of effort and evidence. Humans don’t like to be wrong; we want to be right. Being right makes us feel triumphant and confident, whereas being wrong makes us feel defeated or embarrassed. The opinions we first formulate when we see something are based only on our judgment or first impression. We get defensive when people disagree with our opinions, so we gather counterarguments to prove our stance. When we look for evidence to confirm our beliefs or opinions, it is called confirmation bias. Confirmation bias varies across situations: it may attach to a passing fancy or to our firm beliefs. We feel inclined to uphold our opinions and to look for more information that confirms them.

Confirmation bias appears in everyday life, such as in workplace settings, where disputes can happen due to clashing opinions; this can cause tension and spitefulness in the workplace environment. There are many everyday examples of confirmation-bias behavior. A student researching only one side of an argument for a paper, in order to support their thesis, may fail to fully search the subject for information inconsistent with what they are writing.

I think confirmation bias happens in our lives almost every day. For example, my mother came to visit me last week, and she called attention to a confirmation bias that I have been observing for years but had never taken the time to notice. We were walking out the front door to leave for dinner downtown when my mother looked outside and noticed the heavy rain falling. She stopped me and said that if I did not bring a raincoat and hat, I would catch a cold without a doubt. I have heard that phrase a million times before but had never viewed the statement as a confirmation bias. I had always attributed such sayings to old urban legends that people use to scare their children into doing what they are told. Having caught some colds after playing in the rain, I had only ever confirmed my mother’s claims. Nevertheless, this was a clear example of a confirmation bias: in the example of catching a cold in the rain, my mother was looking for evidence consistent with a prior hypothesis. The prior hypothesis in this situation was that you are bound to catch a cold if you go outside without a raincoat. By attending only to evidence that confirms her opinion, my mother exhibits confirmation bias in her decision-making process. In this example, catching a cold is the previously held conviction and getting wet is the confirming information.

Implicit biases are unconscious preferences or attitudes toward or against certain groups of people. Because implicit bias is unconscious, it is difficult to determine what implicit biases people hold. Project Implicit is a program dedicated to evaluating these implicit biases, and the way the group attempts to assess the hidden biases people hold is through something called the Implicit Association Test. Test takers complete a series of trials in which a word or picture is shown on the screen and they must press either “e” or “i” to sort it into one of two categories. By requiring the person to respond as fast as possible, so that they are not deliberating over their answers, the test assumes they will pick the “easier” response more quickly than the alternative, which reveals how they subconsciously feel about, or toward, a particular group of people. The test rates how much bias a person has toward or against a group with the labels slight, moderate, or strong. By letting the subconscious take over the decision-making, a person’s implicit bias can be seen. Biases that people hold subconsciously can cause them to inadvertently treat others negatively, even if they do not realize they are doing so; because they are not fully conscious of these thoughts and actions, people are unable to censor themselves and correct the inappropriate behavior.
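As a rough illustration of the latency comparison such a test relies on, here is a sketch with invented reaction times. The simplified D-score follows the general spirit of IAT scoring (mean latency difference scaled by pooled variability) but omits the real algorithm’s trial filtering and error penalties, and the label cutoffs are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented reaction times (ms) for one test taker in the two sorting
# conditions: "compatible" pairings vs. "incompatible" pairings.
compatible = rng.normal(loc=650, scale=100, size=40)
incompatible = rng.normal(loc=780, scale=110, size=40)

# Simplified D-score: mean latency difference scaled by the pooled SD.
pooled_sd = np.concatenate([compatible, incompatible]).std(ddof=1)
d_score = (incompatible.mean() - compatible.mean()) / pooled_sd

# Cutoffs chosen to mirror the slight/moderate/strong labels (assumed).
a = abs(d_score)
label = ("little or none" if a < 0.15 else
         "slight" if a < 0.35 else
         "moderate" if a < 0.65 else "strong")
print(f"D = {d_score:.2f} ({label})")
```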

Reasons to Believe: Argumentative Essay on Having Confirmation Bias

Joseph Heller once said, “The truth is whatever people will believe is the truth. Don’t you know history?” Long gone are the days of being able to turn on any news channel that only gives facts. Today you must check your sources carefully and question everything. It is important that we do this because fake news is everywhere, and it spreads because our biases make us more gullible.

We all want to believe that we exist innocently, without biases or prejudiced beliefs. That wish is far out of reach, especially because there are characteristics of ourselves that we do not recognize as biased. Confirmation bias can feel more like a preference than a bias, despite the two being one and the same. Confirmation bias is a type of bias in which someone favors evidence that confirms their own beliefs (Rasmussen). Rasmussen states that we naturally search for answers by asking questions that lead to the answers we want to hear. Confirmation bias affects the way we learn, understand, and think about the world by leading us to think in a biased way. It plays a role in how gullible a person can be, because people let their opinions keep them from questioning information (Feehery). We see confirmation bias leading to gullibility every day, especially in politics: on both sides of the political spectrum, people read what confirms their own opinions. An article and documentary released by CNN showed that exploiting readers’ political biases led to a wide spread of fake news around the time of the 2016 election (Davey-Attlee). Scam websites and Facebook pages were used to spread false information about both political parties purely for the profit their operators received per click. One of the first people to profit from false-information accounts is a man called Mikhail, who claimed to have made nearly $2,500 a day from clicks on fake news articles about Bernie Sanders or articles praising Donald Trump (Davey-Attlee). Mikhail’s profit is proof that many people let their confirmation bias lead them to read and believe articles that agreed with opinions they were seeking to confirm. If we take a page from Destin Sandlin’s backwards bicycle video, we see that simply “getting over” confirmation bias isn’t easy (“The Backwards Brain Bicycle”). People believe fake news because, although they are capable of overcoming their bias, they are not willing to put in what it takes to keep from reverting to their old beliefs.

Confirmation bias and gullibility go hand in hand when discussing why we fall for fake news: it is because of confirmation bias that we are gullible. No one wants to question information we want to be true, which is why it has become common to believe every bit of information handed to us. For example, when a rumor goes around, we listen and tell our friends instead of asking whether it is true. And fake news does not always have to be negative. When the “pig saving drowning goat” video went viral, it was met with a positive reaction all over the world; yet when Comedy Central’s “Nathan For You” with Nathan Fielder released a video showing how fake it was, everyone was shocked. In that video Nathan, who put everything together, had a plan to make the Glen petting zoo a must-see: staging a video of a pig saving a baby goat from drowning, with a whole team assembled to make it seem realistic.

In an article titled “Dull Minds Are Gullible to Fake News,” Tom Jacobs explains what drives a person to believe something even after it has been proven wrong. He points to “tribalism” (Jacobs), the loyalty a person feels toward their own tribe or group, and gives an example of the emotional incentive to believe a piece of information if it reflects badly on the other side, say, the ‘discovery’ that Barack Obama was born in Kenya (Jacobs). He describes a widespread tendency that leaves someone notably susceptible to misinformation, one found among individuals of all races, nationalities, and political parties, commonly called stupidity. “The ‘lingering influence’ of fake news ‘is dependent on an individual’s level of cognitive ability,’ psychologists Jonas De Keersmaecker and Arne Roets of Ghent University write in the journal Intelligence” (Jacobs). What Jacobs is explaining is that people with greater cognitive ability cope with corrections better than those with less; the latter have trouble updating their beliefs.

Jacobs recounts an experiment by De Keersmaecker and Roets featuring 390 people recruited online. Half of them read a description of a woman named Nathalie, a married nurse who worked in a hospital, and then shared their general impressions of her, rating her on qualities such as warmth, trustworthiness, and sincerity. The other half read a lengthier version of the mini-biography, which added that Nathalie had been caught stealing medication from the hospital and selling it to afford designer items; they then completed the same scales. Afterward, they “saw an explicit message on their screen stating that the data concerning the stealing and dealing of medicine wasn’t true,” read an amended version of the description, and once more expressed their feelings toward the nurse. All participants filled out two questionnaires designed to identify psychological traits associated with a reluctance to change one’s mind: one measured right-wing authoritarianism, the other “need for closure,” that is, a lack of comfort with ambiguity. Taking all that into consideration, the researchers stated that “the false information effects never completely wore off in individuals with lower levels of cognitive ability,” meaning that once such a person hears fake news, or anything untrue, it becomes hard for them to push it aside.
As Jacobs writes in closing, “That should provide a strong incentive to news organizations to get it right the first time. Unfortunately, it also gives unscrupulous politicians license to lie.”

In the end, fake news can be seen everywhere: printed news, broadcast news, social media, and even your acquaintances. Everyone would like to believe we hold no biases or prejudiced beliefs, but in reality we do, and alongside them we are also very gullible. Despite all this, we can change. Having an open and creative mind can help us in the long run. As stated before, we search for answers by asking questions that lead to the answers we want to hear; with a more open mind, we could move toward a bias-free life. The obstacle to a gullibility-free life is that no one wants to question information we want to be true, which is why we so often believe every bit of information handed to us; a more questioning, creative mind helps with this. As it turns out, change is not hard. We just have to be open to it, more self-aware, and more questioning. Once we make these changes, fake news won’t be a problem anymore.

Works Cited

  1. “The Backwards Brain Bicycle.” Performance by Destin Sandlin, YouTube, 5 Aug. 2015, www.youtube.com/watch?v=Ybo4Lk3CI98.
  2. Braucher, David. “Fake News: Why We Fall For It.” Psychology Today, Sussex Publishers, 2016, www.psychologytoday.com/us/blog/contemporary-psychoanalysis-in-action/201612/fake-news-why-we-fall-it.
  3. Davey-Attlee, Florence, and Isa Soares. “The Fake News Machine: Inside a Town Gearing up for 2020.” CNNMoney, Cable News Network, 2017, www.money.cnn.com/interactive/media/the-macedonia-story/.
  4. Feehery, John. “Feehery: Confirmation Bias on Trump.” The Hill, 19 Dec. 2017, www.thehill.com/opinion/white-house/365522-feehery-confirmation-bias-on-trump.
  5. Jacobs, Tom. “Dull Minds Are Gullible to Fake News.” Pacific Standard, 13 Dec. 2017, www.psmag.com/news/dull-minds-are-gullible-to-fake-news.
  6. Mosbergen, Dominique. “Remember That Baby Goat-Saving Pig? It Was A Hoax.” HuffPost, 7 Dec. 2017, www.huffpost.com/entry/pig-rescues-baby-goat-hoax-nathan-for-you_n_2767409.
  7. “Nathan For You – Petting Zoo Hero.” Performance by Nathan Fielder, YouTube, 26 Feb. 2013, www.youtube.com/watch?v=_2My_HOP-bw.
  8. Nepstad, Daniel. “The Myths and the Truth about the Fires in the Amazon.” CNN, Cable News Network, 5 Sept. 2019, www.cnn.com/2019/09/05/opinions/amazon-fires-myths-and-truth-opinion-nepstad/index.html.
  9. Rasmussen, Louise, et al. “Confirmation Bias: 3 Effective (and 3 Ineffective) Cures.” Global Cognition, 12 Oct. 2018, www.globalcognition.org/confirmation-bias-3-cures/.
  10. Robson, David. “Why Are People so Incredibly Gullible?” BBC Future, BBC, 24 Mar. 2016, www.bbc.com/future/article/20160323-why-are-people-so-incredibly-gullible.

Confirmation Bias and Overconfidence Bias: Analytical Essay

Introduction

We have come a long way in terms of analyzing the financial markets. The evolution of theories about markets dates back to the 17th century, when the famous term “tulip mania” was coined after a bubble in an economy was first identified. Now, in the 21st century, we analyze financial markets through the lens of advanced behavioral finance theories.

Going back to the 1970s, the efficient markets hypothesis (EMH) was at the height of its dominance and was assumed to be proven beyond doubt. The emerging idea, that speculative asset prices such as stock prices always incorporate the best information about fundamental values and change only because of good, sensible information, meshed very well with the theoretical trends of the time (Shiller, 2003).

However, Eugene Fama (1970) mentioned in his report that market efficiency theory has some anomalies, such as serial dependencies in stock market returns. In the 1980s, the consistency of the efficient markets hypothesis was first tested in the light of excess volatility, and stock prices turned out to be more volatile than the hypothesis could explain, questioning the basic underpinnings of the entire theory, since most of the volatility remained unexplained.

In the 1990s, the spotlight moved away from econometric analysis toward human psychology models of financial markets, so-called behavioral finance. According to Shiller (2003), feedback models are among the oldest theories about financial markets: when speculative prices go up, creating successes for some investors, this may attract public attention, promote word-of-mouth enthusiasm, and heighten expectations for further price increases. The feedback may be an essential source of much of the apparently inexplicable randomness that we see in financial market prices.

The efficient markets theory, as it is commonly expressed, asserts that when irrational optimists buy a stock, smart money sells, and when irrational pessimists sell a stock, smart money buys, thereby eliminating the effect of the irrational traders on market price. But finance theory does not necessarily imply that smart money succeeds in fully offsetting the impact of ordinary investors. In recent years, research in behavioral finance has shed some important light on the implications of the presence of these two classes of investors for theory and also on some characteristics of the people in the two classes.

Of the various biases studied under behavioral finance, we have focused on three: anchoring bias, confirmation bias, and overconfidence bias. We analyzed four papers, each covering a different bias in depth, with the last one examining two of the biases. After examining the findings of these papers, we conclude by commenting on the intertwined relationship of the biases and how they can be addressed.

Anchoring bias

Of the several systematic biases noted by Tversky and Kahneman (1974) as causing large and predictable forecast errors, anchoring bias is one: the tendency to attach our thoughts to a reference point even when it has no logical relevance. They define anchoring as occurring when “people make estimates by starting from an initial value that is adjusted to yield the final answer… adjustments are typically insufficient… different starting points yield different estimates, which are biased towards the initial values”.

One of the experiments Tversky and Kahneman employed involved half of the subjects estimating the value of 1x2x3x4x5x6x7x8 and the other half estimating 8x7x6x5x4x3x2x1, each within 5 seconds. The median answers from the two groups were 512 and 2,250 respectively, even though the true product in both cases is 40,320. The starting numbers of the series acted as reference points and had a significant impact on the estimates despite the answers being identical, and in both groups the insufficient adjustment left estimates far below the truth. In economic forecasting, the same tendency to underweight new information relative to an anchor can cause forecast errors.

In their paper ‘Anchoring Bias in Consensus Forecasts and Its Effect on Market Prices’, Sean D. Campbell and Steven A. Sharpe investigate surveys by Money Market Services (MMS), which collected widely used expert consensus forecasts between 1991 and 2006. The focus then turns to monthly macroeconomic data releases, which have previously been shown to have a significant impact on market interest rates.

In keeping with early studies on anchoring bias, the macroeconomic forecasts are first tested for the properties of rational expectations by regressing actual values of the data releases (dependent variable) on the most recent forecast (independent variable). The researchers then run an alternative to this basic rationality test, regressing “surprise” values (the difference between actual and forecast) on the previous month’s forecasts.

The equation is revised further so that, finally, the surprise is regressed on the difference between the forecast and the average of the actual values over the past h months. If the slope coefficient on this variable is significantly greater than zero, it implies that the forecasts are systematically biased toward the lagged values.
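The sketch below reproduces the logic of that test on synthetic data rather than the paper’s MMS dataset. The AR(1) process, the 0.5 persistence, and the anchoring weight lam are all invented for illustration; the point is only that the regression recovers a positive slope when forecasts are dragged toward the average of past releases.

```python
import numpy as np

rng = np.random.default_rng(3)
T, h, lam = 300, 3, 0.4        # lam: assumed anchoring weight

# Synthetic data release following an AR(1) process.
actual = np.zeros(T)
for t in range(1, T):
    actual[t] = 0.5 * actual[t - 1] + rng.normal()

# Forecast = blend of the rational expectation and the past-h average.
forecast = np.full(T, np.nan)
for t in range(h, T):
    rational = 0.5 * actual[t - 1]          # true conditional expectation
    anchor = actual[t - h:t].mean()         # average of past h releases
    forecast[t] = (1 - lam) * rational + lam * anchor

# Regress the surprise (actual - forecast) on (forecast - past-h average);
# a significantly positive slope signals anchoring toward past releases.
gap = np.array([forecast[t] - actual[t - h:t].mean() for t in range(h, T)])
surprise = (actual - forecast)[h:]
X = np.column_stack([np.ones_like(gap), gap])
(alpha, slope), *_ = np.linalg.lstsq(X, surprise, rcond=None)
print(f"anchoring slope: {slope:.2f}")  # ~ lam / (1 - lam) = 0.67 here
```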

Earlier studies have shown the substantial impact of news data releases on financial market prices. The additional question here is whether market prices react more to the predictable component or to the residual component of the surprise. The predictable component is calculated as the part of the surprise forecastable by the OLS regression above. If market participants see through the anchoring bias, they should react only to the residual, genuinely new component, and the reaction to the predictable component should be correspondingly muted.
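A toy continuation of the same idea: if yields respond only to genuine news, a regression of the yield change on both components should load on the residual and put roughly zero weight on the predictable part. All magnitudes below are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 250

# Split a synthetic surprise into a predictable part (what an anchoring
# regression like the one above would fit) and a residual (genuine news).
predictable = rng.normal(scale=0.5, size=n)
residual = rng.normal(scale=1.0, size=n)
surprise = predictable + residual           # total surprise, for reference

# Suppose yields move only on genuine news (invented 4 bp per unit).
dy = 4.0 * residual + rng.normal(scale=2.0, size=n)

X = np.column_stack([np.ones(n), predictable, residual])
beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
print(f"reaction to predictable part: {beta[1]:+.2f}")  # ~ 0
print(f"reaction to residual part:    {beta[2]:+.2f}")  # ~ +4
```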

The 2-year and 10-year Treasury yields are taken as a proxy for prices. The data releases that have previously shown a statistically significant impact on prices (Consumer Confidence, Consumer Price Index (CPI), Core CPI, Durable Goods Orders, Industrial Production, ISM Manufacturing Index, New Home Sales, Retail Sales, and Retail Sales ex-auto) are included as independent variables.

The findings from the MMS survey include a mean surprise that is unconditionally unbiased, a standard deviation of forecasts lower than that of the actuals, and a statistically significant degree of negative serial correlation in surprise values. This suggests anchoring of forecasts on the most recent value of the lagged variable.

Regarding the data releases, the interest rate reaction is taken as the difference between the quote 5 minutes before the release and 10 minutes after it. The important finding is that the degree of serial correlation is statistically significant for 3 of the 10 releases, indicating that interest rate responses might be partly predictable. Also, both the one-month and the three-month anchoring models show a significantly positive coefficient for 8 of the 10 releases. This holds in the time-series data as well, with a high degree of persistence.

Anchoring is not only widely prevalent; its size is also substantial. For instance, forecasters put roughly 40% weight on the one-month-lagged value and 60% on the rational expectation for the Consumer Confidence data release.
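In rough equation form (our notation, not the paper’s), such a consensus forecast behaves like: Forecast = 0.4 × (last month’s actual value) + 0.6 × (rational expectation of the current value), so nearly half the weight sits on a stale anchor rather than on new information.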

The other major finding is that the effect of the unexpected component of surprise on yields is significant for 9 of the 10 releases, while market participants do not react significantly to the predictable component. Thus, at least some participants do not take the forecasts at face value but strip the anchoring-bias component out of them.

Still, it would be inappropriate to judge the forecasters as irrational, as there are other considerations: a faulty incentive structure that rewards matching the common view over one’s own private signal, or issuing more conservative forecasts than warranted to minimize the chance of being badly wrong. It is also possible that forecasters view each forecast as part of an underlying trend, which is reflected in a lower standard deviation (smoothing).

Confirmation bias

Confirmation bias, a term coined by Peter Wason in 1960, is the tendency to seek or interpret facts partially toward existing beliefs and one’s own expectations while testing a hypothesis.

In the paper ‘Confirmation Bias: A Ubiquitous Phenomenon in Many Guises’, Raymond S. Nickerson first reviews the experimental evidence for confirmation bias. Second, he lays out practical, real-life examples of the bias. The third section notes the different explanations of the bias proposed by researchers over the years, and the fourth addresses the effects of confirmation bias and its underlying utility.

1. Experimental Evidence:

Empirical evidence suggests that confirmation bias is strong and extensive and appears in various forms. The experimental results also suggest that once a person has taken a stand, defending that stand becomes their primary concern. Even when one has weighed both sides evenly at the outset, the phenomenon persists when one is given further facts against which to test the hypothesis.

Hypothesis-Determined Information Seeking and Interpretation:

One fails to seek information that would tend to counter the initial stand, and thereby fails to consider the alternative hypothesis in the Bayesian framework, leading to inaccurate likelihood ratios, the ratio of the two conditional probabilities of the evidence under each hypothesis in a given scenario (Doherty, Mynatt, Tweney, & Schiavo, 1979; Griffin & Tversky, 1992).
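A small numerical sketch of that Bayesian point, with invented probabilities: the correct update divides P(E|H) by P(E|not-H), so evidence that fits both hypotheses almost equally well should barely move one’s belief.

```python
# A minimal Bayesian update with invented numbers. Evidence E is observed.
p_h = 0.5               # prior probability of the favored hypothesis H
p_e_given_h = 0.8       # P(E | H): how well E fits H
p_e_given_not_h = 0.6   # P(E | not-H): how well E fits the alternative

# The correct update uses the likelihood RATIO, i.e., both conditionals.
likelihood_ratio = p_e_given_h / p_e_given_not_h
posterior_odds = (p_h / (1 - p_h)) * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)
print(f"P(H | E) = {posterior:.2f}")  # 0.57: E is only weak evidence

# A confirmation-biased reasoner checks only P(E | H) = 0.8, sees that E
# "fits", and ends up far more confident than the evidence warrants.
```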

Looking only or primarily for positive cases despite not having vested interests:

One often looks only for examples that would count as illustrations of the sought-for concept if the hypothesis being tested were correct. Wason’s (1960) studies of selective testing with number triplets demonstrated this: people typically tested their hypothesized rules by producing only triplets consistent with those rules, thereby precluding themselves from discovering that the rules were inaccurate.
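The sketch below, with an invented rule and hypothesis in the spirit of Wason’s 2-4-6 task, shows why a positive-test strategy cannot expose a hypothesis that is narrower than the true rule: every triplet the subject generates passes, so the guess looks confirmed.

```python
# In the spirit of Wason's 2-4-6 task: the experimenter's true rule is
# simply "strictly ascending", but the subject hypothesizes "steps of 2".
def true_rule(t):
    return t[0] < t[1] < t[2]

def my_hypothesis(t):
    return t[1] - t[0] == 2 and t[2] - t[1] == 2

# Positive tests: triplets the subject's own hypothesis predicts to pass.
for t in [(2, 4, 6), (10, 12, 14), (1, 3, 5)]:
    assert my_hypothesis(t)
    print(t, "->", true_rule(t))   # all True: the guess looks confirmed

# Only a triplet the hypothesis forbids can reveal that it is too narrow:
print((1, 2, 10), "->", true_rule((1, 2, 10)))  # True anyway!
```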

Overweighting positive confirmatory instances:

People tend to overweight positive confirmatory responses and underweight negative ones. This asymmetry stems from the confidence one has in one’s initial preference. The need for accuracy is one important determinant of hypothesis-evaluating behavior, alongside motivational factors such as self-esteem, control, and cognitive consistency.

One may associate confirmation bias with the perseverance of false beliefs; however, it operates independently of the truth or falsity of the underlying belief. People also tend to express a higher degree of confidence in their initially formed views than is rationally warranted. Being forced to evaluate the contradictory viewpoint, by being asked to think of reasons in its favor, has reduced overconfidence in some instances. Another reason people tend to be overconfident in their knowledge is that once a person fixes their mind on one alternative, they are more inclined to think of reasons behind it and fail to consider possible alternatives.

2. The Confirmation Bias in Real-World Context

Policy Rationalization:

Once a policy has been adopted and implemented in a country, all efforts go into defending it rather than analyzing possible alternatives. This same dynamic kept the US engaged in the war in Vietnam for more than 17 years.

Medicine:

For many years, knowledge was a barrier to knowledge: commonly accepted principles in medicine, formed from mere observation, were accepted without being fully analyzed and tested. Modern science has changed many practices through hypothesis testing conducted free of confirmatory bias.

Judicial Reasoning:

Jurors are expected to refrain from forming judgments and to keep open minds until the deliberation phase; otherwise they may form an early opinion and later focus only on the facts that support that earlier view, producing confirmatory bias. Withholding judgment ensures that they evaluate each fact and piece of evidence carefully, without preconceived notions.

Science:

Both the common man and the scientist have theories based on observation, and both tend to fall prey to confirmation bias by attending to facts that agree with their own conjectures. For years, scientists have resisted each other’s theories and tried their best to come up with alternative explanations for various things in nature: Galileo did not accept Kepler’s hypothesis that the moon is responsible for tidal currents; Newton rejected the idea that the Earth could be more than 6,000 years old; Huygens and Leibniz rejected Newton’s concept of universal gravity.

3. Explanations of the Confirmation Bias

Several factors can be invoked to explain the existence of confirmation bias, such as ego, cognitive limitations, a poor grasp of logic, or some fundamental value. Broadly, however, the paper explains it with the following theories.

Desire to believe:

Researchers have observed that people find it easier to believe propositions they would like to be true than propositions they would prefer to be false. There is a positive correlation between what one considers true and what one desires, and these beliefs are shaped by preferences developed over time. The fact that we typically discount facts that run counter to our beliefs testifies to the importance we attach to them. Motivation (a strong wish to confirm) gives rise to confirmation bias, but cognitive determinants such as our existing beliefs determine its magnitude.

Information-Processing Bases for Confirmation Bias:

People are fundamentally constrained by their minds in such a way that only one hypothesis or one side occupies the mind at a time, and the mind struggles to process its alternatives; this provides an information-processing basis for confirmation bias.

Positive-Test Strategy or Positivity Bias or Congruence heuristic:

In the absence of other evidence, one is more inclined to seek evidence that might prove the hypothesis true than evidence that might prove it false; one rarely attempts to reject the hypothesis outright or to treat it as false.

And generally, it is easy to tell from the hypothesis statement which case is positive and which negative, and one is normally more inclined to try to prove the positive statement true.

Conditional Reference Frames:

When one is asked to reason about why a hypothesis may be true, one becomes somewhat convinced that it is true. Koehler found that producing a reason is not even necessary: simply entertaining a focal hypothesis can convince one that it is true. Calling attention to a desired hypothesis makes it the focal hypothesis, which in turn leads to the adoption of a conditional reference frame within which the focal hypothesis is accepted as true.

Pragmatism and Error Avoidance:

When the consequences of treating a true hypothesis as false are far greater than those of treating a false hypothesis as true, confirmation bias may be dictated by normative models of reasoning and by common sense. In such cases, people lean toward securing small rewards rather than risking a costly consequence, not toward objectively analyzing and testing the hypothesis itself.

Educational Effects:

At every level of education, importance is placed on having reasons for what we believe. If one is forced to practice finding reasons for one’s own beliefs rather than alternatives to them, confirmatory bias becomes hard-wired. Standard teaching methods emphasize giving supporting evidence to strengthen our beliefs rather than countering them or presenting alternative beliefs that may be possible.

4. Utility of Confirmation Bias

Utility in Science: the principle of fallibility:

Hypotheses are made stronger when highly competent scientists attempt to prove them wrong and fail than when moderately competent scientists try to prove them correct. To the extent that this belief is accurate, any new scientific theory should be tested by attempting to negate it. Second, it may be possible to hold a belief for justified reasons without being able to produce concrete evidence for it. Also, in certain cases, conservatism rooted in confirmation bias has nonetheless led to scientific discoveries.

Focus and single-mindedness to overcome conflicts:

Although vague, this theory suggests that, by not overanalyzing and dwelling on infinite alternatives to a hypothesis, it can be beneficial in certain cases to carry a confirmatory bias and satisfy one’s own ego. Precisely these qualities permitted the 17th-century New England Puritans to establish a society with the ingredients necessary for survival and prosperity, where they might otherwise have fallen to the unpredictable perils of the wilderness.

Overconfidence bias

Overconfidence typically refers to an irrational and exaggerated belief in one’s ability to successfully solve a problem or task. In a finance context, it is most often discussed in relation to forecasting events.

Overconfidence bias leads to the false assumption that one is better than others, arising from an inflated sense of one’s own skill, talent, or self-belief.

Understanding the direction of market fluctuations is one of the most important skills in finance and investing. Yet most professionals in this industry think they are above average, which is logically impossible for all of them.

In the publication ‘Self-Serving Attribution Bias, Overconfidence, and the Issuance of Management Forecasts’, Robert Libby and Kristina Rennekamp conducted an abstract experiment and a survey of experienced financial managers concerning the issuance of earnings forecasts.

The researchers primarily wanted to examine whether managers engage in self-serving attribution, which in turn increases overconfidence. Self-serving attribution is the tendency to attribute positive outcomes to one’s own skill and negative outcomes to external factors.

The paper tested the following hypothesized causal relationships:

  1. Higher ratings of first-round performance will be associated with self-serving attribution
  2. Self-serving attribution resulting from favorable perceptions of first-round performance increases confidence that second-round performance will exceed first-round performance
  3. Participants who are higher in stable individual measures of overconfidence will also be more confident that their second-round performance will exceed first-round performance
  4. Participants that are more confident in their second-round performance will be more likely to commit to doing better in the second round than in the first round

The experiment involved 57 MBA students from Cornell University and was designed as a two-round trivia test of 25 questions of mixed difficulty. There were two conditions: a low-difficulty version with 15 easy, 5 moderate, and 5 hard questions, and a high-difficulty version with 5 easy, 5 moderate, and 15 hard questions. Participants earned $2 for each correct answer. After the first round, they were shown how many questions they had answered correctly, but they were never told the difficulty mix; they were then asked to estimate the difficulty of the questions and to attribute their performance to internal factors (“skill” and “effort”) and external factors (“luck” and “difficulty”). The result was that those who performed poorly attributed it to external factors, whereas those who performed well attributed it to internal factors.

The experiment then moved to the next stage, in which the confidence of the high performers about the coming round was elicited. For the second round, participants were told the difficulty mix in advance. Although the difficulty level was the same in both rounds, participants did not know this, since they had been unaware of the mix in the first round. Before committing to the test, they were told they would again earn $2 per correct question, and their confidence about the second round was measured. They were then told that if they committed to improving their performance in the second round relative to the first, they would gain $2.50 per correct answer but lose $1.50 per wrong answer. The last part of the experiment was a true, unmanipulated test of the overconfidence trait: managers were asked to collect their payments a week later. The whole experiment was designed to capture the variation managers face in the real world; in a nutshell, it gauges self-attribution and overconfidence throughout a task that involves forecasting one’s own performance.
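Under one reading of the stakes described above (treating them as per-question payoffs, which is our assumption), a short expected-value check shows that committing is rational only if a participant expects to answer at least 75% of the questions correctly, which is why commitments by weaker performers signal overconfidence.

```python
# Expected payoff per question, assuming the stated stakes apply to each
# question: $2 per correct answer without a commitment, versus $2.50 per
# correct and -$1.50 per wrong answer after committing to improve.
def ev_no_commit(p):   # p = probability of answering a question correctly
    return 2.0 * p

def ev_commit(p):
    return 2.5 * p - 1.5 * (1 - p)

# Break-even: 2.5p - 1.5(1 - p) = 2p  =>  2p = 1.5  =>  p = 0.75.
for p in (0.60, 0.75, 0.90):
    print(f"p={p:.2f}  no commit: ${ev_no_commit(p):.2f}"
          f"  commit: ${ev_commit(p):.2f}")
```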

The results of the experiment supported the hypotheses:

  1. Those who got the low-difficulty set scored higher than those who got the high-difficulty set in both rounds.
  2. Those who scored well attributed their performance to internal factors, and those who performed poorly attributed it to external factors.
  3. After the first round, those who did well believed it was because of internal factors, yet they were less likely to actually improve their second-round performance.
  4. Those with stable individual measures of overconfidence also expected to do better in the second round.
  5. Those who were overconfident and committed to doing better, compared with those who did not commit, showed little difference in actual payouts and in fact made optimistic forecasting errors.

The primary evidence was that managerial overconfidence contributed to optimistic forecasting decisions. The experiment also shows that participants engage in self-serving attribution, giving greater weight to internal than to external factors when explaining good performance. This in turn increased managers’ confidence and expectations of improved future performance, which resulted in more forecasting errors driven by overconfidence and self-serving attribution bias.

Confirmation bias and Overconfidence bias

Let us look at the combined effect of these biases with the help of the research paper ‘Information Valuation and Confirmation Bias in Virtual Communities: Evidence from Stock Message Boards’ by JaeHong Park.

In this research, a study was carried out on a stock message board website to test visitors for confirmation bias and for their inability to rationally evaluate new information they come across.

The study evaluated five hypotheses, namely:

  • a) Subjects with stronger initial beliefs about the future performance of a stock are more likely to display bias
  • b) Subjects with higher perceived knowledge are more likely to show confirmation bias
  • c) Bias will be low when both initial sentiment and perceived knowledge are low
  • d) The gap between forecasted return and actual return will be wider for those who display bias
  • e) A higher extent of bias will lead to greater trading frequency

Testing revealed that all of the initial hypotheses held at the 1% significance level: strength of belief had a positive coefficient of 0.214; perceived knowledge a positive coefficient of 0.268; the combination of the two a negative coefficient of 0.178; the return gap a negative coefficient of 0.147; and trading frequency a positive coefficient of 0.159.

The study revealed that the website visitors did display confirmation bias, which enhanced their overconfidence and boosted their optimism, leading them to rely heavily on their expected returns. The overall perceived benefit of such message boards, initially deemed a source of information for retail investors, was thus called into question.

Confirmation bias in information valuation was an important determinant of investor overconfidence: the stronger the belief, the greater the tendency to seek confirming information and the greater the degree of overconfidence.

This overconfidence led investors to underestimate the volatility of random events in financial markets, which increased differences of opinion among investors and led to higher trading frequencies.

Overconfidence is reflected in the illusion of control over random events, the illusion of knowledge, and self-attribution bias. These in turn encourage investors to trade more, even though doing so may lead to worse performance.

On account of these biases, individual investors are motivated to trade more frequently and to anchor to their expected returns, but they end up with poor performance because higher trade frequency brings greater transaction costs.
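
A minimal sketch of how trading frequency erodes returns through transaction costs; the 8% gross annual return and 0.5% round-trip cost per trade are illustrative assumptions, not figures from the paper:

```python
# Net return after transaction costs: each trade pays a round-trip
# cost straight off the top of the gross annual return.

def net_annual_return(gross_return=0.08, cost_per_trade=0.005,
                      trades_per_year=12):
    return gross_return - cost_per_trade * trades_per_year

print(net_annual_return(trades_per_year=4))   # 0.06  -> 6% net
print(net_annual_return(trades_per_year=24))  # -0.04 -> costs swamp gains
```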

The fundamental reason investors experience confirmation bias, which leads to all the other biases, is the discomfort they feel when presented with contrary information: the stronger the initial belief about future stock performance, the greater the discomfort in processing information that contradicts it.

According to the self-enhancement account, people are motivated to hold positive views of themselves and their future, and thus value confirmatory information more because it enhances their perceived sense of knowledge about relevant information.

Investors who perceive themselves to be knowledgeable likewise back their initial decision and find contrary information uncomfortable to process. By contrast, a person who begins the evaluation without prior sentiment and with low perceived knowledge is more interested in gathering information, strives to evaluate each piece of it rationally, and therefore fits the Bayesian model, avoiding the trap of biases.

Once inside the trap of biases, an investor experiences overconfidence and anchors to his expected return, as a result of which he trades more frequently and realizes greater deviations of actual returns from his initial expectations.

Conclusion

It is important to note that the individual biases observed above do not work in isolation but in tandem; it is the magnified effect of these biases working together that moves an individual away from rational decision making, that is, away from applying the Bayesian framework to the new information they come across.
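
To make the Bayesian benchmark concrete, here is a minimal sketch contrasting a rational update with a confirmation-biased one; the prior and likelihood values are illustrative assumptions:

```python
# Posterior P(hypothesis | evidence) via Bayes' rule.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    num = p_evidence_if_true * prior
    return num / (num + p_evidence_if_false * (1 - prior))

prior = 0.6  # initial belief that a stock will outperform

# Rational investor: contrary evidence (more likely if the belief is
# wrong) pulls the belief down.
print(bayes_update(prior, 0.3, 0.7))   # ~0.39

# Confirmation-biased investor: treats the same evidence as barely
# diagnostic, so the belief hardly moves.
print(bayes_update(prior, 0.45, 0.55)) # ~0.55
```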

As an application of this in everyday life, we find repeated references by market experts saying that Nifty at 10,000 would be a strong support level, and that if Nifty breaks it, 9,750 would be the next strong support, and so on.

People working to identify such support and resistance levels are likely to treat them as reference points for buying and selling, in the hope of leveraging what others in the market think.

They may disregard contrarian evidence (confirmation bias) and keep the trade open, overconfident that they will be proved right. This persists until massive volume builds on the other side of the trade, leaving the trader no choice but to book a heavy loss.

The cycle then continues as the trader searches for the next stock that will recoup the losses from the earlier trade.

Thus, we see that the three biases we analyzed may be intertwined in many cases, and one may lead to another.

To counter these biases, an objective approach (in the sense that opinions must be backed by numbers) helps reduce the importance of reference points. Agility in reacting to new information about one's own investments can also prevent the formation of reference points.

Another must-have is dissenting opinions from credible sources, which help strip away the emotions associated with an investment.

Humility in understanding that even the best investors make mistakes, and in accepting one's own, is key. Learning from our own mistakes as well as others' can reduce overconfidence.

Bibliography

  1. Robert J. Shiller (Winter 2003). From Efficient Markets Theory to Behavioral Finance. The Journal of Economic Perspectives.
  2. Sean D. Campbell and Steven A. Sharpe (April 2009). Anchoring Bias in Consensus Forecasts and Its Effect on Market Prices. The Journal of Financial and Quantitative Analysis.
  3. Raymond S. Nickerson (1998). Confirmation Bias: A Ubiquitous Phenomenon in Many Guises. Review of General Psychology.
  4. Robert Libby and Kristina Rennekamp (2011). Self-Serving Attribution Bias, Overconfidence, and the Issuance of Management Forecasts. Journal of Accounting Research.
  5. JaeHong Park et al. (December 2013). Information Valuation and Confirmation Bias in Virtual Communities: Evidence from Stock Message Boards. Information Systems Research, 1050–1067.

Role of Confirmation Bias in Diagnosing and Treating Mental Disorders

The world consists of a diversity of cultures, and within these cultures there are different definitions of which behaviour counts as normal or abnormal. Because of these differing perspectives, a patient may be wrongfully diagnosed with a potential mental illness. The criteria used to judge behaviour within each culture give rise to what are called clinical biases. Clinical bias can affect a diagnosis through lack of knowledge of mental disorders from other cultures, cultural stereotypes, reporting bias, confirmation bias, and symptoms that vary across cultures. The two studies that I will be discussing, in which clinical biases are addressed, are Mendel et al. (2011) and Naeem et al. (2012). Mendel et al. investigated the role of confirmation bias in the process of diagnosing a patient with a mental disorder. Naeem et al. investigated the effectiveness of cognitive behaviour therapy in Pakistan, given the variety of ways that symptoms of depression can present themselves across cultures.

Mendel et al. focused on the influence of confirmation bias in mental health professionals. Confirmation bias is the tendency to search for information that confirms one's beliefs or hypotheses. The researchers assigned a decision task to 75 medical students and 75 psychiatrists and found that 13% of the psychiatrists and 25% of the students showed confirmation bias when searching for new information after already having made an initial diagnosis. Participants who searched for information confirming their diagnosis were less likely to change their initial diagnosis than those who searched for contradictory information that challenged it. As a result, the participants who showed confirmation bias misdiagnosed 70% of their patients, which led to different treatments being prescribed than by the participants who diagnosed their patients correctly. In conclusion, confirmation bias can lead to wrongful decisions building upon each other. Researchers and psychiatrists should therefore be aware of the impact of confirmation bias and search for balanced information during the diagnostic process.

The study of Naeem et al. examined whether cognitive behaviour therapy would be effective for treating depression in Pakistan, given the variety of presentations of symptoms across cultures, and specifically how symptoms of depressive disorder present themselves among Pakistanis. The researchers interviewed outpatients diagnosed with depression at a university teaching hospital in Pakistan. The purpose of the interview was to determine the extent of the patients' knowledge, so open-ended questions were asked about their general knowledge of mental illnesses, the health care system, and their perceptions of the treatment they had been receiving. The researchers noticed that the patients had little to no knowledge about mental disorders or depression itself and were reluctant to discuss nonmedicinal treatments. This was possibly due to the difficulties of cross-cultural work, as the researchers noted linguistic and interpretation issues. After an improved second interview, the researchers gathered that the Pakistani patients described their illness as a psychosocial problem caused by tension. Clinical bias arose in the form of cross-cultural variation of symptoms: the patients had been given a universal treatment for depressive disorder, namely cognitive behavioural therapy, which was not effective because of the cultural differences, and the treatment therefore had to be modified to fit the patients' needs. In conclusion, the participants were more willing to accept medicinal treatments than therapy because of their perceptions of the illness. It is important for psychiatrists and doctors to know what the patient's knowledge and perception of the illness are before diagnosing, in order to avoid clinical biases.

Clinical biases will always be present because of a person's preconceived knowledge and beliefs. Researchers must therefore be educated in how to eliminate biases as far as possible and use a variety of techniques within an investigation. Education on different cultures should be a requirement, to reduce misdiagnosis in cross-cultural settings. Mendel et al. demonstrated the severity of wrongful decisions in the diagnostic process caused by confirmation bias, and Naeem et al. showed the difference that modifications to cognitive behaviour therapy can make when clinicians are educated on the diverse presentation of symptoms across cultures. When these biases are eliminated or reduced, researchers and psychiatrists are less prone to making inaccurate decisions.