Create a 15-minute PowerPoint presentation on Chapter 7, Pro-Information (12 slides).
Our group will give a pro-information presentation. We will present the readings and argue that they support firms using information from consumers to market products that better fulfill consumers' needs.
CHAPTER 7
MINORITY REPORT
The Perils and Occasional Promise of Predictive Algorithms
In the 2002 movie Minority Report, Tom Cruise is on the run, under suspicion as a future murderer based on the prediction made by a trio of semi-psychic humans called “precognitives,” whose ability to predict crimes before they happen has led to a revolution in policing.1 In this future society, based on the precogs’ work, suspects are arrested before they commit crimes. The movie, like the 1956 science fiction story it’s based on, raises questions about self-determination, personhood, autonomy, and free will. For many privacy scholars, as for The Minority Report’s author, Philip K. Dick, privacy and self-determination are interwoven, parts of the same fabric of individual life.
If the story were updated to 2020, the precogs would be replaced by behavioral algorithms being run on a supercomputer somewhere—or, more likely, on an instance of one of the commercial cloud services being offered by Microsoft, Amazon, Google, and other such providers. The kinds of artificial intelligence required to create and compute predictive algorithms would, not too long ago, have only been possible on the hardware provided by high-end, multibillion-dollar supercomputers like IBM’s Watson. Today, however, the capacity to create and run behavior-based algorithms has advanced so rapidly that behavioral prediction algorithms are in widespread use for purposes so common, like targeted advertising, that they have become practically mundane.
In Minority Report, society has become a surveillance state. Everyday activities are recorded; spider-like robots carry out ID checks by pulling back people’s eyelids and scanning their irises; every movement is tracked; and bespoke advertisements automatically populate digital billboards as prospective shoppers walk past them. All of this data is fed to the precogs, who engage in a shadowy, opaque divination process, and then spit out the results that change people’s lives, perhaps preventing crime from time to time, and most certainly leading to the arrest and incarceration of people who, in fact, have not yet done anything wrong. Through the precogs’ work, crime has been eradicated and society at large has accepted the role of widespread, continuous surveillance and behavioral prediction.
The algorithms of today threaten to do the same: to categorize us into narrow boxes in which our individual dignity is devalued, our self-determination is diminished, and, in extreme cases, our very liberty and autonomy are taken away from us.
Behavior-based algorithms are no longer simply being used to describe the past actions of individuals and groups. Now, these algorithms are forward-looking, predicting our likely next steps. And an increasing number of them are teaching themselves how to reach new conclusions. Behavior-based algorithms are being used in a variety of contexts. For example, a suicide text line uses algorithms to triage which incoming texts are most serious and urgent. Medical researchers are using data from computer mouse tracking to predict whether a person will develop Parkinson’s disease.2 Parole boards are using algorithms as part of their decision-making process to predict which inmates are most likely to re-offend.3 And in a growing number of cases, algorithms are reaching conclusions that are tainted by the same kinds of racial and gender bias that pervade society as a whole.4
US law has very little to say, at this point, about the use of behavior-based algorithms to make or support predictions and decisions of these sorts. European law, however, has begun to address this practice, but it isn’t clear whether the regulators’ approach will have a meaningful effect on the greatest risks posed by these programs. These risks include the ways that self-teaching algorithms, or “black boxes” (more on these below), reach decisions that pigeonhole our identities and deprive us of our autonomy, stripping us of the right to be left alone today and of the opportunity for self-determination tomorrow.
HOW ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING WORK
The terms “artificial intelligence” and “machine learning” are sometimes used interchangeably, but there are important differences between them.5
Artificial intelligence (AI) can be broadly thought of as the ability of computer software to make “smart” decisions and control devices in “smart” ways. A good example is the AI system in cars that adjusts your speed when you get too close to the car in front of you. The AI isn’t sentient; your car isn’t thinking for itself. Instead, the car has a set of sensors that detect your car closing the gap with another vehicle, and your car’s AI signals the driver-assist technology to execute a set of preprogrammed responses: apply the brakes, activate a warning light, or turn off cruise control. The car is operating “smart” technology in the sense that it has detected conditions that meet a set of programmed parameters, and the car’s telematics response frees the driver from having to be the sole set of “eyes and ears” to detect a potential hazard. But the car isn’t “thinking”; it isn’t exercising independent judgment, or acting in ways that the driver can’t predict or understand. It’s following a programmed routine.
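To make the distinction concrete, this kind of “smart” but non-learning behavior can be captured in a short, entirely preprogrammed routine. The sketch below is purely illustrative; the sensor inputs, thresholds, and responses are hypothetical stand-ins, not drawn from any real vehicle system:

```python
# Illustrative, rule-based driver-assist routine: "smart," but not learning.
# All sensor values, thresholds, and responses here are hypothetical.

def driver_assist_response(gap_meters: float, own_speed_kph: float) -> list[str]:
    """Return the preprogrammed responses for the current following distance."""
    responses = []
    # Rough rule of thumb: keep about two seconds of following distance.
    safe_gap_meters = 2.0 * (own_speed_kph / 3.6)  # speed in m/s, times two seconds

    if gap_meters < safe_gap_meters:
        responses.append("activate warning light")
        responses.append("disengage cruise control")
    if gap_meters < 0.5 * safe_gap_meters:
        responses.append("apply brakes")
    return responses

if __name__ == "__main__":
    # Closing quickly on a slower car at highway speed.
    print(driver_assist_response(gap_meters=20.0, own_speed_kph=110.0))
```

Every branch in this routine was written in advance by a human; the program never revises its own rules, which is precisely what separates it from the machine learning systems discussed next.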
Machine learning (ML), on the other hand, involves precisely the kind of self-teaching computer programs that we don’t fully understand. At its core, ML is a subset of AI in which software programs are given data, and based on that information, the computers teach themselves. Much of ML is based on a computational field known as neural networks, computer systems that have been trained to classify information in the same way that a human brain does.6 Like humans, neural networks take in information and make probabilistic judgments about it. As they make those judgments, they learn from a feedback loop that tells them—either based on more data, or on input from a human—whether their probabilistic judgment was correct. Over the course of many iterations, the ML program “learns.”
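That feedback loop can be sketched in a few lines of code. The example below is a deliberately tiny, hand-built “neuron” trained on made-up data; real neural networks stack many such units and train on vastly more data, but the learn-from-error cycle is the same:

```python
# Minimal sketch of the feedback loop that drives machine learning.
# A single artificial "neuron" learns to separate two clusters of points;
# the data, learning rate, and number of passes are invented for illustration.
import random

def predict(weights, bias, x):
    # Make a judgment: positive score -> class 1, otherwise class 0.
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if score > 0 else 0

# Toy training data: points near (0, 0) are class 0, points near (5, 5) are class 1.
random.seed(0)
data = (
    [([random.gauss(0, 1), random.gauss(0, 1)], 0) for _ in range(50)]
    + [([random.gauss(5, 1), random.gauss(5, 1)], 1) for _ in range(50)]
)

weights, bias, lr = [0.0, 0.0], 0.0, 0.1
for epoch in range(10):                      # many iterations ...
    for x, label in data:
        guess = predict(weights, bias, x)
        error = label - guess                # ... with feedback on each guess
        # The feedback nudges the weights toward a better judgment next time.
        weights = [w + lr * error * xi for w, xi in zip(weights, x)]
        bias += lr * error

correct = sum(predict(weights, bias, x) == label for x, label in data)
print(f"After training: {correct}/{len(data)} correct")
```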
In many cases, the judgments being made by these AI/ML systems are benign. They read a piece of text and gauge the mood of the person who wrote it, whether the words are intended in an angry, happy, or sarcastic tone. They “listen” to a piece of music and assess whether it’s likely to make a human listener happy or sad, and they recommend music with similar characteristics to create streaming radio stations with a coherent “vibe.”7
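A toy version of that kind of mood-gauging system might look like the following sketch, which trains a simple text classifier on a handful of invented example sentences; real systems rely on far larger labeled corpora and much richer models:

```python
# Toy sketch of text "mood" classification, in the spirit described above.
# The tiny training set is invented; real systems use large labeled corpora.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "I love this, what a wonderful day",
    "This made me so happy, thank you",
    "I am furious, this is unacceptable",
    "Absolutely terrible, I'm so angry right now",
]
train_moods = ["happy", "happy", "angry", "angry"]

# Bag-of-words features feeding a simple probabilistic classifier.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(train_texts, train_moods)

print(model.predict(["thank you, this is wonderful"]))     # likely 'happy'
print(model.predict(["this is terrible and I am angry"]))  # likely 'angry'
```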
Machine learning programs have often been described as “black boxes,” because even the system’s own designers can’t explain precisely why the computer reached a particular conclusion.8 The designers can explain what the purpose of the particular neural network is; they can describe the information used to train the algorithm; they can describe other outputs from the computer in the past. But—just as we often don’t know why humans reach the conclusions that they do—computer scientists frequently can’t explain why an ML system reached the conclusion that it did.
Sometimes, those results are troubling, indeed.9 When an algorithm botches the auto-suggest feature on a music playlist, the worst consequence is minor annoyance. But algorithms are increasingly being used in settings that have the potential for two kinds of harms: to invade privacy and to negatively, and baselessly, impact individuals’ lives. For example, when algorithms have been used by parole boards to predict the likelihood of recidivism, the predictions have been less accurate than a purely human assessment.10 When Amazon implemented algorithms to screen job applicants, the algorithms heavily favored white male applicants, perhaps because they were trained on a data set in which the majority of hires historically were white men.11 When the state of Arkansas began using AI to assess the number of caregiver hours Medicaid patients should be entitled to, the results from a proprietary algorithm were a drastically reduced level of care.12 When the city of Houston began using a third-party algorithm on student data as a means of making decisions about teacher evaluations, it turned out that not a single employee in the school district could explain, or even replicate, the determinations that had been made by the algorithm.13 AI in facial recognition systems is notoriously bad at identifying non-white, non-male faces, and countless AI systems have been trained on data that, because it is reflective of historical biases, has a tendency to incorporate and perpetuate those very biases that Western democracies are trying to break free from.14 And in an oddly poignant twist, when Microsoft released an AI machine learning bot, named Tay.ai, onto the internet to test its ability to interact in human-like ways, the bot had to be taken offline in less than twenty-four hours. The bot had been so trained by the cesspool of online bad human behavior that it was spewing racist and hate-filled ideologies.15
DESPITE THE RISKS, AI ISN’T ALL BAD
DoSomething.org is a nonprofit organization that regularly sent out texts to young people who were interested in its messages encouraging action to bring about social change.16 In 2011, one of DoSomething’s communications managers received a text, out of the blue, from someone on their mailing list. The person was in crisis: they reported they had been repeatedly raped by their father, and they were afraid to call the nation’s leading hotline for sexual abuse. The DoSomething employee who received the text messages showed them to Nancy Lublin, DoSomething’s CEO. And Lublin knew she needed to do something.
Within two years, Lublin had launched the nation’s first text-only hotline, the Crisis Text Line (CTL).17 Within four months of its launch, it was providing services for individuals in crisis across all 295 telephone area codes in the United States, and by 2015, the 24/7 service was receiving 15,000 text messages per day. Many of the texts are about situations that, while painful and difficult, do not indicate the texter is in any immediate danger. But about once a day, someone sends in a text indicating that they are seriously and imminently considering suicide; this is someone who needs active, immediate intervention. One of the reasons the text line has gained so much traction is that, according to research, people are more willing to disclose sensitive personal information via text than in person or over the phone.18
CTL’s second hire was a data scientist who approached the challenge of effective crisis message triage first by talking with volunteer crisis counselors around the country to learn from their perspective, and then by collecting data and developing algorithms to generate insights from it. According to a lengthy review of CTL that was published in the New Yorker in 2015:
The organization’s quantified approach, based on five million texts, has already produced a unique collection of mental-health data. CTL has found that depression peaks at 8 P.M., anxiety at 11 P.M., self-harm at 4 A.M., and substance abuse at 5 A.M.19
The sheer volume of information available to CTL is striking in the mental health field. By way of contrast, the American Psychiatric Association’s journal, Psychiatric News, published an op-ed in 2017 calling for mental health research to carry out more big data analysis as a way to understand trends, spot patients who might be at risk, and improve delivery of care. According to the article:
When it comes to big data science in mental health care, we live in the dark ages. We have only primitive ways to identify and measure mental health care, missing out on opportunities to mine and learn from the data using strategies that can create new discoveries we have not yet imagined.20
The author argued that ethical data collection would benefit individual patients as well as care overall, and that the primary health-care system was particularly well positioned to help facilitate those goals.
As it happens, the Crisis Text Line might be paving the way for precisely those innovations. In 2015, CTL was looking into developing predictive algorithms. By 2017, CTL had succeeded in doing just that, with some surprising results. For example, after analyzing its database of 22 million text messages, CTL discovered that the word “ibuprofen” was sixteen times more likely to predict that the person texting would need emergency services than the word “suicide.”21 A crying-face emoji was indicative of high risk, as were some nine thousand other words or word combinations that CTL’s volunteer crisis counselors could now be on the lookout for when interacting with the people texting in. CTL’s algorithmic work didn’t stop there. An article published in 2019 noted that CTL had analyzed 75 million text messages, and from its analysis had generated meaningful data about the most effective language to use in a suicide intervention conversation. Based on those findings, CTL issued updated guidance to its counselors, telling them that it was helpful to express or affirm their concern for the person but that incorporating an apologetic tone (“I hope you don’t mind if I ask, but . . .”) was less effective.22
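To illustrate the idea, and only the idea, a word-weighted triage scheme could be sketched as below. This is emphatically not CTL’s model, which is proprietary and far more sophisticated; every weight and example message here is invented, apart from echoing the reported relationship between “ibuprofen” and “suicide”:

```python
# Illustrative word-weighted triage sketch, loosely in the spirit of the
# CTL findings described above. All weights are hypothetical, except that
# "ibuprofen" is weighted far above "suicide" to echo the reported finding.
RISK_WEIGHTS = {
    "ibuprofen": 16.0,   # echoes the 16x finding; value is illustrative only
    "suicide": 1.0,
    "pills": 8.0,        # hypothetical weight
    "alone": 2.0,        # hypothetical weight
}

def risk_score(message: str) -> float:
    """Sum the weights of risk-associated words appearing in the message."""
    return sum(RISK_WEIGHTS.get(word, 0.0) for word in message.lower().split())

def triage(queue: list[str]) -> list[str]:
    """Order waiting messages so the highest-scoring ones are answered first."""
    return sorted(queue, key=risk_score, reverse=True)

if __name__ == "__main__":
    waiting = [
        "my parents keep fighting and i feel ignored",
        "i am alone and i just took a bottle of ibuprofen",
        "i have been thinking about suicide lately",
    ]
    for msg in triage(waiting):
        print(f"{risk_score(msg):5.1f}  {msg}")
```

In a real deployment this kind of ranking would only surface messages for human counselors to answer sooner; it would never be used to decide that a message deserves no attention at all.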
By 2017, Facebook announced that it, too, was going to use artificial intelligence in order to assess its users’ emotional states.23 Whatever the merits of the intention behind this move, the privacy implications of Facebook’s decisions were very different from those of the Crisis Text Line. The CTL had both a privacy model and a practical context that protected its users’ expectations and needs. For example, a person who sent a text to CTL received a reply with a link to the privacy policy and a reminder that they could cut short the conversation at any time by typing “stop.” The person reaching out to the CTL was already in some sort of practical difficulty or emotional distress, and they had proactively reached out to CTL. In other words, they knew exactly who they were texting and why and could anticipate that their communications would be reviewed specifically with an eye toward trying to understand what kind of help they needed, and how quickly they needed it. And CTL’s review of the content of text messages served that purpose only: to provide help. This is a very different purpose than serving up targeted advertising based on the content of the users’ messages—which is the core business purpose for so many algorithms that run across digital platforms. CTL’s focus on supporting positive mental health outcomes was evident in other aspects of the organization’s structure as well, from its nonprofit status to the fact that it has a Chief Medical Officer, a Clinical Advisory Board, and a Data, Ethics, and Research Advisory Board, all composed of experts in medicine, psychiatry, social work, data science, biomedical ethics, and related fields.24
Facebook’s foray into mental health assessments began in 2016, when it announced it was adding tools that would let a user flag messages from friends who they believed might be at risk of suicide or self-harm, teeing up the post for review by a team of Facebook employees.25 According to the New York Times, these Facebook tools marked “the biggest step by a major technology company to incorporate suicide prevention tools into its platform.”26
The Facebook model, although initially well received by the press, was almost the antithesis of CTL’s approach. Facebook’s worldwide user base was using its platform for other purposes: to share pictures of their travels, keep in touch with their friends, advertise their businesses, and find news and laugh at memes. Before these new features were rolled out, Facebook’s users had no reason to expect that algorithms would run across their posts, likes, and shares with an eye toward assessing their mood or mental health. And they certainly weren’t looking to Facebook to use algorithmic conclusions to serve up tailored advertising based on the AI’s assessment of what they might need. The circumstances were ripe for misuse: it wasn’t hard to imagine people making posts, and flagging them, as pranks. And it isn’t a stretch to imagine Facebook serving up ads to users when their mental state leaves them at their most vulnerable, enticing them to make impulse buys.
The hazards of Facebook’s mental health activities were underscored by the fact that the company had been penalized as far back as 2011 for using personal data in unfair and deceptive ways, and had more recently tested whether it could manipulate its users’ emotions—make them feel more optimistic or pessimistic—by changing the news stories that popped up in their feeds.27 When confronted with an outcry over those past abuses, Facebook announced it would provide its developers with research ethics training and that future experimental research would be reviewed by a team of in-house officials at Facebook—but that there would be no external review or external advisory body, nor any disclosure of the decision-making relating to Facebook’s use of its platform to carry out human psychological or behavioral research.28
When the company made its 2017 announcement that it was using artificial intelligence to assess whether its users were suicidal, it noted that the tools weren’t being applied to users in the European Union, as this kind of behavioral profiling would likely have been impermissible under European data privacy law.29 Although Facebook presented this effort as a way to provide a socially beneficial service to its global user base, the company did not provide details on how the AI was tested or validated, or what privacy protections were in place. It did, however, note that, in geographic areas where the tool was deployed, users didn’t have the ability to opt out.30 Further, Facebook wasn’t planning to share any results with academics or researchers who might be able to use the information to broaden understanding in the suicide prevention and crisis intervention fields.31
On the contrary, there were reports in 2017 that Facebook was showing advertisers how Facebook could help them identify and take advantage of Facebook users’ emotional states. Ads could be targeted to teenagers, for example, at times when the platform’s AI showed they were feeling “insecure,” “worthless,” and like they “need a confidence boost,” according to leaked documents based on research quietly conducted by the social network.32 Despite the lack of research ethics or privacy protections, and the apparent profit motive for this move, one survey showed that, by a margin of 56 to 44 percent, people didn’t view Facebook’s suicide-risk detection as an invasion of privacy.33 One would have to expect that, if the same poll were done today, the results might be very different, as Facebook spent much of 2018 and 2019 fending off a series of damaging news reports and investigations by legislators and regulatory bodies around the world relating to concerns that its privacy policies were lax at best—and unconscionable and illegal at worst.
Back to the good news: the Crisis Text Line seems to offer an example of the ways in which large sets of data can be collected and analyzed for insights relating to highly personal, sensitive topics, and make valuable contributions to enhancing health and well-being, without compromising the privacy of the individuals whose data is being reviewed. As leading voices in the field of AI continue to reiterate, the goal shouldn’t be to prevent altogether the development and use of AI. Rather, the goal should be for humans to follow a very ML-like process of learning from the feedback provided both by failed AI experiments and by their successes, and use those results to continuously improve the approach. There’s promising work underway from academic researchers and the private sector on two fronts: how to understand what happens in the black box, and what kind of AI code of ethics is needed in the design and use of machine learning systems. The EU has legislated protections against automated decision-making, and the US Congress has been holding hearings on AI, ranging from concerns over personal privacy and bias in AI to how the technology can be effectively used. Perhaps, then, if we continue this feedback and improvement loop, it will prove that we are as capable as our machines.
