Stratified and Cluster Sampling
Based on class discussions, please answer the following questions with specific and illustrative examples.
Define stratified sampling and provide a unique and original example from your personal or educational experience that differs from the one discussed in class.
Define cluster sampling and provide a unique and original example from your personal or educational experience that differs from the one discussed in class.
Clearly explain the key distinctions between stratified and cluster sampling.
Deliverable: a 1.5- to 2-page paper, double-spaced, with 1-inch margins, in Times New Roman font.
AI Tool Usage Policy
Strict Prohibition: The use of AI tools, including but not limited to ChatGPT, is unequivocally prohibited for all assessment activities outlined in this course.
Detecting Unauthorized AI Use: As part of our commitment to maintaining academic integrity, certain assessment activities will include questions deliberately engineered by AI to yield incorrect answers. These AI-infused questions serve as litmus tests to identify any unauthorized use of AI tools. Engaging in suspicious behavior, such as submitting an answer mirroring an incorrect AI-generated response, will trigger a comprehensive investigation for potential plagiarism.
Consequences of Plagiarism: If plagiarism involving AI-generated responses is confirmed, the outcome will be an “F” grade, reflecting a breach of academic integrity.

Create a survey questionnaire for Netflix’s serial churner subscription issue.
The survey should consist of 20-25 questions and provide insight into the business challenge for Netflix. This may include any screening and demographic questions.
Use a variety of question types.
Include at least one question in each of the four levels of measurement outlined in the Quantitative Research information document that is attached.
Indicate any skip logic (questions to skip based on response to a previous question).
The research framework for Netflix’s serial churner problem and the Quantitative Research Outline are attached for reference and background. The course book can also be used if needed: McDaniel, C., Jr., & Gates, R. (2020). Marketing Research (12th ed.). Wiley. ISBN: 9781119716310 (Chapters 3 and 4).

Step 1: Read the Kings Bluff Brewery Case study (attached document).
Step 2: Read the three supplemental materials (each has a link below):
- Social Network Theory: https://medium.com/swlh/social-network-theory-a-literature-review-for-understanding-innovation-programs-7f1c214e9a77
- Uncertainty Reduction Theory: https://www.youtube.com/watch?v=j5HasECwSyc
- Uses and Gratifications Theory: https://www.communicationtheory.org/uses-and-gratification-theory/
Step 3: Develop concise, thoughtful responses to the four case questions.
Step 4: Copy and paste the four questions (below). Beneath each question, provide your single-paragraph response. Pretend you are a consultant **in 2019**. Note: You’ll need to look at 2019 social media trends and data. Pew Research Center is a fantastic starting point.
Your work should make evident that you:
1. Understand each theory. Do not copy and paste a direct quote explaining the theory. Speak as though you are explaining the theory to a business owner in simple, actionable terms.
2. Reviewed 2019 social media trends at Pew Research Center and made recommendations based on THAT period in time.
3. Typed your work into Word, edited and addressed spelling and grammar issues, then copied and pasted into the discussion board.
4. Understand how to use sources to support your recommendations. You need at least four credible sources.
a. YES: Pew Research Center, HubSpot, Social Media Examiner.
b. NO: Random blogs, unverified online sources.
Question 1: Network Selection and Management
Which social networking sites should KBB use? Who should manage them? Provide supporting rationale and research. Use superscript numbered citations aligned to the resources list at the end of your response.
Question 2: Leveraging Social Networks
Through the lens of social network theory, explain why social media is a critical component of the promotional strategy for KBB, a new business seeking to build community and grow sales. Use superscript numbered citations aligned to the resources list at the end of your response.
Question 3: Understanding Usage Behavior to Strategically Shape Content
What uses and gratifications may drive craft beer drinkers to follow and engage with KBB brand pages on various social networking sites? How can KBB use that information to strategically develop social media content aligned with their mission? Use superscript numbered citations aligned to the resources list at the end of your response.
Question 4: Reducing Consumer Uncertainty
Think about the reasons KBB customers, both prospective and existing, may experience uncertainty. How can KBB use social media to create a sense of certainty and, ideally, move consumers toward a long-term relationship? Use superscript numbered citations aligned to the resources list at the end of your response.

See the attached PDF and follow the instructions.
Avoid plagiarism.
Support your answers with course material concepts from the textbook and scholarly, peer-reviewed journal articles, etc.
References are required; use APA style for writing the references.

- Make sure to avoid plagiarism as much as possible.
- Use Times New Roman font, size 12.
- Use 1.5 line spacing and justified alignment for all paragraphs.
- Use the footer function to insert page numbers.
- Ensure that you follow APA style in your project and references.
- No less than 600 words.
- The assignment must be in Word format only (no PDF). Your file should be saved as a Word doc.
- Up to 20% of the total grade will be deducted for poor assignment structure. Structure includes these elements: paper style, freedom from spelling and grammar mistakes, referencing, and word count.

Create a 15-minute PowerPoint presentation (12 slides) on Chapter 7, taking the pro-information position.
Our group will present a pro-information presentation. This group will present the readings and make an argument as to why the readings support firms using information from consumers to market products that better fulfill their needs.
CHAPTER 7
MINORITY REPORT
The Perils and Occasional Promise of Predictive Algorithms
In the 2002 movie Minority Report, Tom Cruise is on the run, under suspicion as a future murderer based on the prediction made by a trio of semi-psychic humans called “precognitives,” whose ability to predict crimes before they happen has led to a revolution in policing.1 In this future society, based on the precogs’ work, suspects are arrested before they commit crimes. The movie, like the 1956 science fiction story it’s based on, raises questions about self-determination, personhood, autonomy, and free will. For many privacy scholars, as for The Minority Report’s author, Philip K. Dick, privacy and self-determination are interwoven, parts of the same fabric of individual life.
If the story were updated to 2020, the precogs would be replaced by behavioral algorithms being run on a supercomputer somewhere—or, more likely, on an instance of one of the commercial cloud services being offered by Microsoft, Amazon, Google, and other such providers. The kinds of artificial intelligence required to create and compute predictive algorithms would, not too long ago, have only been possible on the hardware provided by high-end, multibillion-dollar supercomputers like IBM’s Watson. Today, however, the capacity to create and run behavior-based algorithms has advanced so rapidly that behavioral prediction algorithms are in widespread use for purposes so common, like targeted advertising, that they have become practically mundane.
In Minority Report, society has become a surveillance state. Everyday activities are recorded; spider-like robots carry out ID checks by pulling back people’s eyelids and scanning their irises; every movement is tracked; and bespoke advertisements automatically populate digital billboards as prospective shoppers walk past them. All of this data is fed to the precogs, who engage in a shadowy, opaque divination process, and then spit out the results that change people’s lives, perhaps preventing crime from time to time, and most certainly leading to the arrest and incarceration of people who, in fact, have not yet done anything wrong. Through the precogs’ work, crime has been eradicated and society at large has accepted the role of widespread, continuous surveillance and behavioral prediction.
The algorithms of today threaten to do the same: to categorize us into narrow boxes in which our individual dignity is devalued, our self-determination is diminished, and, in extreme cases, our very liberty and autonomy are taken away from us.
Behavior-based algorithms are no longer simply being used to describe the past actions of individuals and groups. Now, these algorithms are forward-looking, predicting our likely next steps. And an increasing number of them are teaching themselves how to reach new conclusions. Behavior-based algorithms are being used in a variety of contexts. For example, a suicide text line uses algorithms to triage which calls are more serious and urgent. Medical researchers are using data from computer mouse tracking to predict whether a person will develop Parkinson’s disease.2 Parole boards are using algorithms as part of their decision-making process to predict which inmates are most likely to re-offend.3 And in a growing number of cases, algorithms are reaching conclusions that are tainted by the same kinds of racial and gender bias that pervade society as a whole.4
US law has very little to say, at this point, about the use of behavior-based algorithms to make or support predictions and decisions of these sorts. European law, however, has begun to address this practice, but it isn’t clear whether the regulators’ approach will have a meaningful effect on the greatest risks posed by these programs. These risks include the ways that self-teaching algorithms, or “black boxes” (more on these below), reach decisions that pigeonhole our identities and deprive us of our autonomy, stripping us of the right to be left alone today and of the opportunity for self-determination tomorrow.
HOW ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING WORK
The terms “artificial intelligence” and “machine learning” are sometimes used interchangeably, but there are important differences between them.5
Artificial intelligence (AI) can be broadly thought of as the ability of computer software to make “smart” decisions and control devices in “smart” ways. A good example is the AI system in cars that adjusts your speed when you get too close to the car in front of you. The AI isn’t sentient; your car isn’t thinking for itself. Instead, the car has a set of sensors that detect your car closing the gap with another vehicle, and your car’s AI signals the driver-assist technology to execute a set of preprogrammed responses: apply the brakes, activate a warning light, or turn off cruise control. The car is operating “smart” technology in the sense that it has detected conditions that meet a set of programmed parameters, and the car’s telematics response frees the driver from having to be the sole set of “eyes and ears” to detect a potential hazard. But the car isn’t “thinking”; it isn’t exercising independent judgment, or acting in ways that the driver can’t predict or understand. It’s following a programmed routine.
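To make the contrast concrete, here is a minimal Python sketch of the kind of preprogrammed, rule-following behavior described above. The threshold, sensor inputs, and response names are invented for illustration; no real vehicle system is being described.

```python
# Hypothetical sketch of rule-based "smart" driver assistance.
# Every response below is preprogrammed: the same inputs always
# produce the same outputs, so the behavior is fully explainable.

SAFE_GAP_SECONDS = 2.0  # invented following-distance threshold

def driver_assist(gap_seconds: float, cruise_control_on: bool) -> list[str]:
    """Return the preprogrammed responses for the current sensor reading."""
    responses = []
    if gap_seconds < SAFE_GAP_SECONDS:
        responses.append("activate warning light")
        responses.append("apply brakes")
        if cruise_control_on:
            responses.append("turn off cruise control")
    return responses

print(driver_assist(gap_seconds=1.2, cruise_control_on=True))
# ['activate warning light', 'apply brakes', 'turn off cruise control']
```

There is no judgment and no learning here; the program simply checks conditions against parameters a human wrote down in advance.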
Machine learning (ML), on the other hand, involves precisely the kind of self-teaching computer programs that we don’t fully understand. At its core, ML is a subset of AI in which software programs are given data, and based on that information, the computers teach themselves. Much of ML is based on a computational field known as neural networks, computer systems that have been trained to classify information in the same way that a human brain does.6 Like humans, neural networks take in information and make probabilistic judgments about it. As they make those judgments, they learn from a feedback loop that tells them—either based on more data, or on input from a human—whether their probabilistic judgment was correct. Over the course of many iterations, the ML program “learns.”
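As a rough illustration of that feedback loop, the Python sketch below trains a single artificial neuron on a toy task. The data, learning rate, and iteration count are all invented, and real neural networks stack many such units, but the cycle is the same: make a probabilistic guess, receive feedback, adjust.

```python
import math

# Invented training data: (feature1, feature2) -> label (0 or 1)
data = [((0.1, 0.9), 1), ((0.8, 0.2), 0), ((0.2, 0.8), 1), ((0.9, 0.3), 0)]

w1, w2, bias = 0.0, 0.0, 0.0  # the model's adjustable "knowledge"
learning_rate = 0.5

for epoch in range(1000):
    for (x1, x2), label in data:
        # Probabilistic judgment: a sigmoid squashes the score into (0, 1)
        prob = 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + bias)))
        # Feedback: how far off was the guess?
        error = label - prob
        # Adjustment: nudge each weight to reduce the error next time
        w1 += learning_rate * error * x1
        w2 += learning_rate * error * x2
        bias += learning_rate * error

# After many iterations, the guesses converge toward the feedback.
for (x1, x2), label in data:
    prob = 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + bias)))
    print(f"input=({x1}, {x2}) label={label} predicted={prob:.2f}")
```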
In many cases, the judgments being made by these AI/ML systems are benign. They read a piece of text and gauge the mood of the person who wrote it, whether the words are intended in an angry, happy, or sarcastic tone. They “listen” to a piece of music and assess whether it’s likely to make a human listener happy or sad, and they recommend music with similar characteristics to create streaming radio stations with a coherent “vibe.”7
Machine learning programs have often been described as “black boxes,” because even the system’s own designers can’t explain precisely why the computer reached a particular conclusion.8 The designers can explain what the purpose of the particular neural network is; they can describe the information used to train the algorithm; they can describe other outputs from the computer in the past. But—just as we often don’t know why humans reach the conclusions that they do—computer scientists frequently can’t explain why an ML system reached the conclusion that it did.
Sometimes, those results are troubling, indeed.9 When an algorithm botches the auto-suggest feature on a music playlist, the worst consequence is minor annoyance. But algorithms are increasingly being used in settings that have the potential for two kinds of harms: to invade privacy and to negatively, and baselessly, impact individuals’ lives. For example, when algorithms have been used by parole boards to predict the likelihood of recidivism, the predictions have been less accurate than a purely human assessment.10 When Amazon implemented algorithms to screen job applicants, the algorithms heavily favored white male applicants, perhaps because they were trained on a data set in which the majority of hires historically were white men.11 When the state of Arkansas began using AI to assess the number of caregiver hours Medicaid patients should be entitled to, the results from a proprietary algorithm were a drastically reduced level of care.12 When the city of Houston began using a third-party algorithm on student data as a means of making decisions about teacher evaluations, it turned out that not a single employee in the school district could explain, or even replicate, the determinations that had been made by the algorithm.13 AI in facial recognition systems is notoriously bad at identifying non-white, non-male faces, and countless AI systems have been trained on data that, because it is reflective of historical biases, has a tendency to incorporate and perpetuate those very biases that Western democracies are trying to break free from.14 And in an oddly poignant twist, when Microsoft released an AI machine learning bot, named Tay.ai, onto the internet to test its ability to interact in human-like ways, the bot had to be taken offline within less than twenty-four hours. The bot had been so trained by the cesspool of online bad human behavior that it was spewing racist and hate-filled ideologies.15
DESPITE THE RISKS, AI ISN’T ALL BAD
DoSomething.org is a nonprofit organization that regularly sent out texts to young people who were interested in its messages encouraging action to bring about social change.16 In 2011, one of DoSomething’s communications managers received a text, out of the blue, from someone on their mailing list. The person was in crisis: they reported they had been repeatedly raped by their father, and they were afraid to call the nation’s leading hotline for sexual abuse. The DoSomething employee who received the text messages showed them to Nancy Lublin, DoSomething’s CEO. And Lublin knew she needed to do something.
Within two years, Lublin had launched the nation’s first text-only hotline, the Crisis Text Line (CTL).17 Within four months of its launch, it was providing services for individuals in crisis across all 295 telephone area codes in the United States, and by 2015, the 24/7 service was receiving 15,000 text messages per day. Many of the texts are about situations that, while painful and difficult, do not indicate the texter is in any immediate danger. But about once a day, someone sends in a text indicating that they are seriously and imminently considering suicide; this is someone who needs active, immediate intervention. One of the reasons the text line has gained so much traction is that, according to research, people are more willing to disclose sensitive personal information via text than in person or over the phone.18
CTL’s second hire was a data scientist who approached the challenge of effective crisis message triage first by talking with volunteer crisis counselors around the country to learn from their perspective, and then collecting data and developing algorithms to generate insights from it. According to a lengthy review of CTL that was published in the New Yorker in 2015:
The organization’s quantified approach, based on five million texts, has already produced a unique collection of mental-health data. CTL has found that depression peaks at 8 P.M., anxiety at 11 P.M., self-harm at 4 A.M., and substance abuse at 5 A.M.19
The sheer volume of information available to CTL is striking in the mental health field. By way of contrast, the American Psychiatric Association’s journal, Psychiatric News, published an op-ed in 2017 calling for mental health research to carry out more big data analysis as a way to understand trends, spot patients who might be at risk, and improve delivery of care. According to the article:
When it comes to big data science in mental health care, we live in the dark ages. We have only primitive ways to identify and measure mental health care, missing out on opportunities to mine and learn from the data using strategies that can create new discoveries we have not yet imagined.20
The author argued that ethical data collection would benefit individual patients as well as overall cases, and that the primary health-care system was positioned particularly well to help facilitate those goals.
As it happens, the Crisis Text Line might be paving the way for precisely those innovations. In 2015, CTL was looking into developing predictive algorithms. By 2017, CTL had succeeded in doing just that, with some surprising results. For example, after analyzing its database of 22 million text messages, CTL discovered that the word “ibuprofen” was sixteen times more likely to predict that the person texting would need emergency services than the word “suicide.”21 A crying-face emoji was indicative of high risk, as were some nine thousand other words or word combinations that CTL’s volunteer crisis counselors could now be on the lookout for when interacting with the people texting in. CTL’s algorithmic work didn’t stop there. An article published in 2019 noted that CTL had analyzed 75 million text messages, and from its analysis had generated meaningful data about the most effective language to use in a suicide intervention conversation. Based on those findings, CTL issued updated guidance to its counselors, telling them that it was helpful to express or affirm their concern for the person but that incorporating an apologetic tone (“I hope you don’t mind if I ask, but . . .”) was less effective.22
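The chapter doesn’t detail CTL’s actual method, but one simple way a “sixteen times more likely” figure can arise is by comparing how often a word appears in conversations that ended up requiring emergency services versus all other conversations. The Python sketch below computes that kind of likelihood ratio; every count in it is invented for illustration and does not reflect Crisis Text Line’s data or model.

```python
# Invented counts: word -> (appearances in emergency-level conversations,
#                           appearances in all other conversations)
word_counts = {
    "ibuprofen": (80, 500),
    "suicide": (500, 50_000),
}
total_emergency = 1_000  # invented number of emergency-level conversations
total_other = 100_000    # invented number of other conversations

for word, (in_emergency, in_other) in word_counts.items():
    rate_emergency = in_emergency / total_emergency
    rate_other = in_other / total_other
    # Likelihood ratio: how many times more frequent the word is in
    # emergency-level conversations than elsewhere
    ratio = rate_emergency / rate_other
    print(f"{word!r}: {ratio:.1f}x more frequent in emergency-level conversations")
```

With these made-up numbers, “ibuprofen” comes out sixteen times more indicative than baseline while “suicide” is no more frequent in one group than the other, echoing the counterintuitive finding the text reports. A real triage system would combine thousands of such signals and validate them clinically; the point here is only the shape of the computation.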
By 2017, Facebook announced that it, too, was going to use artificial intelligence in order to assess its users’ emotional states.23 Whatever the merits of the intention behind this move, the privacy implications of Facebook’s decisions were very different from those of the Crisis Text Line. The CTL had both a privacy model and a practical context that protected its users’ expectations and needs. For example, a person who sent a text to CTL received a reply with a link to the privacy policy and a reminder that they could cut short the conversation at any time by typing “stop.” The person reaching out to the CTL was already in some sort of practical difficulty or emotional distress, and they had proactively reached out to CTL. In other words, they knew exactly who they were texting and why and could anticipate that their communications would be reviewed specifically with an eye toward trying to understand what kind of help they needed, and how quickly they needed it. And CTL’s review of the content of text messages served that purpose only: to provide help. This is a very different purpose than serving up targeted advertising based on the content of the users’ messages—which is the core business purpose for so many algorithms that run across digital platforms. CTL’s focus on supporting positive mental health outcomes was evident in other aspects of the organization’s structure as well, from its nonprofit status to the fact that it has a Chief Medical Officer, a Clinical Advisory Board, and a Data, Ethics, and Research Advisory Board, all composed of experts in the fields of medicine, psychiatry, social work, data science, biomedical ethics, and related fields.24
Facebook’s foray into mental health assessments began in 2016, when it announced it was adding tools that would let a user flag messages from friends who they believed might be at risk of suicide or self-harm, teeing up the post for review by a team of Facebook employees.25 According to the New York Times, these Facebook tools marked “the biggest step by a major technology company to incorporate suicide prevention tools into its platform.”26
The Facebook model, although initially well received by the press, was almost the antithesis of CTL’s approach. Facebook’s worldwide user base was using its platform for other purposes: to share pictures of their travels, keep in touch with their friends, advertise their business, and find news and laugh at memes. Before these new features were rolled out, Facebook’s users had no reason to expect that algorithms would run across their posts, likes, and shares with an eye toward assessing their mood or mental health. And they certainly weren’t looking to Facebook to use algorithmic conclusions to serve up tailored advertising based on the AI’s assessment of what they might need. The circumstances were ripe for misuse: it wasn’t hard to imagine people making posts, and flagging them, as pranks. And it isn’t a far stretch to imagine Facebook serving up ads to users when their mental state leaves them at their most vulnerable, enticing them to make impulse buys.
The hazards of Facebook’s mental health activities were underscored by the fact that the company had been penalized as far back as 2011 for using personal data in unfair and deceptive ways, and had more recently tested whether it could manipulate its users’ emotions—make them feel more optimistic or pessimistic—by changing the news stories that popped up in their feeds.27 When confronted with an outcry over those past abuses, Facebook announced it would provide its developers with research ethics training and that future experimental research would be reviewed by a team of in-house officials at Facebook—but that there would be no external review or external advisory body, nor any disclosure of the decision-making relating to Facebook’s use of its platform to carry out human psychological or behavioral research.28
When the company made its 2017 announcement that it was using artificial intelligence to assess whether its users were suicidal, it noted that the tools weren’t being applied to users in the European Union, as this kind of behavioral profiling would likely have been impermissible under European data privacy law.29 Although Facebook presented this effort as a way to provide a socially beneficial service to its global user base, the company did not provide details on how the AI was tested or validated, or what privacy protections were in place. It did, however, note that, in geographic areas where the tool was deployed, users didn’t have the ability to opt out.30 Further, Facebook wasn’t planning to share any results with academics or researchers who might be able to use the information to broaden understanding in the suicide prevention and crisis intervention fields.31
On the contrary, there were reports in 2017 that Facebook was showing advertisers how Facebook could help them identify and take advantage of Facebook users’ emotional states. Ads could be targeted to teenagers, for example, at times when the platform’s AI showed they were feeling “insecure,” “worthless,” and like they “need a confidence boost,” according to leaked documents based on research quietly conducted by the social network.32 Despite the lack of research ethics or privacy protections, and the apparent profit motive for this move, one survey showed that, by a margin of 56 to 44 percent, people didn’t view Facebook’s suicide-risk detection to be an invasion of privacy.33 One would have to expect that, if the same poll were done today, the results might be very different, as Facebook spent much of 2018 and 2019 fending off a series of damaging news reports and investigations by legislators and regulatory bodies around the world relating to concerns that its privacy policies were lax at best—and unconscionable and illegal at worst.
Back to the good news: the Crisis Text Line seems to offer an example of the ways in which large sets of data can be collected and analyzed for insights relating to highly personal, sensitive topics, and make valuable contributions to enhancing health and well-being, without compromising the privacy of the individuals whose data is being reviewed. As leading voices in the field of AI continue to reiterate, the goal shouldn’t be to prevent altogether the development and use of AI. Rather, the goal should be for humans to follow a very ML-like process of learning from the feedback provided both by failed AI experiments and by their successes, and use those results to continuously improve the approach. There’s promising work underway from academic researchers and the private sector on two fronts: how to understand what happens in the black box, and what kind of AI code of ethics is needed in the design and use of machine learning systems. The EU has legislated protections against automated decision-making, and the US Congress has been holding hearings on AI, ranging from concerns over personal privacy and bias in AI to how the technology can be effectively used. Perhaps, then, if we continue this feedback and improvement loop, it will prove that we are as capable as our machines.

Please paraphrase the paragraph below:
“Strategic marketing stands as a fundamental pillar in the realm of successful commercial enterprises, extending beyond mere promotional activities to encompass a comprehensive and forward-thinking approach. This paper delves into the intricacies of strategic marketing, emphasizing its role in planning, developing, and executing initiatives aligned with overarching business objectives. Central to this approach is a deep understanding of market dynamics, consumer behaviors, and industry trends, coupled with a keen awareness of internal strengths, weaknesses, opportunities, and threats. The essence of strategic marketing lies in the creation of a distinct competitive advantage, leveraging an organization’s unique capabilities to deliver exceptional value to customers. Through segmentation and tailored marketing efforts, strategic marketing seeks to cultivate a robust brand identity that resonates deeply with its target audience. This discussion evaluates the various facets of strategic marketing, highlighting its complexity and the interplay between internal and external factors. The conclusion underscores the multifaceted nature of crafting a strong marketing strategy, emphasizing the importance of adaptability, foresight, and collaboration across organizational functions. In an ever-evolving business landscape, the enduring principles of strategic marketing remain indispensable for achieving sustained success.”

Your Assignment #5 relates to the Chapter 19 Cengage Activity Video Analysis, “New round of price increases to hit grocery stores in 2022.” It is designed to reinforce the learning objectives of the course and, in conjunction with the final exam, will provide a measure of your knowledge of the material and your critical thinking skills.

Instructions
Your video analysis answers must be written in the form of an essay, formatted in APA style, no less than 2 full pages of written content (300 words per page).
Include multiple academic references/citations (the textbook and at least one other source, preferably the ProQuest or Statista databases from the FNU Library) that will support the content of the analysis.
(The assigned video provides relevant information that correlates to the assigned chapter.)
View the video “New round of price increases to hit grocery stores in 2022” for Chapter 19, “Pricing Concepts.”

Discuss the following video questions:
How do price increases caused by environmental factors (recession or inflation) affect marketing strategy?
How does price affect your view of a product’s quality, brand, or benefits?

Case Study
Read the Chapter Case Study “McDonald’s—Colonel Sanders would be Proud, KFC is a Global Brand” from Chapter 8, “Global Marketing,” in your textbook/e-book, Marketing (8th ed.) by Dhruv Grewal and Michael Levy (2022), and answer the following questions:
1. While expanding globally, which sociocultural factors do you think have affected KFC?
2. On what basis would you differentiate the growth strategies taken by KFC in the United States and China?
3. Based on your understanding of the BRIC nations, should KFC consider expanding more aggressively into (a) India, (b) Brazil, and (c) Russia? What national features of these countries would provide reasons to support or contradict such an expansion strategy?

Part B: Critical Thinking

1. Think about the various soft drinks that you know from your local market (like Coca-Cola, Pepsi, 7-Up, etc.). Critically examine how these various brands position themselves in the Saudi Arabian market. (CH-9)
Important Notes:
- Avoid plagiarism.
- Support your answers with course material concepts from the textbook and scholarly, peer-reviewed journal articles, etc.
- References are required; use APA style for writing the references.