In response to a request by NorthWest Consultants Ltd., I have made recommendations for the use of Artificial Intelligence at Peterson Center on Healthcare. AI already has widespread ramifications that have changed the healthcare sector, and Peterson Center on Healthcare wants to be part of this transformation. Nonetheless, as AI transforms the patient experience and the routines and workload of healthcare professionals, Peterson Center on Healthcare must address the emerging dilemmas. The major issues identified include interference with patients' private and confidential data during algorithmic data analysis. Another challenge is the lack of trust and accountability, since the hospital would have no clear party to hold responsible if an AI system makes an error.
Without a doubt, issues of accountability, safety, and transparency remain essential in AI implementation. The solutions proposed in the report include addressing the legal and health implications of AI, including issues such as medical malpractice, product ownership, and the liability that emerges when the health institution uses 'black-box' algorithms.
Another recommendation is to use AI technology as a complementary tool rather than as a replacement for healthcare professionals. Users' technical and professional expertise remains essential in interpreting AI test results and identifying ethical dilemmas.
Artificial intelligence is the science of developing intelligent machines and computer programs with the ability to think and operate as human beings do. According to Juneja (2019), artificial intelligence rests on the philosophical question of whether a machine can acquire the same intelligence as a human. The main functions of AI include developing systems with expertise in behavior, learning, and demonstration that can offer valuable advice to users. AI also aids in combining human thinking with machine intelligence. While companies intend to adopt AI, many ethical issues have arisen that must be addressed (Bresnick, 2018). As of June 2019, Peterson Center on Healthcare has adopted, and intends to extend, AI across all its operations to improve the delivery of healthcare services. Based on previous reports, AI would also enhance other operations such as logistics optimization, fraud detection, and research analysis, transforming company operations.
The company intends to follow in the footsteps of giants such as Amazon, Facebook, IBM, and Microsoft in exploring the boundless landscape of AI. However, previous research by Peterson Center on Healthcare has identified major ethical issues associated with AI, hence the need to conduct a risk assessment of the emerging technology. The major ethical issues addressed in the report, based on previous research, include AI causing unemployment, inequality in wealth distribution, AI's effect on humanity and human behavior, the security of AI technology, and the inability to account for AI's lack of genuine intelligence (Peek et al., 2015). Other issues include the elimination of AI bias, the prevention of unintended consequences and malicious use by researchers, robot rights, and ways for human beings to remain in control of complex intelligent applications.
A major ethical issue raised concerning AI is the technology replacing human workers because of its greater capability and intelligence. According to Luxton (2014a), machine learning has allowed data scientists and robotics engineers to achieve high levels of autonomous intelligence in areas affecting human life such as self-driving cars, disease detection, and other forms of data analysis. Automation reduces the need for human labor, eliminating the physical work associated with the industrial age (Luxton, 2014a). Labor has shifted toward cognitive work that prioritizes strategic tasks for managing emerging global issues. The areas identified as facing increased job losses include surgical, nursing, and radiology services.
The report confirmed the growing momentum behind AI development, including a new generation of AI-related tools and services with the power to transform healthcare. The main areas affecting Peterson Center on Healthcare include medical specialties, cancer detection, and radiology, with AI proving essential in image interpretation. Moreover, consumer-facing apps have the potential to offer affordable and easily accessible healthcare services to many people globally (Luxton, 2014b).
According to Peek et al. (2015), smart devices have transformed our homes in terms of security and operation and could aid the health sector through early disease detection and proper management. However, a major issue raised is trust, with current symptom-checker apps shown to outperform doctors in disease diagnosis. This raises an ethical dilemma over whether such smart apps are currently better than the doctors in our healthcare facilities. According to Rigby (2015), the world has already adopted digital therapeutics, meaning that in the future a symptom-checker app could be upgraded to diagnose and then prescribe another app with the power to treat the symptoms.
The ability of software to prescribe without human aid implies massive job losses, and the challenge lies in the industry accepting the technology and patients embracing it. The research established that a computer program integrated with AI can diagnose skin cancer more accurately than a certified dermatologist. Moreover, given an adequate training data set, the program can diagnose faster and more efficiently, raising questions about the need for labor-intensive medical training. Nevertheless, it remains important to understand the strengths, limitations, and ethical dilemmas associated with AI (Luxton, 2014a). Some believe that it will not take long before doctors become obsolete because of AI's command of machine learning, natural language analysis, and robotics, all of which apply in the field of medicine.
AI can integrate and analyze huge sets of clinical data, performing diagnosis and supporting the clinical decision-making process, alongside the personalization of medicine. The key example analyzed in the report is Peterson Center on Healthcare employing an AI-based diagnostic algorithm on mammograms to detect breast cancer. According to Rigby (2019), this could serve as a second opinion alongside the results provided by radiologists. Another area that raises numerous ethical challenges is virtual human avatars that could engage in important conversations directly affecting diagnosis and psychiatric treatment. The employment issue also extends to the physical realm, with robots, physical support systems, and manipulators helping to deliver telemedicine.
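To make the 'second opinion' idea concrete, the following is a minimal sketch in Python. It assumes a hypothetical classifier probability and a simple escalation rule; the case data, threshold, and workflow are illustrative assumptions, not a description of any system used at Peterson Center on Healthcare.

```python
# Illustrative sketch only: a hypothetical "second opinion" workflow in which an
# AI mammogram classifier's output is compared with a radiologist's reading.
# The model output, threshold, and case data are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class MammogramCase:
    case_id: str
    radiologist_positive: bool   # radiologist's reading: suspected malignancy
    ai_probability: float        # classifier's estimated probability of malignancy

def second_opinion(case: MammogramCase, threshold: float = 0.5) -> str:
    """Route a case based on agreement between the radiologist and the AI model."""
    ai_positive = case.ai_probability >= threshold
    if ai_positive == case.radiologist_positive:
        return "agreement: proceed with the radiologist's plan"
    # Disagreement is escalated to a human reviewer rather than overriding either party.
    return "disagreement: escalate to a second radiologist for review"

print(second_opinion(MammogramCase("C-001", radiologist_positive=False, ai_probability=0.82)))
```

The design choice worth noting is that disagreement triggers human review; the algorithm complements the radiologist rather than replacing the final decision.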
The health careers most affected include radiologists and pathologists, because the major AI breakthroughs are in imaging analytics and diagnosis. Using the Peterson healthcare facility as a case study, the hospital already has too few radiologists, surgeons, primary care providers, and pathologists, which raises the question of whether AI should replace healthcare staff at all. The US has a shortage of physicians, particularly in rural areas, and the situation is even worse in developing countries (Rigby, 2019). Therefore, AI technology will help physicians meet the demands of high caseloads and manage complex patients, especially in the US, where the aging Baby Boomer population will require better healthcare.
Similarly, AI could help manage burnout among physicians, nurses, and care providers who might otherwise reduce their working hours or retire early. Automating routine tasks that consume healthcare providers' time, such as reading CT scans, could free physicians to handle complex challenges among patients. AI could blend human experience and digital automation, with the two working together to enhance healthcare delivery (Luxton, 2014a). AI can handle tasks beyond human ability, among them distilling gigabytes of raw data from different sources into a single coded risk score for each patient. However, whether the rush to build robots that engage with patients' families will pay off remains to be seen.
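As a rough illustration of how data from several sources could be distilled into one coded risk score, the sketch below combines pre-normalized inputs with a weighted sum. The feature names and weights are hypothetical; a real system would learn its weights from validated clinical data.

```python
# Illustrative sketch only: combining data from several sources into a single
# patient risk score. Feature names and weights are hypothetical assumptions.

def risk_score(features: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of normalized features, clipped to the range [0, 1]."""
    score = sum(weights.get(name, 0.0) * value for name, value in features.items())
    return max(0.0, min(1.0, score))

patient = {                     # each value already normalized to [0, 1] upstream
    "lab_abnormality": 0.7,     # e.g. from the lab information system
    "vitals_instability": 0.4,  # e.g. from bedside monitors
    "history_burden": 0.6,      # e.g. from the EHR problem list
}
weights = {"lab_abnormality": 0.5, "vitals_instability": 0.3, "history_burden": 0.2}

print(f"risk score: {risk_score(patient, weights):.2f}")
```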
As a powerful technology, AI has raised major ethical issues surrounding safety and privacy, driven by weaknesses in the AI policies and ethical guidelines meant to advance the healthcare field. The report established that the medical community lacks information related to patient safety and protection. A major emerging issue is the added risk to patient privacy and confidentiality, which blurs the boundary between healthcare professionals and the role machines play in patient care (Rigby, 2019). Addressing privacy issues is important for changing the education of future doctors so that they adopt a proactive approach in medical practice. Artificial intelligence algorithms require access to huge datasets during training and validation (Bresnick, 2018). Exchanging these huge data sets between different applications exposes many healthcare companies to financial, reputational, and data breaches likely to affect their operations.
Healthcare institutions must guard their data assets by using HIPAA-compliant applications, because of the increasing incidence of ransomware and cyber-attacks. According to Rigby (2019), many companies, including Peterson Center on Healthcare, remain reluctant to share and move patient data freely outside their systems, and storing huge data sets in a single location turns the repository into an attractive target. One solution could be the development of blockchain technology to protect personally identifiable information across the many data sets (Juneja, 2019). Many consumers, however, remain doubtful about the technology, with a 2018 survey by SAS showing that less than 40% of patients were confident that their personal healthcare data managed under AI is safe and secure (Rigby, 2019).
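One small building block of protecting identifiable data before it leaves a source system is pseudonymization. The sketch below is a minimal, assumed example using a keyed hash; it is not a blockchain implementation and does not by itself make a system HIPAA compliant.

```python
# Illustrative sketch only: pseudonymizing a record before it is shared for
# algorithm training, so records can be linked across datasets without exposing
# the original identifier. The secret key and record fields are assumptions.

import hashlib
import hmac

SECRET_KEY = b"replace-with-a-managed-secret"   # assumption: held in a key vault, never shared

def pseudonymize_id(patient_id: str) -> str:
    """Replace a direct identifier with a keyed hash of the original ID."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

record = {"patient_id": "MRN-48213", "age": 67, "diagnosis_code": "C50.9"}
shared = {**record, "patient_id": pseudonymize_id(record["patient_id"])}
print(shared)
```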
A 2013 report on the privacy of patients' information found that about 90% of patients should remain vigilant about how healthcare companies handle their data, because analytics processes have the potential to compromise their privacy rights. Security and privacy remain paramount; hence, all affected stakeholders need to become familiar with the challenges and opportunities of data sharing to help AI grow and become part of the IT ecosystem. Data scientists require clean, precise metadata and multifaceted data to ensure AI algorithms identify important red flags and provide meaningful results.
It is assumed that AI is better than humans because it reduces the human medical errors that harm and kill patients. However, there is a possibility that smart machines could cause their own medical errors, leaving a major dilemma over who is accountable. Google has conducted research to establish whether machines have the capacity to support decision-making in a healthcare setting by predicting future events (Juneja, 2019). When a doctor misdiagnoses a patient, the hospital, the doctor, and the doctor's association remain accountable. As the technology advances, if humans cannot interpret the decision-making process of computers, a trust barrier emerges. A machine should be free of errors and biases before being trusted, including the gender and race biases that can be built into an algorithm and must be filtered out.
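A simple way to check for the kind of bias described above is to compare an algorithm's error rate across demographic groups before it is trusted in clinical use. The sketch below is a minimal audit under assumed, fabricated sample data.

```python
# Illustrative sketch only: comparing an algorithm's error rate across
# demographic groups. The groups, predictions, and outcomes are fabricated.

from collections import defaultdict

def error_rate_by_group(records: list[dict]) -> dict[str, float]:
    """records: each record has 'group', 'predicted', and 'actual' fields."""
    errors, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        if r["predicted"] != r["actual"]:
            errors[r["group"]] += 1
    return {group: errors[group] / totals[group] for group in totals}

sample = [
    {"group": "female", "predicted": 1, "actual": 1},
    {"group": "female", "predicted": 0, "actual": 1},
    {"group": "male",   "predicted": 1, "actual": 1},
    {"group": "male",   "predicted": 1, "actual": 1},
]
print(error_rate_by_group(sample))   # a large gap between groups warrants investigation
```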
Without a doubt, Peterson Center on Healthcare must adopt AI and the associated technology, but it should first hold a dialogue on ways to improve patient and doctor understanding of the role AI plays in the sector. This would help stakeholders develop a realistic comprehension of AI, identify potential pitfalls and solutions, and provide policy recommendations on the benefits of AI. A major strategy is balancing the benefits and challenges associated with AI technology (Luxton, 2014a). The benefits include enhanced quality and efficiency of healthcare, but the risks must be reduced, especially threats of job losses and of compromised privacy and confidentiality of patient information (Peek et al., 2015). Other issues to address include informed consent from patients and their autonomy.
The institution must remain flexible when adopting AI technology, by first using it as a complementary tool rather than as a replacement for healthcare professionals. Users' technical and professional expertise remains essential in interpreting AI test results and identifying ethical dilemmas. The company could learn from the use of IBM Watson, an important clinical decision support tool that has helped the health industry understand the limitations and benefits of AI (Luxton, 2014b). Proper informed consent requires transparency, hence the need to ensure that the adopted systems provide it to stakeholders.
It is important to address the legal and health implications of AI, including issues such as medical malpractice, product ownership, and the liability that emerges when the health institution uses 'black-box' algorithms. This matters because users cannot offer a logical explanation of how such algorithms arrive at a particular output. Currently, there is also a policy gap in the governance and safeguarding of patient photographic images, including their use in facial recognition technology, which threatens informed consent and data security.
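To contrast the 'black-box' problem with a transparent alternative, the sketch below shows the kind of per-feature breakdown a simple linear scoring model can provide for any prediction. The feature names and weights are hypothetical and serve only to illustrate what an explanation could look like.

```python
# Illustrative sketch only: per-feature contributions for a simple linear risk
# model, the kind of explanation a 'black-box' algorithm cannot provide directly.
# Feature names and weights are hypothetical assumptions.

def explain_prediction(features: dict[str, float], weights: dict[str, float]) -> list[tuple[str, float]]:
    """Return each feature's contribution to the score, largest magnitude first."""
    contributions = [(name, weights.get(name, 0.0) * value) for name, value in features.items()]
    return sorted(contributions, key=lambda item: abs(item[1]), reverse=True)

features = {"tumor_density": 0.8, "patient_age": 0.5, "family_history": 1.0}
weights = {"tumor_density": 0.6, "patient_age": 0.1, "family_history": 0.3}

for name, contribution in explain_prediction(features, weights):
    print(f"{name}: {contribution:+.2f}")
```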