Many experts in the field believe that artificial intelligence (AI) can significantly transform the work of the Intelligence Community (IC). However, AI is not able to learn without human intervention, and specialists will have to make efforts to obtain and clean up data, compile classifications, and train both machines and employees (Weisgerber, 2018). The purpose of this paper is to discuss what management challenges may be anticipated in infusing new technologies into the Intelligence analysis process and to recommend management approaches for integrating technologies into the IC's work.
Challenges and Management Approaches
The main difficulty in applying such technologies lies in establishing their cost-effectiveness. In turn, the feasibility of introducing new technologies is determined by the impact of the final results and by the costs of developing and testing AI technologies as applied to the Intelligence analysis process. Justifying the resources spent during these processes is another challenge (Jarmon, 2020). When developing innovative solutions, the mistakes and forced repetitions that accompany such a process are also counted as spent resources, since they divert the IC's cognitive resource (the workforce), which is then used less productively.
Moreover, project management is inseparable from the active investment of financial resources, since only the combination of these two processes ensures the target effect of developing a new technological solution. Resistance to change is another challenge to overcome when incorporating AI into analysis processes (Jarmon, 2020). In particular, the staff operating the tools will need to undergo intensive training, which may provoke objections from the IC's workforce.
It is impossible to determine in advance which specific approaches will be most effective, since that depends on the type and form of artificial intelligence being introduced. Before applying a new system, management needs to understand how it works, what operational tasks it will perform, and in which operating environments it will be used. In particular, it is essential to ensure that the program provides an understandable decision-making procedure that departmental specialists can verify (Scharre & Horowitz, 2018).
With the support of a trained workforce, management needs to consider how the desired results of the software will be achieved, especially in the case of machine learning. To gain confidence in the results, management needs to ensure the transparency of the applied approaches and procedures. However, to accomplish this task, it will have to find a compromise among transparency in decision-making, system performance, and functionality.
Apart from that, management should make sure the goals of infusing new AI technologies into the Intelligence analysis process are in line with the IC's strategy. Insights should then be passed to technology designers and team managers to make sure they are incorporated into the tools and processes (Allen & Chan, 2017). With the right tools and a clear strategy in place, it will be easier to educate the workforce on new approaches and minimize resistance to change.
Concluding Points
Thus, it can be concluded that the success of introducing new technologies depends not only on the usability of the selected tools but also on strategically sound management approaches. With consistent integration, artificial intelligence can become a constructive force that resolves operational problems associated with the underdevelopment of technological processes in the IC. For this reason, it is necessary to prepare the workforce properly for this infusion and to offer a clear vision and action plan so that the introduction of artificial intelligence is not inhibited.
References
Allen, G., & Chan, T. (2017). Artificial intelligence and national security. Belfer Center for Science and International Affairs.
Jarmon, J. A. (2020). The new era in U.S. national security: Challenges of the information age (2nd ed.). New York, NY: Rowman & Littlefield.
Scharre, P., & Horowitz, M. C. (2018). Artificial intelligence: What every policymaker needs to know. Washington, DC: Center for a New American Security.
The growing number of people living on Earth has produced alarming projections associated with many challenges. For instance, the magnitude of population growth could seriously strain the food production industry. Because of this, current UN directives anticipate the need to develop food production and enhance agriculture across the globe. Such initiatives are expected to feed the anticipated population without shortages until 2050 (Wolfert et al., 2017). One of the key technologies expected to support these improvements is smart farming. It is based on artificial intelligence (AI) and may be expected to facilitate most agricultural processes, making it much easier to implement technological solutions and collect crops of better quality.
Owing to the development of the smart farming concept and precision agriculture, farmers all over the world have gained the chance to implement digital technology in their daily operations and to use AI to support some of the most important agricultural activities. The number of handheld agricultural tools is quickly decreasing, creating more room for a new industrial revolution that will move agriculture forward and contribute to a fundamental shift in how farmers view their industry (Wolfert et al., 2017). The current paper reviews the existing evidence on why smart farming is beneficial and how new technologies could be used to support farmers. The implications of smart farming and future research directions are also addressed to outline the forthcoming trends in AI-driven agriculture.
Background
To start with, the whole concept of smart farming is based on several technologies developed in response to growing demand in the agriculture industry. AI-based smart farming relies on multiple sensors that can read and process information on humidity, soil condition, water levels, and temperature (Walter et al., 2017). Farmers may also use smart technologies to gain more insight through networking and GPS tracking. In addition, there are multiple IoT-driven solutions that might include (but are not limited to) automated tools, robotics, and many other specific hardware and software tools. Speaking of software, smart farming benefits substantially from data analytics, which allows farmers to predict and monitor climate change, crop yields, weather data, and other variables that are vital to the farming industry and agriculture in general (Bhange & Hingoliwala, 2015). The entire field can be easily assessed by drones and satellites that track the region and collect relevant data without major human intervention.
Given that agricultural work is quickly moving into a digital framework where most tasks can be completed remotely, the advent of machine-to-machine (M2M) data collection becomes even more critical (Sa et al., 2017). The decision-making systems available to farmers are easily populated with data from the fields and offer a great degree of detail. With the help of new technologies, farmers can pick the best strategy when adapting their measures to the field, increasing the efficacy of fertilizers and pesticides (Walter et al., 2017). More sensible utilization of these instruments promotes the use of smart farming techniques and makes AI-based systems a vital element of agricultural strategies, as they improve the condition of the field and help farmers track herd health in real time.
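The sensor-to-decision pipeline described above can be sketched in a few lines. The zone names, moisture threshold, and heat cutoff below are illustrative assumptions, not values from any cited system:

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    """One reading from a hypothetical in-field sensor node."""
    zone: str
    soil_moisture: float  # volumetric water content, 0.0-1.0 (assumed scale)
    temperature_c: float

def zones_to_irrigate(readings, moisture_threshold=0.25, max_temp_c=35.0):
    """Flag zones whose soil is too dry, skipping readings taken during
    extreme heat (watering then is wasteful due to evaporation)."""
    flagged = set()
    for r in readings:
        if r.soil_moisture < moisture_threshold and r.temperature_c <= max_temp_c:
            flagged.add(r.zone)
    return sorted(flagged)

readings = [
    SensorReading("north", 0.18, 24.0),  # dry -> irrigate
    SensorReading("south", 0.32, 26.0),  # moist enough -> skip
    SensorReading("east", 0.15, 38.0),   # dry, but too hot right now -> skip
]
print(zones_to_irrigate(readings))  # ['north']
```

A real decision-support system would replace the fixed thresholds with crop- and soil-specific models, but the structure (readings in, targeted actions out) is the same.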
Details & Description
Precision agriculture and smart farming have become two essential contributors to the popularization of digitalized agrarian science. Existing farming practices were significantly enhanced with such technologies as driverless tractors, non-human planting and seeding, automated irrigation, remote crop maintenance, and drone-based crop and herd tracking (Pivoto et al., 2018). The reduction of human error has increased product quality in the agricultural sector and improved production efficiency to a certain extent. Another evident consequence of smart farming being implemented more often is the growing quality of life among farmers, who no longer have to complete endless heavy and monotonous tasks. Digital technologies are currently changing the image of farming and creating more opportunities for farmers to look after crop yields and animal health more vigilantly (Eastwood et al., 2019). With the help of smart farming, experts in the field of agriculture are recurrently addressing labor issues, climate change, and population growth.
The advent of real-time monitoring and analysis technologies has created multiple benefits for farmers. Practically any element of agriculture can be translated into the digital environment with no actual losses, which makes the new industrial revolution a significant trend that cannot be overlooked on the way to agricultural initiatives led entirely by technology with minimal human intervention (Bronson, 2018). Based on the existing evidence, it may be concluded that there are three large pillars of smart farming that have to be nurtured to gain access to even more benefits: (a) the Internet of Things, (b) autonomous robots, and (c) drones (O'Grady & O'Hare, 2017). Each of these categories significantly contributes to the transformation of farming activities, where agriculturalists get a chance to gain more digital knowledge and monitor their assets remotely.
Methodology of Implementing AI in Farming
In order to implement AI in farming, experts in agriculture have to analyze their ground data and then find the best ways to analyze weather conditions and additional sensors in real time. To make the best use of AI in farming, these experts have to possess extensive knowledge of these technologies and appreciate the value of access to soil conditions and the other inputs to informed decisions (Andrewartha et al., 2015). Additionally, the implementation of AI technologies should be performed with the primary intention of optimizing planning procedures. In this case, experts will have to determine the right crop choice and pre-plan the utilization of all available resources. With improvements in harvest quality being the main idea behind the implementation of AI systems, experts have to be as precise in their actions as possible to protect automated systems from human error (Xin & Zazueta, 2016). Accordingly, AI sensors will then serve as scouts that help farmers find diseases in plants and make informed decisions about which herbicides or pesticides to use.
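As a toy illustration of how sensor data might feed the disease scouting described above, the following heuristic flags fungal-disease risk from leaf wetness and temperature. The thresholds are invented for illustration only; real decision-support systems use calibrated, crop-specific models:

```python
def disease_risk(leaf_wetness_hours, temp_c):
    """Rough heuristic: many fungal pathogens thrive when leaves stay wet
    for long periods at moderate temperatures. Returns 'high', 'moderate',
    or 'low'. All thresholds are illustrative assumptions."""
    if leaf_wetness_hours >= 10 and 15 <= temp_c <= 28:
        return "high"
    if leaf_wetness_hours >= 6 and 10 <= temp_c <= 32:
        return "moderate"
    return "low"

# A farmer's dashboard could apply this per zone and alert only on 'high'.
print(disease_risk(12, 20))  # high
print(disease_risk(2, 20))   # low
```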
Another essential element of AI implementation is the willingness to overcome the labor challenge. Even though many farms are going through a period of severe workforce shortage, experts should still contribute to the development and deployment of AI-based farming to achieve more significant results (Xin & Zazueta, 2016). The trend to watch, in this case, is the decreasing number of seasonal farmworkers. The number of workers will go down due to automated crop harvesting and other operations that were previously completed by human employees. When implementing AI in agriculture, stakeholders should carefully pick the most suitable employees with the required competencies in order to limit the shortage of job positions and preserve the value of human contribution. One more potentially important element of AI implementation is chatbots, which can assist farmers in two ways. Experts will be able to provide their apprentices with recommendations and answer their questions while also gaining insight into specific farm issues in real time (Ravazzani et al., 2017). Smart farming initiatives can be implemented at farms of any size, leaving room for additional improvements.
Implications of AI in Smart Farming
Owing to the controversial nature of smart farming, the use of AI in agriculture creates both positive and negative implications. The most important thing about utilizing smart farming technologies is that they open the prospect of soil sensing. This means that farmers' fields can be easily tested for various nutritious constituents, the condition of irrigation channels, or even the health of the crop (Rose & Chilvers, 2018). This information can be accessed in real time, allowing farmers to make decisions based on their current status and available equipment. Another favorable implication of utilizing AI in farming is that necessary resources can be conserved promptly. A smart farming system applies the required amount of water and fertilizer only to the areas that need it, averting potential human error. The usage of intelligent farming can be deemed a yield-maximizing initiative that draws on valuable information about practically anything from humidity and soil conditions to environmental temperature and precipitation predictions (Rose & Chilvers, 2018). The implementation of AI in farming helps agriculturalists reduce the usage of electricity and pay more attention to data collection and wireless monitoring instead.
Nonetheless, there are also negative implications related to the application of AI to farming procedures. The biggest issue related to smart farming and its derivatives is the necessity of a high-quality, uninterrupted connection to the Internet (Ahmed et al., 2018). This puts many rural communities at a severe disadvantage, especially when a farm is located in a developing country. Mass crop production in developing countries would require major investment due to the potential installation of tens or even hundreds of thousands of sensors. Without such investment, AI-based systems would be inoperable or prohibitively expensive.
On the other hand, the implementation of AI requires the local community to have exceedingly strong knowledge of ICT and robotics (Schonfeld et al., 2018). A lack of precision and technical skill would make smart farming a useless but costly asset. To conclude, the lack of expertise might be the first item on the list of discouraging factors that slow down the implementation of smart farming across the globe.
Future Research
With all the advancements in the area of smart farming, it may be safe to say that even more improvements are going to impress the farming world in the future. Drones, robots, and tracking technologies are just precursors of what farmers could benefit from. This places a serious burden on the shoulders of smart farming researchers, who will have to investigate the newest trends in the field and ensure that fresh ideas are implemented as soon as possible. One such direction is blockchain technology operating on top of the Internet of Things (Ahmed et al., 2018; Pivoto et al., 2018). Multiple data sets on crops could be transferred simultaneously while being properly encrypted against potential hacker attacks. The lack of research on blockchain is a gap that has to be addressed by experts in smart farming.
Another weak area of smart agriculture that has yet to be strengthened by additional research is the use of sensors. New sensors could also be based on blockchain, allowing farmers to identify pH levels and sugar content (Bronson, 2018; Rose & Chilvers, 2018). As the population grows, farmers will have to install many new sensors to gain more control over their crops and ensure that all data points are efficiently processed by automated AI-based systems. The use of drones in agriculture has not been fully studied either. Some of the potential benefits of introducing drones to farming include improved spraying techniques and greater control over crops through remote decision-making.
Conclusion
As a relatively undeveloped branch of agricultural science, AI-based instruments and smart farming in general can be considered the most viable path to continuous advancement. All the existing improvements in the area of smart farming show that the popularization of technologies has had a positive influence on agricultural activities, helping farmers from all over the world save time and money when tracking the health of their crops and herds. Farmers are now free to use different sensors and the Internet of Things to collect all types of data and improve irrigation, planting procedures, or temperature management without even visiting the field in person. The increasing accessibility of intelligent software and hardware makes it reasonable to assume that the future of farming depends on the digitalization of its major processes and the advent of new technologies that will help farmers gain even deeper insights into their assets. Water and fertilizers are essential resources that have to be conserved by farmers, and the use of AI in smart farming could be the shortest pathway to proper agricultural maintenance of available inventory.
On the other hand, smart farming may be helpful in terms of protecting the environment from the negative impact of human activities. Predictive techniques included in AI-based instruments will help farmers from all over the world keep their fields and herds in order and collect vital data from thousands of sensors in real time. Nonetheless, there is also a need for constant research in the area, which would improve the existing techniques and produce new ones. In turn, this would facilitate farming practices and help farmers ensure that difficult tasks are completed remotely, with the help of software and hardware that run on AI. As the current evidence shows, smart farming requires substantial investment and a lot of persistence. There is no way to develop smart farming other than to build more unique sensors and deploy additional preventive agrarian techniques. The current paper demonstrates the need to implement more elements of smart agriculture on conventional farms to increase their effectiveness and protect the environment.
References
Ahmed, N., De, D., & Hussain, I. (2018). Internet of Things (IoT) for smart precision agriculture and farming in rural areas. IEEE Internet of Things Journal, 5(6), 4890-4899.
Andrewartha, S. J., Elliott, N. G., McCulloch, J. W., & Frappell, P. B. (2015). Aquaculture sentinels: Smart-farming with biosensor equipped stock. Journal of Aquaculture Research & Development, 7(1), 1-4.
Bhange, M., & Hingoliwala, H. A. (2015). Smart farming: Pomegranate disease detection using image processing. Procedia Computer Science, 58, 280-288.
Bronson, K. (2018). Smart farming: Including rights holders for responsible agricultural innovation. Technology Innovation Management Review, 8(2), 7-14.
Eastwood, C., Klerkx, L., Ayre, M., & Rue, B. D. (2019). Managing socio-ethical challenges in the development of smart farming: From a fragmented to a comprehensive approach for responsible research and innovation. Journal of Agricultural and Environmental Ethics, 32(5-6), 741-768.
O'Grady, M. J., & O'Hare, G. M. (2017). Modelling the smart farm. Information Processing in Agriculture, 4(3), 179-187.
Pivoto, D., Waquil, P. D., Talamini, E., Finocchio, C. P. S., Dalla Corte, V. F., & de Vargas Mores, G. (2018). Scientific development of smart farming technologies and their application in Brazil. Information Processing in Agriculture, 5(1), 21-32.
Ravazzani, G., Corbari, C., Ceppi, A., Feki, M., Mancini, M., Ferrari, F., & De Vecchi, D. (2017). From (cyber) space to ground: New technologies for smart farming. Hydrology Research, 48(3), 656-672.
Rose, D. C., & Chilvers, J. (2018). Agriculture 4.0: Broadening responsible innovation in an era of smart farming. Frontiers in Sustainable Food Systems, 2, 87-94.
Sa, I., Chen, Z., Popovic, M., Khanna, R., Liebisch, F., Nieto, J., & Siegwart, R. (2017). WeedNet: Dense semantic weed classification using multispectral images and MAV for smart farming. IEEE Robotics and Automation Letters, 3(1), 588-595.
Schonfeld, M. V., Heil, R., & Bittner, L. (2018). Big data on a farm: Smart farming. Big Data in Context, 109-120.
Walter, A., Finger, R., Huber, R., & Buchmann, N. (2017). Opinion: Smart farming is key to developing sustainable agriculture. Proceedings of the National Academy of Sciences, 114(24), 6148-6150.
Wolfert, S., Ge, L., Verdouw, C., & Bogaardt, M. J. (2017). Big data in smart farming: A review. Agricultural Systems, 153, 69-80.
Xin, J., & Zazueta, F. (2016). Technology trends in ICT: Towards data-driven, farmer-centered and knowledge-based hybrid cloud architectures for smart farming. Agricultural Engineering International: CIGR Journal, 18(4), 275-279.
Artificial intelligence (AI) is an ever-growing technology that allows web users to access information and simplify daily life using elaborate algorithms. What was once thought to be science fiction about the role of cybernetics is now an inevitable reality. AI-powered innovations have changed vehicles, devices, and other equipment in every sphere of human activity, but most importantly, they have influenced the digital world (Abou-Zahra et al., 2018). They impacted web developers and their users by providing a range of products and services that ease net surfing. In particular, artificial intelligence made the web accessible to impaired people by enabling summarization, image recognition, and voice recognition. Even though AI technologies can make the web more accessible to disabled people with the help of assistive technologies, they also have several imperfections.
An essential part of web accessibility is the adaptability of content to the needs and preferences of individual users. This can involve significant visual changes to the content, such as changing the text style, size, and division, to make the content more fundamentally customized. In particular, progress in natural language processing provides several examples of how artificial intelligence can support such changes in content. AI can infer people's needs and preferences and adapt content to them. Nonetheless, the web should also suit disabled people's needs; therefore, AI helps them access the net through special assistive technologies.
Primarily, AI-based language recognition technologies allow website users to translate texts and see captions and subtitles. Many leading companies have created platforms for improving captioning and translation because such an approach helps disabled people receive information (Wolf, 2020). Moreover, speech recognition algorithms empower deaf or hard-of-hearing people to use networks adjusted to their needs. Language recognition tools also help eliminate grammatical, punctuation, and semantic mistakes in text, allowing users to sound more literate. Some innovative organizations also create different interpretations of the language and subtitles of disabled users' answers. As part of its goal of creating a more inclusive organization, Microsoft has made Microsoft Translator, an AI-based communication tool for deaf and hard-of-hearing people. The design still has some flaws, including wrong translations and incorrectly inserted subtitles; nevertheless, it creates numerous opportunities for impaired people to perceive text.
Another point concerns automatic image recognition in social networks such as Instagram and Facebook. The technology was implemented to help blind or partially sighted people understand the content of the images presented. For instance, Google developed an algorithm that lets disabled users recognize images and differentiate objects in them; it also sorts pictures to fall under the safe-search category (Thompson, 2018). What is more, this innovation makes it possible to describe photos to visually impaired web users. Similarly, Facebook launched such a tool, powered by neural networks (Thompson, 2018). Besides, image identification has been utilized in various domains and has received much attention due to the accuracy of its algorithms. As a result, multiple visual databases use this tool to organize images automatically. Although the technology may be imprecise, given its poor recognition of blurred and group photos, it provides people with an incredible opportunity to identify the content of pictures and keep them in order.
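Automatic alt-text systems of the kind described above typically turn a classifier's detected labels into a short description, suppressing low-confidence labels. The following minimal sketch assumes a hypothetical upstream classifier that returns (label, confidence) pairs; the threshold and phrasing are illustrative only:

```python
def alt_text(detections, min_confidence=0.8):
    """Turn (label, confidence) pairs from some upstream image classifier
    into an alt-text string, keeping only confidently detected labels.
    The 0.8 cutoff is an assumed value, not any vendor's setting."""
    labels = [label for label, conf in detections if conf >= min_confidence]
    if not labels:
        return "Image content could not be identified."
    return "Image may contain: " + ", ".join(labels) + "."

# Low-confidence 'frisbee' is dropped rather than risking a wrong description.
print(alt_text([("dog", 0.97), ("grass", 0.91), ("frisbee", 0.42)]))
# Image may contain: dog, grass.
```

Suppressing uncertain labels is a deliberate design choice: for a screen-reader user, a confidently wrong description is worse than an incomplete one.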
Lip-reading algorithms were also created as part of the development of artificial intelligence. They allow people with hearing impairments to receive an instant interpretation. For example, Google's DeepMind developed a lip-reading system that analyzed more than 5,000 hours of various TV shows in different languages and tracked lip movements to decipher them (Morris, 2020). As a result, this technology provides real-time speech recognition and translation and decodes speech into text with high accuracy. The implementation still has several drawbacks, including poor recognition of foreign words and misinterpretation of similar-sounding words.
Finally, to improve users' experience of accessing websites, artificial intelligence has been applied to summarizing information from Internet sources. Even though the majority of websites contain videos and audio, text remains a critical component; however, impaired people find it hard to read large amounts of information. Therefore, AI-based instruments for text summarization were created to condense a voluminous article into a couple of paragraphs (Morris, 2020). For instance, this can help break down long and complicated information into several sections for blind and visually impaired users. The technology identifies the proper words for compiling and producing an accurate summary. For example, the widely recognized AI-based Salesforce model uses the most innovative tools to transmit critical information. Moreover, summarization helps people with cognitive issues because it can explain complicated phenomena in simple words without losing the main idea. In general, information summarization is an effective method of perceiving and learning new ideas and facts, although such tools still struggle with summarizing quantitative research.
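To make the idea of summarization concrete, here is a deliberately naive extractive summarizer that scores sentences by the frequency of their content words. Production accessibility tools rely on far more capable neural abstractive models; this sketch only illustrates the principle of selecting the most representative sentences:

```python
import re
from collections import Counter

# A tiny illustrative stopword list; real systems use much larger ones.
STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "it", "that", "for"}

def summarize(text, n_sentences=2):
    """Naive extractive summarizer: score each sentence by the average
    corpus frequency of its non-stopword terms, keep the top n sentences
    in their original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(s):
        terms = [w for w in re.findall(r"[a-z']+", s.lower()) if w not in STOPWORDS]
        return sum(freq[w] for w in terms) / (len(terms) or 1)

    ranked = sorted(sentences, key=score, reverse=True)[:n_sentences]
    return " ".join(s for s in sentences if s in ranked)

text = ("Smart farming uses sensors. Sensors feed data to farming models. "
        "The weather was nice.")
print(summarize(text, 2))
# Smart farming uses sensors. Sensors feed data to farming models.
```

The off-topic sentence is dropped because its words rarely recur in the text, which is the frequency heuristic at work.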
To conclude, it seems reasonable to state that artificial intelligence has drastically changed impaired people's lives by providing access to multiple technologies, especially the digitally advanced world. Primarily, artificial intelligence has helped translate websites and produce subtitles for videos and recordings so that users can see or hear additional information. AI-based innovations allow impaired or partially disabled users to recognize the content of images and evaluate them. Finally, it facilitates data perception by summarizing extensive articles and texts. However, this is just the beginning of the integration of innovative technologies into people's lives.
Morris, M. (2020). AI and accessibility. Communications of the ACM, 63(6), 35-37. Web.
Thompson, P. W. (2018). Artificial intelligence, advanced technology, and learning and teaching algebra. Research Issues in the Learning and Teaching of Algebra: The Research Agenda for Mathematics Education, 4, 135-161. Web.
Wolf, C. (2020). Democratizing AI? Experience and accessibility in the age of artificial intelligence. XRDS, 26(4), 12-15. Web.
The author's work is devoted to the role of artificial intelligence (AI) in human life:
He writes about the development of AI, especially noting how computer technology has caused a renaissance of influence on data processing through AI. The author's narrative is consistent and multidimensional; he builds gradually toward his conclusions.
He analyzes the theoretical aspects of artificial intelligence, mentioning its characteristics.
The writer says that AI technology is incapable of forming memories and discusses four existing varieties of AI.
Despite the high quality of the writing, it is necessary to pay attention to some of the essay's shortcomings.
A Reflective Reading Response
Reading the article made me think again about the importance of artificial intelligence in human life. Due to the large flow of information from countless sources, it is easy to focus on other high-tech inventions or concepts. Indeed, this would be an urgent problem if technology had initially been a subject of human interest. However, I enjoyed how the author approached the research topic: he outlined the application of the analyzed technology and expressed his position regarding the problems arising during the development of artificial intelligence. The creation of such essays awakens interest in technology, which is essential because, as the author notes in the final part of the essay, the scope of AI is broad.
Artificial intelligence aims to create technical systems capable of solving non-computational problems and performing actions that require the processing of meaningful information and are considered the prerogative of the human brain. I confess that many facts about AI were unknown to me before reading the article.
Assessment of the Paper
I agree with the author's position as expressed in the essay. In particular, I found it interesting that the author noted that autonomous weapons based on AI are dangerous and cause fear among members of society. Even though in 2021 many people are much more worried about, for example, environmental problems, they should not ignore the existence of military conflicts in different parts of the world. Moreover, interstate conflicts develop incredibly quickly; sometimes, one incident is enough to escalate aggression. Therefore, considering that we live in an era of technological progress, it is essential to keep military AI technologies under maximum control.
I did not find any severe logical errors in the text, but it seemed that the writer should have linked to a source in some fragments of the text. If those fragments are found not to be original, that is plagiarism: borrowing ideas without referencing them is not acceptable in academic writing. However, I detected semantic correspondence between the articles used and the author's paraphrased extracts from these texts. Indeed, Bakken claims that companies increasingly opt for AI in business decision-making (Bakken).
As noted by the author, Shabbir and Tarique provide information on the strategy for successfully implementing AI in companies and organizations worldwide (5). Among the work's disadvantages, I would include the use of terms without preliminary explanation and the limited applicability of the research results. The author himself notes that it will take a lot of time and many resources for society to benefit from the initiative.
The author presents the analyzed problems of using AI in terms of their benefits and harms, which reduces the likelihood of narrative bias; however, the storytelling has certain drawbacks. In particular, the author mentions how broad the scope of AI in business is and what a vast role AI will play in achieving progress. However, the writer does not indicate that business uses weak artificial intelligence, which can only solve narrowly technical problems using big data methods and machine learning algorithms. Strong artificial intelligence, in turn, suggests that computers can acquire the ability to think and to be aware of themselves as separate individuals. Someone might argue that weak AI alone is enough to solve business problems, but it seems that the essay should have established what state AI is in now.
Presumably, the development of technologies will bring humanity to the moment when AI receives the right to make decisions, including strategic ones. This process is facilitated by the fact that more and more of the collection and analysis of information is transferred to artificial intelligence. Therefore, it seems reasonable and logical for the writer to conclude that investing in AI in security agencies ensures that members of society are protected from unwanted security threats. AI allows one to assess the situation and make decisions faster. As the author rightly points out, the information collected and processed by AI must be translated into a human-readable format so that a person can correctly evaluate and comprehend it.
Suggestions for Changes
Concerning the use of terms in the text, the following should be noted. First, the author introduces the concept of Turing's question but does not first explain its meaning, which may confuse readers. If I were writing this work, I would describe its meaning to the readers beforehand, since it is evident that not everyone will understand it every time the term is used. I would point out that Turing's question refers to the empirical test suggested by Alan Turing. Someone could counter that the author explains the test's purpose in the following sentence; still, the test's essence and standard interpretation are left without attention, which interferes with the reader's perception of the text.
For a more precise understanding, the author should have mentioned other approaches to defining AI, since there is no single answer to what artificial intelligence is. However, nearly every author who writes a book on AI considers this phenomenon in the context of scientific achievement at the time of the book's creation. Among such approaches, one should mention the symbolic approach, which appeared first in the era of digital machines, or the logical one.
In the paper, the author identifies only three sections: exigence, positions, and evaluation. Dividing the text into specific semantic parts improves its perception and allows readers to understand in advance what will be discussed in each paragraph. However, in my opinion, it was worth dividing the positions section into additional parts, since this piece of text turned out to be quite voluminous compared to the others.
Thus, the first paragraph could be called "the computer's perception of the world and the scope of AI," and the second "the possibility of using AI initiatives." The heading of the third paragraph would relate to the use of AI in armed conflict, and the title of the fourth would tell readers that the section is devoted to the use of AI in business. However, this is just my assumption, and I do not exclude that the author's division of the text will seem sufficient to someone.
The artificial intelligence (AI) development process has direct implications for supply chain regulation. Given globalization and the multiplication of possible delivery routes, human resources or conventionally programmed systems cannot compete with the analyzing power of artificial intelligence. Even though AI technologies trace their history to the mid-20th century and the establishment of the Turing test, they began to be directly applied in supply chain processes only in the last decade (Baryannis et al., 2018). This analysis aims to measure the current impact of artificial intelligence on supply chain processes and to consider the prospects of AI development as a leading force in supply chain regulation.
Current Situation
The supply chain is one of the most important factors in the operation of the world economy, since it links valuable parts of almost all business processes in the world and delivers them to the markets where the final products are sold. Without a high level of supply chain functioning, the economy would experience significant stagnation due to the inability to assemble the final product or deliver it to the customer.
One of the most notable indicators of AI efficiency is a statistically proven increase in a company's profitability due to the organic transition from human resources to new analytical instruments. To improve communication between counterparties, new communication methods are being employed. For instance, companies installing virtual chatbots that customize and reflect the preferences of their customers produce a 10% greater return on equity and 10% more revenue than other companies from the survey pool (Modgil et al., 2021). Another important trend in AI integration is that, due to the digitalization of supply chains under the Industry 4.0 strategy, they are evolving into supply chain ecosystems, which are made up of interconnected businesses that coordinate operations and face similar adaptive difficulties.
A precise value proposition and a specified, dynamic collection of agents with various responsibilities describe this governance paradigm, with different roles such as producer, supplier, orchestrator, and complementor. As a result, AI-developing companies and intermediaries offering any form of new position in such a supply chain ecosystem face increasingly high demand for Industry 4.0 solutions that would integrate effectively into businesses' operating ecosystems (Hofmann et al., 2019). From a data harvesting perspective, artificial intelligence helps to observe not only individual characteristics, which is a common task for programmed applications and behavioral analytics, but also the pricing properties of different companies, such as corporations in the packaging and delivery sector.
For instance, Modgil et al. (2021) revealed that last-mile delivery is the costliest logistics step, accounting for roughly half of the total package delivery cost. Last but not least, today's businesses are becoming more globalized but also less vertically integrated, which increases the complexity of distribution networks and exposes them to far more risks (Calatayud et al., 2019). As a result, AI systems apply their technical advantage by scanning a large pool of possibilities to route goods from point A to point B with the shortest duration and the lowest regulatory or natural risks.
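The route-scanning capability described above can be illustrated with a minimal sketch. Assuming the distribution network is modeled as a graph whose edges carry a transit duration and a risk score (both the toy network and the cost function duration + λ·risk are hypothetical illustrations, not models from the cited studies), a classic shortest-path search picks the best route from point A to point B:

```python
import heapq

def best_route(graph, start, end, risk_weight=0.5):
    """Dijkstra's shortest-path search over edges scored by duration plus a
    risk penalty. `graph` maps node -> list of (neighbor, duration, risk)."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == end:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, duration, risk in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + duration + risk_weight * risk,
                                       neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical network: warehouse A to customer B via two hubs.
# Hub2 is faster but carries a much higher risk score.
network = {
    "A": [("Hub1", 9, 2.0), ("Hub2", 8, 6.0)],
    "Hub1": [("B", 5, 1.0)],
    "Hub2": [("B", 5, 1.0)],
}
cost, route = best_route(network, "A", "B")  # picks A -> Hub1 -> B, cost 15.5
```

The risk weight is the design lever here: raising it makes the search trade longer transit times for safer lanes, which mirrors the duration-versus-risk balancing described in the text.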
The Future Implementation of Artificial Intelligence into Supply Chain Functionality
When it comes to the possibilities of future implementation of AI technologies into supply chain operating activities, it is critical to focus on those aspects that have significant exposure to the industry's development. In fact, there are five especially effective methods of organic AI application to supply chain functionality. First and foremost, a major part of the technological potential might be realized through the application of inventory planning utilities. More specifically, customization can be fully performed through AI technologies, which significantly helps determine future buying patterns, whether large purchases are required in advance to retain stock, or whether the current inventory capacity is maintained properly (Modgil et al., 2021). Secondly, internet commerce played a significant role during the supply chain disruptions of the global pandemic in 2020 and 2021. Despite the partial stagnation of global production, customers began ordering packages on internet marketplaces, which has disclosed another important niche for supply chain functioning (Modgil et al., 2021). More specifically, AI might be utilized to discover local suppliers, as many goods do not require the use of worldwide vendors.
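As a minimal illustration of the inventory-planning idea, the sketch below uses simple exponential smoothing as a deliberately simple stand-in for the ML forecasting models discussed in the text; the SKU history, lead time, and safety factor are all hypothetical:

```python
def smooth_demand(history, alpha=0.3):
    """Exponentially smoothed demand estimate over a sales history."""
    level = history[0]
    for demand in history[1:]:
        level = alpha * demand + (1 - alpha) * level
    return level

def reorder_decision(history, on_hand, lead_time_periods, safety_factor=1.2):
    """Order if projected demand over the supplier lead time, padded by a
    safety factor, exceeds current stock. Returns (should_order, quantity)."""
    forecast = smooth_demand(history)
    needed = forecast * lead_time_periods * safety_factor
    if on_hand >= needed:
        return False, 0
    return True, round(needed - on_hand)

# Hypothetical weekly sales for one SKU, trending upward:
weekly_sales = [100, 120, 90, 130, 140, 150]
order, quantity = reorder_decision(weekly_sales, on_hand=200,
                                   lead_time_periods=2)
# The rising trend pushes the smoothed forecast above on-hand stock,
# so the sketch recommends a large advance purchase.
```

A production planner would replace the smoothing step with a trained model, but the decision structure — forecast, pad for lead time and safety, compare to stock — is the same.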
In addition, artificial intelligence assists in creating an effective and robust supply chain from local suppliers through vendor management solutions, such as credit management or vendor evaluation. As a result, this operating function could be attached to software analysis ecosystems to reinforce the synergy effect of increased efficiency. Thirdly, artificial intelligence might be utilized to streamline routine operations, such as package tracking. In fact, during high-intensity delivery periods, many companies face significant issues with completing final package distribution before the promised time. In many cases, customers experience delivery delays and begin tracking their orders to understand the current situation. Here, standard programmable software cannot demonstrate consistent operating success due to unstable information updates.
However, artificial intelligence technologies might execute this routine operation with relatively higher efficiency. Such cutting-edge technology would estimate variations in shipment delivery time and, based on trend research, preview the amount of risk connected with a cargo. Fourthly, artificial intelligence technologies would certainly advance the quality of risk management in supply chain functioning. The adoption of AI developments is based on their capacity to evaluate data, find the exact source of risk, and promote transparency among supply chain partners. These functions might be very useful in detecting the risk associated with various intersections of the distribution network and providing effective and timely remedies (Modgil et al., 2021). Last but not least, AI's capacity to execute numerous experiments and calculate possible outcomes would influence the process of market analysis.
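The delivery-time estimation step can be sketched as follows, assuming historical transit times for a shipping lane and a normal-distribution approximation for the upper estimate (the transit history and the promised window are hypothetical):

```python
from statistics import mean, stdev

def shipment_risk(transit_times, promised_days, z=1.65):
    """Estimate delivery-time variation from historical transit times and
    flag lanes whose upper estimate (mean + z * stdev, roughly the 95th
    percentile under a normality assumption) exceeds the promised window."""
    mu = mean(transit_times)
    sigma = stdev(transit_times)
    upper = mu + z * sigma
    return {"expected": mu, "upper_estimate": upper,
            "at_risk": upper > promised_days}

# Hypothetical transit history (days) for one lane, including one outlier:
history = [3.0, 3.5, 4.0, 3.2, 5.1, 3.8]
result = shipment_risk(history, promised_days=4)
# The outlier widens the spread, so the lane is flagged as at risk
# even though the mean transit time is under four days.
```

The point of the sketch is that variation, not just the average, drives the risk flag — which is why the text emphasizes trend research over static estimates.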
For example, sophisticated analytics using AI may be used to anticipate future outcomes and market tendencies. At the same time, data may be examined using unrivaled computational capacity to forecast future demand or better understand client purchasing habits. In today's world, it is crucial not only to anticipate the changes but also to define the specific applications of AI technology in supply chain functioning in order to benefit from its development.
Conclusion
To summarize, artificial intelligence is one of the most powerful tools for supply chain development, and its opportunities are already partially realized. From the current perspective, artificial intelligence helps to understand the functioning of standard operations and to define certain quantitative measures based on customers' expected behavior. However, its full potential is currently unrealized, which makes artificial intelligence a promising domain for developing supply chain risk management, routine operations execution, vendor performance analysis, package tracking, and the estimation of market behavior.
Calatayud, A., Mangan, J., & Christopher, M. (2019). The self-thinking supply chain. Supply Chain Management: An International Journal, 24(1), 22–38. Web.
Modgil, S., Singh, R. K., & Hannibal, C. (2021). Artificial intelligence for supply chain resilience: Learning from Covid-19. The International Journal of Logistics Management. Web.
Artificial intelligence (AI) and machine learning (ML) have evolved rapidly over the years, with capabilities ranging from defeating humans in games such as Go and chess to targeted ads based on one's Internet search history. One of the latest AI and ML technologies is the deepfake, where a person in a video can be replaced by someone else (Rees, 2019). Although the field of AI is consistently producing sophisticated technologies, most of them tend to be highly esoteric and a preserve of nerds (Rees, 2019). Deepfake technology has immense pros, such as the ability to create videos with extreme viral potential.
One pro of this technology is its applicability to art. An example is classic movies bringing dead actors back to life in remakes or sequels (Pros and Cons: Deepfake technology and AI avatars, 2020). For sure, Star Wars fans would be excited to experience Peter Cushing again; a movie trailer with such a breakthrough would have immense viral potential if posted on Twitter. Companies could also recreate their famous ads from the past; the possibilities are endless.
Of course, there are cons associated with deepfake technology; its core concept is to create a fake so good that it can be considered authentic. Seemingly, its real strength is also its greatest weakness in that it can create deception (Chandler, 2020). A situation can emerge where rogue actors make a fake message from a company's CEO, hack into the company's official Twitter account, and post it. Viral news is known to cause stock market crashes with billions in losses. The consequences would be just as dire if it happened to a president's account. By the time the video is revealed to be fake, the damage would already be done.
In the past few centuries, the importance of policing has been increasing every year due to rapid population growth, which requires strong policing to maintain a safe social environment. In addition, fast technological progress has resulted in the development of various equipment and technologies that can be used in policing, transforming policing techniques and policies. In such a rapidly changing environment, countries should find a way to meet the requirements of a new world in order to keep society peaceful and the crime rate under control. The growing power of artificial intelligence is going to significantly influence future policing.
Main body
Artificial intelligence (AI) in policing brings both benefits and risks, shaping a new way of policing. Human profiling is one of the benefits of AI for policing: it enables the prediction of a person's behavior and decision-making, which can improve crime prevention. Moreover, future AI in neurotechnology will be able to identify a person's mood, psychological state, and intentions (College of Policing, 2020). It will only widen the possibilities for obtaining information from a criminal or a suspect. However, it also raises ethical concerns regarding the use of such technologies.
Conclusion
To conclude, the implementation of artificial intelligence along with surveillance technologies will help policing maintain control over a large population. Artificial intelligence allows policing to effectively prevent potential criminal events via the prediction of a person's behavior and other neurotechnological advancements. At the same time, the use of artificial intelligence to interfere with the personal lives of the population raises a list of ethical questions regarding the moral aspect of the technologies. Hence, artificial intelligence is going to significantly influence future policing, bringing a range of benefits along with ethical concerns to be discussed.
As practice shows, awareness and understanding of the role of technologies in society through the prism of historical events comes from a deeper and more conscious understanding of how innovations and creations develop over a specific period. It is no secret that a wide range of revolutionary solutions is gradually and continuously transformed in accordance with cultural contexts, taking on a new shell without changing the essential core that makes them helpful to a human. For example, one can imagine that parchment and a quill have been transformed into a tablet and a stylus, and an hourglass has become digital. Thus, the lens of history is a great way to consider knowledge and understanding of society and technology from a different angle in terms of comprehending the dynamics of society and the importance of technology for the modern world.
A Current Event
One should emphasize that the evolution of artificial intelligence is one of the most relevant phenomena. The idea behind this invention has existed for thousands of years, yet it was only around 1950 that scientists began to reveal its real potential (Chowdhury, 2021). Consequently, looking through a historical lens allows one to trace the brightest moments in the development of AI, determining cause-and-effect relationships and predicting the most likely outcomes. It was a scientific breakthrough when artificial intelligence, with the help of special programs, began to play a considerable role in checkers, chess, and other logic games. Now, however, this technology can even draw and compose music. In addition, today people can see how music services offer an individual selection of melodies and washing machines remember their previous settings. Accordingly, it is evident that the smart robots of science fiction books and films will soon become a reality as assistants and friends for humans.
Personal Experiences
The totality of historical, cultural, and technological aspects forms the worldview and way of thinking according to which a person better understands how to solve an issue, act, and use innovations. The totality of these elements has a unique and close connection with each person's individual experience. Sometimes people do not notice how history, technology, and culture become not just a part of life but vital determinants that specify an individual's background, behavior, type of activity, and interaction with people and the environment.
In the US, the disease spread by psyllids is commonly known and designated as citrus greening. Farmers who specialize in citrus production in Florida expect more than $7 billion in income in return for their investments (Ampatzidis et al., 2019). Scientists have been interested in identifying trees with the condition and eliminating them from fields. Several trees infected with Huanglongbing (HLB) have been found in California, the issue being how to manage or identify the condition with ease (Byrne et al., 2017). It is common for farmers to take immediate action after identifying a tree with the illness to reduce the probability of the infection spreading to the other healthy plants in the surrounding area (Ampatzidis et al., 2019). Artificial intelligence (AI) has offered innovative solutions to capture and record information that facilitates and improves research and development processes. For instance, SeeTree utilizes robots, sensors, and people to capture pictures and record information that AI can parse for data. The outcome is an itemized investigation of every individual tree on a plantation, with a crop yield examination and health profile.
Sophisticated technology has developed across the world, allowing industries to adopt cost-effective production methods that increase the profits realized at the end of each operating period. Industries globally have invested in research and development to come up with creative and innovative ways to simplify the work done (Tang et al., 2021). Agriculturists have likewise found methods to keep crop production at a high level, make the management of plant diseases easier, and breed resilient seeds that can survive different weather conditions in various parts of the world. Citrus farming is one of the most profitable activities globally, with farmers producing raw materials for manufacturing different products. Demand for citrus has increased over the years, which motivates farmers to increase their yield every harvesting period. However, the dangerous psyllids have been a threat that discourages farmers from investing in the fruit due to the impact the disease has on the plant both in the short term and in the long term.
Psyllids are highly dangerous vectors since they have a high probability of being infected with greening, which they then transmit and spread to citrus plants. According to Ampatzidis et al. (2019), greening has spread to more than 40 nations across the globe, affirming that other countries are at risk due to trade activities and open boundaries. For instance, in Florida, the production rate decreased by more than 70% between 2000 and 2017 (Ampatzidis et al., 2019). Farmers were mainly discouraged and affected by the tremendous losses resulting from greening after psyllids penetrated the region in 2005.
Background
The Asian citrus psyllid is a small insect that attacks fruit-producing citrus trees and affects global production through its role in spreading the greening disease. Psyllids have traditionally been detected using the tap sample method, in which branches are struck and the insects that fall are counted before the pest can affect the entire plant. Local farmers from Florida reported cases of psyllids on their farms after electronic methods were used to detect the presence of the condition (Byrne et al., 2017). Automated systems were easier, faster, and more effective in collecting and analyzing the data from machine vision. AI has been incorporated into the equipment used to capture images of trees and of the insects that attack the citrus groves (Partel et al., 2019). The justification for using AI in the management of citrus diseases is that it has the comparative advantage of differentiating psyllids from other pests that attack the citrus groves and affect their growth rate.
The AI-enabled machine comprises a tapping mechanism that strikes pre-selected branches under a grid of cameras. After pictures have been taken and developed, the AI algorithm has the capacity to analyze images and potential defects and to quantify the adult psyllids (Byrne et al., 2017). The justification for using AI-based systems is their high accuracy rate, with psyllids being detected and identified at a 90% precision level (Ampatzidis et al., 2019). Farmers use such digital systems to enhance crop production, which allows them to have high yields in the long term.
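The 90% precision figure can be made concrete with a short sketch of how such a score would be computed when algorithm detections are compared against a manual count; the validation counts below are hypothetical illustrations, not data from the cited study:

```python
def detection_metrics(true_positives, false_positives, false_negatives):
    """Precision: share of flagged psyllids that are real.
    Recall: share of real psyllids that were flagged."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# Hypothetical counts from comparing algorithm output to a tap-sample count:
# 90 correct detections, 10 false alarms, 15 insects missed.
precision, recall = detection_metrics(true_positives=90,
                                      false_positives=10,
                                      false_negatives=15)
# precision = 0.90, i.e. the 90% level reported in the text;
# recall would be lower here because of the missed insects.
```

Distinguishing the two metrics matters for grove management: high precision means sprays are not triggered by false alarms, while recall determines how many infestations slip through unscouted.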
Each citrus grove with a camera sends specific information about the psyllids attacking a given plant, and the data collected enhances the development of maps. This means researchers and farmers are more informed and can apply or spray the right amount of pesticide on the plants that have pests (Tang et al., 2021). Environmental harm from agrochemical use, and the associated expenses for the farmer, are reduced by a large margin since only the right quantity of pesticide is applied. Harvest management and yield mapping systems are embraced on the farms, ensuring that all plants that would generate income for the farmers have been incorporated. Sustainable methods supporting citrus farming would enhance the use of modernized techniques and technologies to identify the stress status of plants as well as crop health (Deng et al., 2020). Detecting diseases and pests early protects the yield produced at the end of each period. AI makes it easy to distinguish other deficiencies in citrus fruits, recognizing that other conditions may have similar symptoms and, if left untreated, may affect production levels.
Details and Description
AI can play an important role in detecting psyllids in citrus farming and in the management of citrus greening. AI is suitable for the management and detection of diseases since it is fast and easy, using technology to collect and analyze all relevant data. Ampatzidis et al. (2019) claimed that knowledge about the psyllid population is crucial for all citrus growers, as it allows them to make informed decisions on the best AI system to utilize. The conventional method of detecting psyllids on a branch, the tap sample method, is tedious and labor-intensive compared to a robotic system that incorporates machine vision and artificial intelligence. Automating the scouting process would simplify the work done by machines, and it would be possible to detect other insects using the system (Deng et al., 2020). Specialized AI-based software is installed on the machine along with a GPS device, which is used to easily locate each tree on a farm. Later, the software generates a map informing the farmer of all psyllid infestations on the farm. The resulting psyllid data allows the farmers to make the right decisions.
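The map-generation step described above can be sketched as aggregating GPS-tagged detections per tree and flagging trees for treatment; the tree IDs, per-branch counts, and spraying threshold below are all hypothetical:

```python
from collections import defaultdict

def infestation_map(detections, spray_threshold=5):
    """Aggregate psyllid detections by GPS-tagged tree ID and list the
    trees whose totals reach a (hypothetical) spraying threshold."""
    counts = defaultdict(int)
    for tree_id, psyllid_count in detections:
        counts[tree_id] += psyllid_count
    to_treat = sorted(t for t, c in counts.items() if c >= spray_threshold)
    return dict(counts), to_treat

# Hypothetical per-branch detections: (tree ID, psyllids found on the branch)
branch_detections = [("T1", 2), ("T1", 4), ("T2", 1), ("T3", 7)]
counts, to_treat = infestation_map(branch_detections)
# T1 totals 6 and T3 totals 7, so both cross the threshold;
# T2, with a single insect, is left unsprayed.
```

This is the mechanism behind the targeted-spraying benefit mentioned earlier: pesticide is applied only where the per-tree counts justify it, rather than across the whole grove.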
The psyllid threatens all the different types of citrus plants in the Rutaceae family, which makes AI-based detection and management especially valuable. The pest attacks newly budded citrus leaves with its toxic saliva, which results in the leaves turning black. The psyllid causes further damage because it spreads Huanglongbing (HLB) disease, which leads to the shoots turning yellow. Fruits become asymmetrical in shape, acquire bitter juice, and abort their seeds in the process (Nakabachi & Okamura, 2019). There is no cure for the disease, and within 5 to 8 years an infected citrus tree dies and is no longer beneficial to the farmer. Early detection or identification of psyllid pests is therefore crucial to the survival of the rest of the farm. Ampatzidis et al. (2019) claimed that the Asian citrus psyllid first arrived from Mexico in 2008 and was detected in Southern California, affirming that an immediate response should be administered whenever it is first identified. Illegal imports from Mexico may have brought the pests into different US states.
AI is also well suited to the detection and management of psyllids since it allows farmers to reduce the transmission of pests from one area to others. Pesticides can greatly reduce the number of psyllids on plants, but since it is impossible to kill all the adult psyllids that spread the bacteria, AI plays a great role in keeping psyllid numbers to a minimum on each orchard farm (Rehberg et al., 2020). Identification of adult psyllids through AI systems is viable through the constant visual surveys taken with digitalized technologies. Visual monitoring systems make it possible for farmers to eliminate the eggs and young nymphs, which means their probability of becoming adults and causing more damage is low (Nakabachi & Okamura, 2019). The pesticides used on citrus farms vary, since no single pesticide can kill psyllids at all of their life-cycle stages; it is common for a pesticide to be effective at one stage but not another.
Methodology
The study employed a secondary research method and design to increase the accuracy and validity of the results presented at the end. A qualitative method was applied, with results from other scholars being examined to identify what has already been addressed on the issue of psyllids in citrus farming. Research is a process of collecting information that widens one's understanding of a particular issue. This methodology emphasizes the secondary research approach, which comprises the collection and analysis of data already collected and analyzed in previous research on the subject (Creswell, 2015). This method is the best option for this exploration since the issue of interest is how AI is relevant to the management of psyllids in citrus plants.
Studies are constantly being conducted by researchers examining alternative methods of managing psyllids, since there is currently no cure for the condition. Even though it is time-consuming, this method is accurate and precise. The selection of this design focuses on the particular amount and type of data relevant to the objective of the study. Furthermore, this design provides significant information that can help improve the status of this study and rectify any issues regarding the subject. Secondary research provides efficient data from numerous studies carried out by different authors. It offers an easy platform for the author to find all the information needed, since it is readily available and the researcher has enough time to summarize and edit the findings. Although primary data collection is the best strategy for finding all the information required, in this context the secondary research method is the most applicable when the researcher uses qualitative methods to gather facts from different sources (Esser & Vliegenthart, 2017). The design used in this examination does not affect the analysis or data collection of its sources, unlike other exploration methods. The analysis of the data gathered with the qualitative approach provides an efficient platform for editing and resolving the issues analyzed.
The secondary research method finds data in journals, newspapers, books, and websites. The data used should be trustworthy, in that the sources will have a publication year and date, the author's name, the place of publication, and references (Esser & Vliegenthart, 2017). A high percentage of this review is from journals that have all met academic standards so that they can be used by students. Secondary research can still bring out the qualitative aspects of the study. The qualitative approach mainly reviews documents from secondary research. The use of a qualitative method in this study provides satisfying outcomes in identifying the relevance of AI in farming.
Working Analysis of AI in Citrus
AI technology has penetrated the farming industry, with producers embracing precision tools and techniques that enable the collection of high data volumes for effective and efficient decision-making processes. In contrast, conventional methods of collecting data related to crop production and yield, or managing vast commercial farms, were almost impossible since they required the physical involvement of the farmers (Barbedo & Castro, 2020; Partel et al., 2019). Digital methods are simpler and faster, and their high accuracy levels ensure that all areas of the fields have been captured. AI in citrus farming is crucial as it allows automated aerial surveys with digitalized drones to view the groves (Ampatzidis et al., 2019). An AI-driven machine is capable of running computer algorithms with little or no human intervention, with farmers only being required to collect the processed data.
Sophisticated models have been designed and trained to capture psyllids, with freshly laid eggs being detected immediately. This explains why it has been easy to understand the different life-cycle processes and patterns the psyllids undergo, and why it has been possible to develop pesticides best suited to every stage (Nakabachi & Okamura, 2019). With more knowledge about psyllids becoming available, there is a high probability that better management methods will be developed in the future. The convenience created by AI research cannot be ignored due to the simplified real-time information it provides to the farmers. Photographs taken by the attached cameras provide clear pictures that enable researchers and farmers to identify objects and relevant situations that affect the growth of citrus fruits, including other pests, weeds, or diseases (Ampatzidis et al., 2019; Lu et al., 2019). AI has also facilitated the use of alternative methods of nurturing and growing the citrus groves, especially in protected screen houses that have a low probability of being exposed to the Asian citrus psyllid (ACP). Factors that contribute to psyllid exposure on normal orchard farms are thus prevented.
Advantages and Disadvantages
AI has played a pivotal role in the management of psyllids in citrus fruits, as it has supported early detection and the proactive measures taken to curb the spread of the condition. Identification of the problem at the initial stages enables farmers to take immediate measures to ensure that the rest of their citrus groves are not infected (Lu et al., 2019). Measures such as cutting down the infected groves to limit the spread are productive. Farmers are also informed of alternative ways to protect their farms. New information released to the farmers allows them to save money and invest in the right, more resilient type of citrus seed. AI has replaced the tap sample method, which was time-consuming and did not provide the farmers with correct data on the infection rate on their farms (Nakabachi & Okamura, 2019). AI systems update themselves automatically, thereby enhancing the decision-making process.
Localized crop management has been made possible through the application of automated AI systems. Tables, maps, and graphs generated from the captured images summarize the developments and changes that have been taking place on the farms (Partel et al., 2019). Such data is relevant for farmers as it informs them whether infestation levels have increased or decreased and what measures they should undertake to curb the problem. Farmers thus understand the condition of their farms precisely and can take proactive measures to prevent psyllids from damaging the health and stability of the plants.
One of the limitations is that not all farmers can easily set up an AI system on their farms, as the installation process might be expensive. AI systems are only applicable on commercial farms that generate high returns and can easily sustain such an investment. Hence, farms in some rural settings in the US might continue supporting the breeding of psyllids indirectly, which means it might be impossible to eliminate them in the country (Deng et al., 2020; Rehberg et al., 2020). Government support and intervention may allow small farms to control or eliminate the problem of psyllids on their land.
Image processing using machine learning algorithms might be complicated and extensive for farmers with little or no technical knowledge, which suggests that AI management of psyllids may not be effective in all parts of the country. This means researchers and specialists should be present on farms to operate the AI machines and interpret the collected data for the farmers (Nouri et al., 2016; Tang et al., 2021). This creates considerable inconvenience and opposition from farmers, who may consider the adoption of automated AI systems irrelevant and costly to maintain and manage. There is also a high probability that captured images may require further processing to yield more information.
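To illustrate the kind of image screening involved, the following is a minimal sketch of a threshold-based check that flags a grove image for specialist review when the share of discolored pixels exceeds a cutoff. This is not the method of any system cited above, which rely on trained neural networks; the pixel encoding, threshold, and function names are illustrative assumptions.

```python
# Minimal illustrative sketch of threshold-based image screening for
# psyllid damage. Real systems (e.g., Partel et al., 2019) use trained
# neural networks; the threshold and pixel encoding here are assumptions.

def discolored_fraction(image, threshold=128):
    """Return the fraction of pixels whose grayscale value exceeds
    `threshold` (pale, yellowed foliage reads brighter than healthy green)."""
    pixels = [p for row in image for p in row]
    flagged = sum(1 for p in pixels if p > threshold)
    return flagged / len(pixels)

def classify_grove(image, cutoff=0.25):
    """Flag an image for specialist review when discoloration is widespread."""
    return "suspect" if discolored_fraction(image) > cutoff else "healthy"

# Usage with a toy 3x3 grayscale image (0 = dark green, 255 = pale yellow).
scan = [[40, 50, 200],
        [45, 210, 230],
        [60, 55, 240]]
print(classify_grove(scan))  # → suspect (4/9 of pixels are discolored)
```

A real pipeline would replace the single threshold with a classifier trained on labeled grove imagery, but the overall flow, from raw pixels to a per-image decision, is the same.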
Future Research
Weaknesses and limitations identified while conducting the research act as a foundation for future work, as scholars can review the issues faced by previous researchers and use them as the basis for their next studies. The research used in this study failed to identify the most appropriate methods used by farmers in different parts of the world, as it has been biased toward the US. Most available research addresses the history of psyllids in the states of Florida and California, affirming that the pest originated in Mexico. Recognizing the origin of the problem is important, but understanding the management lapses that facilitated the movement of the pests from Mexico to the US is equally crucial, as it will help prevent other crops from being infected by foreign diseases.
Little or no consideration has been given to the management strategies Mexican farmers utilize, which could also be implemented in other parts of the world using a more personalized approach based on the needs of each country. Further, limited attention has been paid to the role governments play in ensuring psyllids are eliminated or controlled in the citrus farming industry (Byrne et al., 2017). Governments have a responsibility to establish independent research institutes for their local farmers so that resilient, higher-yield seeds are developed that can withstand diseases and harsh weather conditions. Working closely with the government would give farmers more confidence in the management of psyllids.
Conclusion
Management of psyllids using AI methods has been effective, with timely and informed decisions helping to prevent the spread of the pests on farms and across the world. There is no cure for the disease psyllids transmit, but ongoing research and development has a high probability of yielding new information on managing the problem. Robotic systems have been designed to be self-sufficient: they can move independently across a farm while collecting and sending information with little or no human assistance. Cameras attached to the AI machines capture all relevant information, and researchers can easily identify all types of pests. The AI machines carry GPS, which makes it easy for farmers to locate them in the fields, while cameras with different photo resolutions take images from various angles. Government intervention and assistance would further contribute to the management of psyllids through the establishment of research and development centers.
Partel, V., Nunes, L., Stansly, P., & Ampatzidis, Y. (2019). Automated vision-based system for monitoring Asian citrus psyllid in orchards utilizing artificial intelligence. Computers and Electronics in Agriculture, 162, 328-336. Web.
COVID-19 is arguably the single biggest pandemic in recent human history. McCall (2020) argues the case for applying Artificial Intelligence (AI) to various domains surrounding the identification and prediction of the COVID-19 outbreak. The researcher further outlines several contemporary AI applications and sources of information relative to the previous comparable outbreak. This essay summarily reviews McCall's perspective on AI application in COVID-19 management, relative to other arguments for AI implementation in healthcare.
The current COVID-19 outbreak can be compared extensively to the Severe Acute Respiratory Syndrome (SARS) of 2003. However, the magnitude of infections and the mortality rate of COVID-19 far eclipse those of SARS, despite the epicenter of both outbreaks being China. McCall (2020) contrasts how the two diseases spread across continents, use similar mechanisms to infect the cell, and affect animals and humans. However, the researcher also notes the significant technological development in the 17 years between the two outbreaks in the form of Artificial Intelligence (AI).
AI is causing a significant paradigm shift in healthcare. This assertion is prevalent in the extant literature, including Davenport and Kalakota (2019), Jiang et al. (2017), Maddox et al. (2019), and Reddy et al. (2019), and is reiterated by McCall (2020). The researcher outlines that there may be value in applying AI to the current pandemic, especially in mapping its prevalence and predicting its spread to other locations. Contextually, this application of AI is seen in BlueDot, a Canadian company credited as the first organization to break the news of the pandemic in late December. But the overall question remains whether AI's capability is currently at a point where it can deliver compelling insight in a timely, wide-scale fashion.
Several primary factors are necessary for an effective public health intervention against a new viral outbreak. These include comprehension of the natural history of infection, the at-risk populations, and the causative organism, as well as the development of preventative and control measures from epidemiological modeling (McCall, 2020). This information is primarily collected by individuals at outbreak sites who are virtually connected to the WHO, and it represents a primary source of information for COVID-19. Such data can reasonably be used to prime AI to read the evidence and link it effectively to outbreaks. Further, through the review of newsfeeds, social media, and airline ticketing systems, health professionals can identify outbreaks and areas that need further exploration. However, this data is contingent on health systems with good contact-tracing and patient-isolation protocols.
AI can also make a significant contribution to the current pandemic by predicting how COVID-19 is affected by seasonality. Based on the historical behavior of coronaviruses, such an application could help stabilize financial markets by offering reassurance that the epidemic will gradually diminish. However, the efficacy of applying AI to COVID-19 is characterized as "garbage in, garbage out", indicating that the quality of the data is strongly correlated with the insight gleaned.
China is not only the epicenter of the COVID-19 outbreak but also a pioneer and supporter of AI application in helping to manage the epidemic. For instance, Infervision, an AI company based in Beijing, China, developed a proprietary algorithm to distinctly detect COVID-19 on CT scans of the human lung and distinguish it from other respiratory infections. This AI technology expedites COVID-19 diagnosis and monitoring, which in turn reduces the need for governments to implement business and country-wide lockdowns. Finally, the increased utilization of AI in reading scans allows it to learn and significantly improve its accuracy.
The death of Li Wenliang, a medical doctor and whistleblower on the COVID-19 epidemic, was indicative of the need to protect clinicians and healthcare professionals on the frontline. Hospital-associated human-to-human transmission in Wuhan University's hospital, for instance, accounted for 41% of all cases, and a thousand hospital staff were confirmed infected as well (McCall, 2020). The application of AI could ideally help protect clinicians, hospital staff, and healthcare professionals.
While a doctor can take up to 15 minutes to read a CT scan manually, Infervision, the Beijing-based diagnostic AI, can read it in 10 seconds. The AI detects lesions stemming from coronavirus pneumonia and provides measurements and comparative changes relative to other lesions. This gives doctors sufficient quantitative data to make a prompt decision. AI-based CT imaging could serve as a stopgap measure whenever urgent diagnosis and judgment are required, so that high-risk cases can be promptly identified and removed from general areas before they infect patients and hospital staff.
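The quantitative follow-up described above can be sketched in miniature: given lesion diameters from two readings, compute per-lesion change and flag the case if any lesion has grown beyond a cutoff. This is emphatically not Infervision's algorithm; the record format, identifiers, and growth threshold are hypothetical assumptions for illustration.

```python
# Minimal sketch of comparing lesion measurements between two CT readings.
# NOT Infervision's method; the record format and flag rule are assumptions.

def lesion_changes(baseline, follow_up):
    """Return per-lesion diameter change (mm), keyed by lesion id,
    for lesions present in both readings."""
    return {lid: round(follow_up[lid] - baseline[lid], 1)
            for lid in baseline if lid in follow_up}

def flag_progression(changes, growth_mm=2.0):
    """Flag the case for urgent review if any lesion grew beyond the cutoff."""
    return any(delta >= growth_mm for delta in changes.values())

# Usage: lesion diameters in mm from two hypothetical readings.
day1 = {"L1": 8.0, "L2": 12.5}
day4 = {"L1": 11.0, "L2": 12.0}
deltas = lesion_changes(day1, day4)  # {'L1': 3.0, 'L2': -0.5}
print(flag_progression(deltas))      # → True: L1 grew by 3.0 mm
```

The point of the sketch is the speed argument in the text: once measurements exist, the comparison itself is trivial arithmetic, so the bottleneck the AI removes is the reading of the scan, not the follow-up calculation.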
Scholarly discourse does concede that Artificial Intelligence is still a highly novel application within healthcare and may take some time to integrate extensively. It is still relatively early to determine accurately the capability and extent to which AI application will impact COVID-19. However, as mortalities and infections rise, so does the supply of research data. Overall, AI is significant to this outbreak now, and will perhaps be even more so in the future.
References
Davenport, T., & Kalakota, R. (2019). The potential for artificial intelligence in healthcare. Future Healthcare Journal, 6(2), 94–98.
Jiang, F., Jiang, Y., Zhi, H., Dong, Y., Li, H., Ma, S., Wang, Y., Dong, Q., Shen, H., & Wang, Y. (2017). Artificial intelligence in healthcare: Past, present and future. Stroke and Vascular Neurology, 2(4), 230–243.
Maddox, T. M., Rumsfeld, J. S., & Payne, P. R. O. (2019). Questions for artificial intelligence in health care. JAMA, 321(1), 31–32.
McCall, B. (2020). COVID-19 and artificial intelligence: Protecting healthcare workers and curbing the spread. The Lancet Digital Health, 2(4), e166-e167. Web.
Reddy, S., Fox, J., & Purohit, M. P. (2019). Artificial intelligence-enabled healthcare delivery. Journal of the Royal Society of Medicine, 112(1), 22–28.