Artificial Intelligence: The Trend in Evolution

The History Lens

As practice shows, viewing technology through the prism of historical events deepens our awareness of how innovations develop over a given period. A wide range of revolutionary solutions is gradually and continuously transformed in accordance with cultural contexts, taking on a new shell without changing the essential purpose: to be helpful to humans. For example, parchment and a quill have been transformed into a tablet and a stylus, and the hourglass has become digital. The lens of history is thus a powerful way to reconsider society and technology from a different angle, clarifying both the dynamics of society and the importance of technology for the modern world.

A Current Event

One should emphasize that the evolution of artificial intelligence is one of the most relevant phenomena to examine this way. Although the idea of artificial beings has existed for thousands of years, it was only around 1950 that scientists began to reveal its real potential (Chowdhury, 2021). Looking through a historical lens therefore allows one to trace the brightest moments in the development of AI, determine cause-and-effect relationships, and predict the most likely outcomes. It was a scientific breakthrough when artificial intelligence, through special programs, began to play a considerable role in checkers, chess, and other logic games. Now the technology can even draw and compose music. Today, music services offer individually tailored selections of melodies, and washing machines remember their previous settings. Accordingly, it seems evident that the smart robots of science fiction books and films will soon become a reality as assistants and friends for humans.

Personal Experiences

The totality of historical, cultural, and technological aspects forms the worldview and way of thinking according to which a person understands how to solve an issue, act, and use innovations. This totality has a unique and close connection with each person's individual experience. Sometimes people do not notice how history, technology, and culture become not just a part of life but vital determinants that shape an individual's background, behavior, type of activity, and interaction with people and the environment.

Reference

Chowdhury, M. (2021). Analytics Insight. Web.


Artificial Intelligence Algorithms and Methods to Use

Introduction

Many algorithms are used in AI, and all have strengths and weaknesses. One of the most common is the support vector machine (SVM), which classifies data into one of several categories.

Discussion

SVM's strength is that it can handle high-dimensional data well, but training is computationally demanding and requires large amounts of labeled data. Another standard approach is the neural network, which, like SVM, learns from data (Chang et al., 2018). Neural networks can be trained to recognize patterns in data, but they are sensitive to noise and typically require more training time than SVM. A third type is the random forest classifier, which combines multiple decision trees to make predictions about new examples that have not been seen, or not seen often enough for algorithms like SVM to work well on them.
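To make the comparison concrete, here is a minimal sketch, assuming scikit-learn and a synthetic dataset, of how the three classifier families mentioned above might be trained and compared; the data and parameters are illustrative, not a benchmark.

```python
# A minimal sketch comparing the three classifier families discussed above.
# Assumes scikit-learn; the synthetic dataset and parameters are illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier

# Synthetic two-class data standing in for a real labeled dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "SVM": SVC(kernel="rbf"),
    "Neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500),
    "Random forest": RandomForestClassifier(n_estimators=100),
}

for name, model in models.items():
    model.fit(X_train, y_train)  # learn from the training data
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.3f}")
```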

A strength of choosing one algorithm over another is that it may fit the needs of a particular problem more naturally. For example, if one is trying to solve an optimization problem and has an algorithm that performs well in that environment, it is best to stick with it. However, if one is working on a problem where many factors affect the results, another method is recommended.

Another advantage of standardizing on one algorithm is that it tends to produce similar results for similar problems. This means that if one's computer has access to one sort of algorithm and has already run it many times without finding an optimal solution, it should be able to find one quickly using whatever variation is given. However, the most significant weakness of this approach is that it can take considerably longer than other methods, especially when solving problems that require human input or creativity.

Conclusion

There are many methods to choose from when it comes to AI algorithms; examples include reinforcement learning and support vector machine methods. I would use the reinforcement learning approach, since it can learn to solve problems and make decisions without being told how. Reinforcement learning can be applied in many different settings, such as machine learning pipelines or robotics. The main reason I would use this method is its flexibility, which means it can be adapted to many situations and purposes.
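As an illustration of how reinforcement learning discovers behavior without being told how, here is a minimal tabular Q-learning sketch on a toy one-dimensional corridor; the environment, reward values, and hyperparameters are invented for demonstration.

```python
# Minimal tabular Q-learning on a toy corridor: states 0..4, goal at state 4.
# The environment and hyperparameters are illustrative, not from the text.
import random

N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                       # move left or right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration

for episode in range(200):
    state = 0
    while state != GOAL:
        # Epsilon-greedy action selection: explore occasionally.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda i: Q[state][i])
        nxt = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if nxt == GOAL else 0.0
        # Q-update: the agent learns from the reward signal alone.
        Q[state][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][a])
        state = nxt

print("Learned action per state:",
      ["left" if q[0] > q[1] else "right" for q in Q])
```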

Reference

Chang, C. W., Lee, H. W., & Liu, C. H. (2018). Inventions, 3(3), 41. Web.


Propositional and First-Order Logic in Artificial Intelligence

Artificial intelligence's methods for discovering solutions to problems can be implemented with or without an understanding of the domain, depending on the circumstance. AI decision-making has been studied from various functional and industrial viewpoints in academic and practical literature. As AI-based services continue to advance, more personal and important decisions are being left to the technology. However, the two fundamental formalisms of propositional and first-order logic remain the foundation of AI-based technologies.

The first core element is propositional logic, which uses Boolean reasoning to transform real-world data into a machine-readable structure. Such reasoning applies to knowledge-based expert systems, that is, AI-based systems that make decisions or judgments similar to those of experts. Completeness, consistency, and tractability may be required to formulate domain knowledge effectively and efficiently as a theory (Neapolitan & Jiang, 2018). Complex propositions link two or more sentences together; in propositional logic, the joining of sentences is indicated with dedicated symbols. Syntax is the structure used to present this information appropriately.

Propositional logic in artificial intelligence treats sentences as variables; for complicated sentences, the first step is to decompose the sentence into its component variables. When a system is designed procedurally, the stages that carry out the intended task are hard-coded in the script (Kakas et al., 2017). The declarative method, by contrast, separates knowledge from reasoning. This strategy helps develop wumpus-world agents that reason with knowledge bases, and it supports forward and backward chaining over rules. Hence, propositional logic is essential for artificial intelligence to fully realize the promise of machine reasoning and decision-making.
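As a concrete illustration of the declarative approach described above, the sketch below implements forward chaining over a small propositional knowledge base; the symbols and rules are invented wumpus-world-style examples, not from the cited sources.

```python
# Forward chaining over propositional Horn clauses: each rule is a pair
# (premises, conclusion). The facts and rules here are illustrative.
rules = [
    ({"breeze"}, "pit_nearby"),
    ({"stench"}, "wumpus_nearby"),
    ({"pit_nearby", "wumpus_nearby"}, "danger"),
]
facts = {"breeze", "stench"}              # percepts asserted as true

changed = True
while changed:                            # repeat until no new fact is derived
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # {'breeze', 'stench', 'pit_nearby', 'wumpus_nearby', 'danger'}
```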

Another method of information processing in artificial intelligence is first-order logic, an extension of propositional logic that can concisely convey natural language statements. A machine that uses a comprehensive knowledge base expressed in first-order logic might be able to reason about a wide range of global issues. It could plan appropriately, reason from first principles, argue about its objectives, and explain how its activities in the world interact with one another.

Such machines can already be seen performing these tasks in research centres and laboratories. To effectively account for the ambiguity or imprecision of the natural world, Garrido (2010) claims that the ideas of sets, relations, and related notions must be transformed by incorporating logical concepts and procedures. Additionally, the potential for abstraction in propositional logic is constrained because it does not permit reasoning over variables and functions with generic, dynamic content. This implies that early logical computing systems were likewise incapable of resolving problems whose solutions lie in spaces richer than the propositional one. First-order logic, a formal system that incorporates variables and enables abstraction, helped to solve this issue.
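To illustrate the abstraction that first-order logic adds, the fragment below applies a quantified rule ("all humans are mortal") to concrete individuals via substitution; the representation and the classic textbook facts are an illustrative sketch, not a full theorem prover.

```python
# First-order flavor: a rule with a variable X is instantiated against facts.
# The representation and examples are illustrative.
facts = {("Human", "Socrates"), ("Human", "Hypatia")}

# Rule: for all X, Human(X) -> Mortal(X). The variable lets one rule cover
# every individual, which plain propositional logic cannot express.
def apply_rule(facts, premise_pred, conclusion_pred):
    derived = {(conclusion_pred, x) for (p, x) in facts if p == premise_pred}
    return facts | derived

print(apply_rule(facts, "Human", "Mortal"))
# Adds ('Mortal', 'Socrates') and ('Mortal', 'Hypatia')
```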

As a result, decision-making yields behaviors that depend on the information and comprehension the agent has been given. A portion of the risk associated with a decision is transmitted to the inputs of the decision-making agent if it directly affects the environment. This connection has received extensive research attention in artificial intelligence. Existing studies generally focus on constructing an AI's reasoning framework using classical logic, or at least parts of it. Propositional and first-order logic thus provide the fundamental knowledge for the continued development of AI systems.

References

Garrido, A. (2010). Logical foundations of artificial intelligence. Brain: Broad Research in Artificial Intelligence and Neuroscience, 1(2), 149-152.

Kakas, A. C., Mancarella, P., & Toni, F. (2017). Studia Logica, 106(2), 237-279. Web.

Neapolitan, R. E., & Jiang, X. (2018). Artificial intelligence with an introduction to machine learning. CRC Press.


Artificial Intelligence in Soil Health Monitoring

Introduction

Artificial intelligence's versatility is one aspect that makes the technology practically universal in industrial applications and process improvements. However, the most admirable aspect of the technology is how it transforms industries that may not seem to be ideal candidates. This paper examines AI integration into farming to monitor soil health and production-friendly indicators. The primary argument of this analysis is that artificial intelligence can promote the integration of multidimensional soil data into an agro-industrial system that guides decision-making on crop rotation.

Discussion

Machine learning algorithms for automated farm monitoring and soil data processing can catapult intensive food-based agricultural production toward ending global hunger. Deorankar and Rohankar (2020) detailed that an AI system in soil test-based fertility management can effectively increase farm productivity, especially for soils characterized by high spatial variability. The fertility management technique entails remote sensing capabilities for detecting or estimating soil quality indicators (Diaz-Gonzalez et al., 2022). The automated soil-testing approach complements existing crop yield prediction systems that use biological, physical, and chemical soil data (Diaz-Gonzalez et al., 2022). Therefore, the new value provided by AI technology is that it allows automation and algorithm-based predictions for more solid decision-making.
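As a sketch of what such algorithm-based prediction might look like, the following trains a regressor on hypothetical soil indicators (pH, nitrogen, moisture) to predict a yield score; the feature names, data, and toy yield rule are invented, not taken from the cited studies.

```python
# Hypothetical sketch: predicting a yield score from soil indicators.
# Feature names and data are invented for illustration; assumes scikit-learn.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Columns: pH, nitrogen (mg/kg), moisture (%); rows are hypothetical samples.
X = rng.uniform([5.0, 10, 10], [8.0, 60, 40], size=(200, 3))
# Toy ground-truth rule standing in for real measured yields.
y = 2.0 * X[:, 1] - 10 * abs(X[:, 0] - 6.5) + 0.5 * X[:, 2]

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(model.predict([[6.5, 45, 30]]))  # predicted yield score for one field
```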

Any innovative technology that serves human needs should be capable of adding value by saving money or improving work efficiency. AI in soil health monitoring is an unconventional application of the technology, albeit one capable of bringing numerous benefits to farmers and consumers. One value addition of test-based fertility management is that, as production increases, food prices come down. According to Deorankar and Rohankar (2020), agriculture-dependent nations will benefit from AI-led soil diversity, which allows farmers to maintain year-round production efficiencies. The implication is that such nations can gain comparative trade advantages by providing quality food varieties in global markets.

Conclusion

In conclusion, soil health monitoring became an ideal candidate for AI technology once recent studies showed future value-based opportunities in farming. The possible benefits of AI technology in test-based fertility management are production efficiency improvements and lowered food costs. The technology is likely to get a friendly reaction from industry stakeholders, given that the production technique can improve crop yields and food production for animals. Therefore, farmers should embrace automation and algorithm-based predictions for more solid production decision-making.

References

Diaz-Gonzalez, F. A., Vuelvas, J., Correa, C. A., Vallejo, V. E., & Patino, D. (2022). Ecological Indicators, 135, 108517. Web.

Deorankar, A. V., & Rohankar, A. (2020). JETIR, 7(1), 1-4. Web.


Artificial Intelligence as an Agent of Change

Artificial intelligence (AI) can change civilization in the next several years and increase machine autonomy in medicine, art, energy, space, and education. The most significant AI impact can be seen in technology spheres such as the solar and wind industries. In addition, the influence of AI technologies will grow in fields associated with human intellect and consciousness, such as law and justice.

Technology is Changing the Power Sector

Soon, AI-based robotics will become more common for remote inspection and monitoring of wind turbines and solar panels. Robots can detect defects in materials and independently deliver components for building solar and wind generators (Froese, 2017). In addition, robots based on artificial intelligence and machine learning will be able to collect and analyze data and resolve problems promptly. Ultimately, AI will help energy companies bring low-cost renewable energy to market safely, and customers will use it sustainably.

Today, AI has already reformed many engineering tasks, such as economical delivery of goods, load planning, generation optimization, and programmed power flows. This trend is expected to continue: by 2024, the global market for AI in the energy industry is projected to reach $7.78 billion (Ahmad et al., 2022). Large companies, as well as numerous start-ups, are investing in research into the possibilities of AI.

The Role of Right Data

The energy sector around the world faces challenges such as changing supply and demand conditions, as well as the need for analytical data for optimal and efficient management. Installing more sensors, increasing the availability of easier-to-use machine learning tools, and continuously expanding monitoring, processing, and data analytics capabilities will create revolutionary new business models in the energy industry (Froese, 2017). In developed countries, the electric power industry has used AI to connect smart meters, smart grids, and Internet of Things devices (Makala & Bakovic, 2020). These AI technologies will lead to greater efficiency, better energy management, more transparency, and wider use of renewable energy.

Modern, highly efficient, accurate, and automated AI-based technologies, such as energy management systems, smart substations, and monitoring, tracking, and communication systems, help collect data on power system equipment and control consumption. This information is necessary to create reliable and efficient power supplies, a primary global requirement for environmental protection.
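A minimal illustration of the kind of consumption monitoring described above: flagging smart-meter readings that deviate strongly from the average. The readings and the z-score threshold are invented for demonstration; production systems use far richer models.

```python
# Illustrative sketch: flag smart-meter readings far from the mean.
# Readings (kWh per hour) and the z-score threshold are invented examples.
import statistics

readings = [4.1, 3.9, 4.3, 4.0, 9.8, 4.2, 4.1, 0.2, 4.0]  # hypothetical

mean = statistics.mean(readings)
stdev = statistics.stdev(readings)
for hour, value in enumerate(readings):
    z = (value - mean) / stdev
    if abs(z) > 1.5:                     # threshold chosen for illustration
        print(f"hour {hour}: {value} kWh looks anomalous (z = {z:.2f})")
```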

Artificial Intelligence and Law

Law and order can also be an area for implementing artificial intelligence systems. Just as the energy industry requires accurate decisions based on the analysis of big data, so the legal system involves studying large amounts of information to make decisions (Zeleznikow, 2017). The scope of AI-based systems can include civil cases and cases of minor offenses. To support the judicial system, AI can examine data on similar cases and suggest a decision based on precedents. AI can also apply formulas based on the civil code to propose the offender's punishment (Kowert, 2017). Introducing machine learning based on legal databases will help create innovative approaches to support decision-making.
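A toy sketch of precedent retrieval as described above: representing case summaries as TF-IDF vectors and ranking past cases by cosine similarity to a new one. It assumes scikit-learn, and all case texts are invented; real legal systems would need domain-specific language models.

```python
# Toy precedent retrieval: rank past case summaries by similarity to a new
# case. Assumes scikit-learn; all case texts are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_cases = [
    "tenant failed to pay rent for three months, landlord seeks eviction",
    "driver exceeded speed limit in a residential zone, minor offense",
    "breach of delivery contract between two suppliers",
]
new_case = "motorist caught driving over the limit near a school, minor offense"

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(past_cases + [new_case])
scores = cosine_similarity(matrix[-1], matrix[:-1])[0]

for text, score in sorted(zip(past_cases, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {text}")
```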

Conclusion

Artificial intelligence can be helpful in areas where big data processing and accurate decisions are required. AI has already changed the energy industry by processing and analyzing information from smart meters and smart grids, enabling management decisions to be made quickly and efficiently. Introducing more robots and AI systems will provide the impetus for a new technological revolution. AI will change not only industry but also the social sciences. Thus, AI can be introduced into the areas of law and the courts, which likewise require extensive data analysis to make complex decisions.

References

Ahmad, T., Zhu, H., Zhang, D., Tariq, R., Bassam, A., Ullah, F., & Alshamrani, S. S. (2022). Energy Reports, 8, 334-361. Web.

Froese, M. (2017). Windpower Engineering & Development. Web.

Kowert, W. (2017). The foreseeability of human-artificial intelligence interactions. Texas Law Review, 96(1), 181-204.

Makala, B., & Bakovic, T. (2020). Artificial intelligence in the power sector. International Finance Corporation. Web.

Zeleznikow, J. (2017). International Journal for Court Administration, 8(2), 30-45. Web.


The Aspects of the Artificial Intelligence

Introduction

The goal of artificial intelligence (AI), a subfield of computer science and engineering, is to build intelligent machines that can think and learn similarly to people. AI is made to do things like comprehend spoken language, identify sounds and images, make judgments, and solve problems (Jackson, 2019). These intelligent machines can carry out activities that would otherwise require human intelligence since they are constructed using algorithms, data, and models. Self-driving cars, virtual assistants, and intelligent robotics are a few examples of the technology in use. AI research aims to develop tools that can carry out operations like speech recognition, decision-making, and language translation that ordinarily require human intelligence.

Discussion

Artificial intelligence has the potential to significantly improve a wide range of fields and facets of daily life. Increased productivity, better decision-making, personalization, advances in healthcare, security, accuracy, and speed, advancements in academic experiments, and more precise weather forecasting and natural disaster prediction are just a few of the significant advantages of AI (Davenport, 2018). AI systems can also help with tasks like audio and picture identification and natural language processing.

On the other hand, it is crucial to consider the potential risks and drawbacks of AI, including the likelihood of unexpected effects, employment displacement, and privacy concerns. Because of this, it is critical to have ethical standards and rules in place so that these drawbacks can be minimized while society still benefits from this potent technology. Although AI has a wide range of possible benefits, it must be employed ethically and with regard to its potential effects on society (Cheatham et al., 2019). Artificial intelligence is a formidable technology with the potential to greatly assist society while also posing a number of issues and difficulties.

Since AI systems may automate many operations that were previously performed by people, job displacement is one of the critical issues. Furthermore, biases present in the data that AI systems are trained on may be perpetuated and amplified, which may result in biased outputs (Cheatham et al., 2019). Another issue is the potential for AI systems to be utilized in ways that are detrimental to society, such as in the design of autonomous weapons or surveillance systems that violate people's right to privacy.

In terms of safety, AI systems may malfunction and produce unexpected results. The economic and societal effects of AI must also be taken into account, especially with regard to concerns like wealth inequality and access to opportunities. Additionally, there is a concern, known as the Singularity, that advanced AI could surpass humans in intelligence and power; this idea is still speculative and not fully grasped (Cheatham et al., 2019). To reduce any potential problems that could result from such circumstances, it is crucial that ethical standards and laws for AI be put in place.

Conclusion

In conclusion, artificial intelligence is a powerful technology that has the potential to bring many benefits to society. The key is to strike a balance between the benefits and risks and mitigate the downsides by using AI responsibly and putting safeguards in place to ensure that the technology is used in ways that benefit society and do not harm individuals or groups. This includes ethical guidelines, regulations, and transparent, inclusive, and responsible development and deployment of AI. It is essential to have a continuous monitoring and feedback mechanism to address any concerns that arise as AI becomes more advanced and integrated into different areas of people's lives.

References

Cheatham, B., Javanmardian, K., & Samandari, H. (2019). Confronting the risks of artificial intelligence. McKinsey Quarterly, 2, 38. Web.

Davenport, T. H. (2018). The AI advantage: How to put the artificial intelligence revolution to work. MIT Press.

Jackson, P. C. (2019). Introduction to artificial intelligence. Courier Dover Publications.


Benefits and Repercussions of Artificial Intelligence

Introduction

Artificial intelligence (AI) is a rapidly developing technology field that is spreading across various disciplines, including education, robotics, gaming, marketing, stocks, law, science, and medicine (Tahan, 2019). AI became popular because electricity costs dropped and computer power increased substantially, enabling its widespread use (Huang & Rust, 2021). Furthermore, machine learning models and algorithms have become significantly more advanced, enabling AI applications in more complex areas of human life (Huang & Rust, 2021). Different types of AI are recognized, including mechanical, thinking, and feeling programs and tools (Huang & Rust, 2021). Although AI has brought benefits to people, its risks should also be discussed to ensure that ethical and technical issues are considered and resolved. The main advantages of AI implementation are higher precision of performed work and more free time for humans, while the possible repercussions are an increase in the unemployment rate and malicious use of private data.

Discussion

Since any AI program is software that can be trained to perform better over time, its accuracy can reach higher levels than human results in some tasks. Furthermore, since computers can perform calculations at a much faster rate, the speed of the work may increase tremendously. Some AI tools are already being tested for robot-assisted surgeries and virtual reality practice for doctors. Virtual reality programs sometimes help people with psychiatric issues like post-traumatic stress disorder (Briganti & Le Moine, 2020). Moreover, some AI tools have already been approved by the Food and Drug Administration (FDA) for use in various medical fields. For instance, the Apple Watch 4 can detect atrial fibrillation; thus, the FDA recommended it for remote monitoring of at-risk patients (Briganti & Le Moine, 2020). Furthermore, various AI software packages are available nowadays to help pathologists review biopsy samples faster and detect abnormal patterns (Briganti & Le Moine, 2020). Other AI tools that can detect language, imitate human interaction, analyze data, and build predictive models have simplified people's work and facilitated performance in for-profit companies, think-tank agencies, and scientific institutions.

Despite its apparent benefits, the possible risks of using AI should be considered to prevent harm to individuals. One potential repercussion is the ethical concern about the lack of doctor-patient interaction in cases where AI fully or partially replaces clinicians (Tahan, 2019). Another possible issue is the danger of sensitive data being stolen by malware (Briganti & Le Moine, 2020). This information, which must be stored in specific databases for constant AI improvement, can be used to harm people. Moreover, many physicians doubt the accuracy of novel AI programs, since these tools still lack sufficient training; therefore, they cannot replace human physicians in establishing a diagnosis and prescribing treatment (Briganti & Le Moine, 2020). Another repercussion, and one of the most feared consequences for humanity, is that robots and AI may cause a significant rise in unemployment (Tahan, 2019). Indeed, if software programs are able to perform specific tasks better and faster than people, organizations may start replacing human workers with AI.

Conclusion

In conclusion, artificial intelligence has gained popularity in various areas of people's lives. Scientific, medical, and business organizations have started to benefit from AI since it significantly improves precision and increases the speed of the tasks they perform, creating more time for other activities. However, possible repercussions of AI implementation still exist; they should be addressed and fixed to avoid fatal mistakes.

References

Briganti, G., & Le Moine, O. (2020). Frontiers in Medicine, 7, 16. Web.

Huang, M. H., & Rust, R. T. (2021). Journal of the Academy of Marketing Science, 49(1), 30-50. Web.

Tahan, M. (2019). Artificial intelligence applications and psychology: An overview. Neuropsychopharmacologia Hungarica, 21(3), 119-126.


Artificial Intelligence for Recruitment and Selection

Technology and social media significantly impact practically every area of our lives today, including how candidates and employers approach the hiring process. As a result, hiring has changed considerably, and it is crucial to comprehend how social media and technology affect it. Social networking and technology allow employers and candidates to communicate with each other in ways that were not previously feasible.

From the company's standpoint, technology has altered how job listings are distributed and how potential candidates are found. Employers today have access to a larger pool of candidates than ever before due to the development of internet job boards, recruiting companies, and social media platforms. Automated recruitment tools and applicant tracking systems are used in the initial phases of the hiring process to help employers swiftly sift through applications and locate the best candidates (Gupta & Mishra, 2023). Additionally, social networking has given companies a chance to interact more closely with prospective employees, learning about interests, abilities, and beliefs that may not be readily obvious from a CV or cover letter.
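As a toy illustration of how such automated screening tools might rank applications, the sketch below scores resumes by overlap with a job's required skills; the skills, resume texts, and scoring rule are invented, and real applicant tracking systems are considerably more elaborate.

```python
# Toy applicant screening: score each resume by how many required skills it
# mentions. Skills, resumes, and scoring are invented; real ATS software is
# far more sophisticated.
required_skills = {"python", "sql", "machine learning"}

resumes = {
    "candidate_a": "5 years of Python and SQL; built machine learning models.",
    "candidate_b": "Experienced project manager, strong communication skills.",
}

def score(text: str) -> int:
    text = text.lower()
    return sum(skill in text for skill in required_skills)

ranked = sorted(resumes, key=lambda c: score(resumes[c]), reverse=True)
for c in ranked:
    print(c, score(resumes[c]))   # candidate_a 3, candidate_b 0
```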

On the candidate side, social networking and technology have produced new ways for job seekers to locate and apply for openings. Applicants can use online job boards and career websites to search more efficiently and successfully for positions that match their skills and interests (Villeda & McCamey, 2019). Candidates can interact with potential employers and professionals in their sector using social networking sites like LinkedIn, which have developed into crucial venues for creating professional networks. Social media may also be used to investigate businesses and learn more about company culture, values, and open positions. Applicants can likewise use social media to highlight their abilities and expertise, giving prospective employers a more thorough understanding of their credentials.

In conclusion, technology and social networking have had a significant impact on both companies and candidates during the hiring process. These solutions expand the possibilities for connecting with potential employees and speed up the hiring procedure. To navigate this challenging climate successfully, it is crucial for both organizations and applicants to stay up to date on the latest trends and best practices.

References

Gupta, A., & Mishra, M. (2023). The adoption and effect of artificial intelligence on human resources management. Web.

Villeda, M., & McCamey, R. (2019). International Business Research, 12(3), 66. Web.


Artificial Intelligence and Machine Learning in Clinical Trials

Introduction

This study explains how Artificial Intelligence (AI) and Machine Learning (ML) could improve clinical trials in the pharmaceutical industry. The paper details how the two technologies could help advance the efficacy of clinical trials, using data and statistics generated from companies that have adopted them. At the same time, to draw contrasts in the application of AI and ML in the health sector, the limitations of the technologies are also elucidated to highlight areas that could be explored for future improvements and integration into clinical practice. To examine these areas of research in detail, the analysis emphasizes the role of the technology in addressing pressing challenges associated with clinical trials, such as the recruitment and selection of participants. This area of investigation encompasses most of the analysis in the present text, but the role of AI in optimizing dosing regimens and improving the design of effective interventions is also explored as a supplementary analysis. Before embarking on these areas of analysis, it is important to understand some of the most significant challenges clinicians experience when completing their trials.

Cost and Time Associated with Clinical Trials

Recruiting patients for clinical trials is marred by challenges relating to incomplete tests and the rising costs of sustaining volunteers throughout an investigation. This is why pharmaceutical companies invest heavily in research and development before they present new drugs to the market (11). The costs associated with developing such drugs can stretch into millions or even billions of dollars, depending on the design and the site-selection protocol required to adhere to clinical research guidelines (10). These findings mean that pharmaceutical companies have to invest substantial resources in minimizing the time taken to conduct clinical research and present safe drugs to the market. Part of the process involves making sure that a participant who is willing to engage in a clinical research trial is committed to staying in a program for its full length. However, clinicians do not always achieve this objective because of the high failure rates associated with past assessments and the costs of keeping participants engaged throughout the different phases of clinical research (11). Figure 1 below shows the average cost of maintaining a volunteer across the phases of a typical clinical research trial.

Figure 1. Cost of clinical trials (adapted from Roth 11).

As Figure 1 shows, the cost of maintaining a research participant rises across the four phases of clinical trial development. On average, pharmaceutical companies spend up to $15,700 to maintain one participant in the first phase of drug development. In the second phase, this cost rises to $19,300, and it later increases to $26,000 in each of the third and final phases. For most drug developers, these costs are often prohibitive and inhibit their ability to develop reliable drugs and bring them to market affordably (11). This is why the cost of some drugs is often high and out of reach for the patients who need them most (10). High rates of incomplete clinical trials, reported in some cases, further compound the problem. Figure 2 below shows that about 80% of clinical trials fail to finish on time, while another 20% are delayed for more than six months for reasons attributed to the recruitment and retention of patients (11).

Figure 2. Challenges associated with the completion of clinical trials (adapted from Roth 11).

The findings highlighted above show that many pharmaceutical companies struggle to maintain a healthy balance between the overall cost of conducting clinical research and the time it takes to complete it. This problem has affected clinicians across different areas of research (10). Consequently, there is a need to find better ways of managing these variables. AI and ML provide opportunities for doing so, as highlighted below.

Bolstering Patient Recruitment Efforts

As mentioned above, recruitment is one of the most significant barriers to undertaking successful clinical trials. In one study authored by Woo (15), this challenge manifested in identifying patients with early-stage breast cancer: out of roughly 40,000 women in the US with this type of cancer, researchers only managed to recruit 636 patients in five years (15). It was proposed that AI and ML could increase the number of participants and reduce the time needed to identify and access them. In particular, AI was highlighted as having the power to help researchers achieve this objective through several associated technologies (1). For example, Natural Language Processing (NLP), a technique that analyzes written and spoken language, was cited as being able to find patients diagnosed with specific conditions within a short time. It could, for instance, search doctors' notes to find cases involving patients with a specific type of disorder. By doing so, clinical trials could be better focused on investigating the efficacy of drugs intended for a specific subpopulation.
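A highly simplified sketch of the kind of note searching described above: scanning free-text clinical notes for diagnosis phrases to flag candidate participants. The notes, phrases, and matching rule are invented, and production systems use trained language models rather than keyword matching.

```python
# Simplified illustration of NLP-based candidate screening: scan free-text
# notes for diagnosis phrases. Notes and phrases are invented examples;
# real systems use trained language models, not keyword matching.
import re

notes = {
    "patient_001": "Biopsy confirms early-stage breast cancer; discussing options.",
    "patient_002": "Follow-up for seasonal allergies. No other complaints.",
    "patient_003": "Hx of early stage breast cancer, s/p lumpectomy 2019.",
}

# The pattern tolerates hyphen/space variation in "early-stage".
pattern = re.compile(r"early[-\s]stage breast cancer", re.IGNORECASE)

candidates = [pid for pid, text in notes.items() if pattern.search(text)]
print(candidates)   # ['patient_001', 'patient_003']
```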

In line with the above recommendations, the California-based Cedars-Sinai Smidt Heart Institute used AI analytics to identify 16 participants suitable for a clinical trial in one hour (15). Traditional recruitment methods could have taken months to achieve the same outcome. This example shows that AI is effective in recruiting relevant participants for a clinical trial within a short time, and it does so without compromising the integrity of the research or its participants (5). The Mayo Clinic in Rochester, Minnesota, has reported similarly impressive results, with an 80% increase in the recruitment of participants for breast cancer clinical trials (15). By most standards of measurement, an 80% increase is significant enough to warrant attention to the strength of AI in improving the recruitment and retention of research participants.

Cohort Enrichment

Another way that AI and ML may help to improve patient recruitment is via cohort enrichment. This happens when the two technologies help to identify the subset of the population to which a clinical trial outcome is best applicable. Broadly, this means the technologies are not designed to demonstrate the effectiveness of treatment options across a whole population of randomized controlled trials (1). Instead, they can improve the efficacy of trial outcomes by identifying patients who are not suitable for a specific investigation because their involvement would undermine the effectiveness of the trial outcomes (6). Relative to this observation, Woo (15) cautions that, even though AI may help to increase the number of participants involved in a clinical trial, the surge does not guarantee an increased likelihood of success. However, the inclusion of unsuitable candidates in a clinical trial is almost certain to reduce the success of its outcomes (3). AI and ML technologies minimize this risk by identifying persons who would undermine the efficacy of the clinical trials and, by doing so, help to enrich the outcomes.

In an ideal clinical trial setting, patient recruitment should be done using genome-based, patient-specific diagnostic tools, where the biomarkers targeted by a drug are present in the patient who would be its ideal recipient (15). Trials that follow this methodology exist, but they represent a small percentage of the investigations reported in research (2). At the same time, they are more expensive than conventional trials, particularly when medical imaging techniques are deployed to identify the right group of participants (10). AI and ML help to bridge this gap by identifying patients with unique biomarkers relevant to a specific trial (4). For example, sophisticated AI and ML techniques under development have the potential to merge specific omics data with electronic medical records (EMR) to identify patients or clinical trial participants with the types of data relevant to a trial (17). This process helps to identify endpoints in clinical research trials that can be measured well enough for improved efficacy and outcomes. Such techniques improve a researcher's ability to identify and characterize specific subpopulations of patients suitable for specific trials through AI- and ML-based methods, such as NLP and Optical Character Recognition (OCR) (15). Applying these techniques in clinical research trials means that the process of reading and compiling evidence relating to clinical trials can be fully automated.

AI and ML techniques also have the potential to harmonize EMR data, which are commonly scattered and available in different formats owing to their large volume and velocity (5). The data-source-agnostic nature of AI helps to overcome these barriers, leading to harmonized EMR datasets for comprehensive analysis (8). This analysis is important in designing tools for clinical trial enrichment and for discovering patients with unique biomarkers for specific clinical research. Other realizable benefits of AI and ML use include pre-clinical compound discovery and improved techniques for highlighting compounds suitable for clinical trial testing (4). Prediction-based AI and ML tools could also help by identifying correlations among patients, biomarkers, and clinical outcome indications. This process can identify candidates with a higher likelihood of success in a clinical trial, as well as those who are unsuitable, before they take part (3). In this regard, AI and ML improve the selection of patient cohorts for clinical research, thereby enriching the pools of patients willing to volunteer in an investigation.
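As a toy illustration of biomarker-based cohort enrichment, the sketch below filters a harmonized patient table to those meeting hypothetical inclusion criteria; the column names and thresholds are invented, and it assumes pandas.

```python
# Toy cohort enrichment over a harmonized EMR-style table: keep only
# patients whose profile matches hypothetical inclusion criteria.
# Column names and thresholds are invented; assumes pandas.
import pandas as pd

patients = pd.DataFrame({
    "patient_id": ["p1", "p2", "p3", "p4"],
    "biomarker_x": [0.9, 0.2, 0.8, 0.95],   # hypothetical target biomarker
    "age": [54, 71, 48, 66],
    "prior_treatment": [False, True, False, False],
})

# Inclusion criteria: strong biomarker signal, 40-70 years old, no prior Tx.
cohort = patients[(patients.biomarker_x >= 0.7)
                  & patients.age.between(40, 70)
                  & ~patients.prior_treatment]
print(cohort.patient_id.tolist())   # ['p1', 'p3', 'p4']
```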

Improving Clinical Trial Design

The design of a clinical trial shapes the flow of information in an investigation and ultimately determines whether reliable data and quality findings are produced. Indeed, as highlighted by Harrer et al. (3), clinical trials contain protocols that stipulate the processes and procedures researchers should follow when designing them. Given that most designs rely on a variety of sources, AI and ML could analyze such data faster and more efficiently than a human can, providing a stronger foundation for more reliable clinical trial designs (7). For example, the technologies could scan information from relevant clinical journals, drug labels, and data emerging from private pharmaceutical companies faster and more effectively, making it easier to calibrate the information best suited to a treatment design.

By using such information, it is easier to understand how different aspects of a proposed trial could influence the cost and eligibility requirements for a specific case or analysis (16). Broadly, these findings help in understanding which aspects of a clinical research design affect the key success factors of a trial. In this regard, AI and ML serve as data-driven guides for developing better clinical trial designs than conventional means allow. They are therefore useful in improving trial design and enabling researchers to make improvements in specific areas of research. At the same time, using AI and ML to improve protocol design makes it possible to develop drugs faster and more cheaply than conventional methods do.

Optimizing Dosing Regimens

Clinical trials aimed at improving the efficacy of drugs could also use AI to optimize dosing regimens. This may happen through improved efficiency in assessing issues relating to treatment and drug administration (9). For example, AI and ML have been used to understand the efficacy of combining different types of drugs by adjusting existing schedules and building a larger body of literature that estimates their efficacy and safety (13). By optimizing dosing requirements, AI and ML also help to minimize the risk of patients suffering adverse events during clinical trials (10). At the same time, they could reduce the incidence of trial delays, which are commonly associated with inadequate dosing schemes (17). Indeed, by analyzing data on how different groups of patients respond to specific treatment plans, dosages can be better tailored to specific demographics to minimize side effects and improve overall efficacy.
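A schematic sketch of dose optimization as described above: searching candidate dose levels for the one that maximizes a modeled response while keeping modeled toxicity below a limit. Both models below are entirely invented stand-ins for functions that would be fitted to real patient data.

```python
# Schematic dose optimization: choose the dose maximizing a modeled response
# subject to a toxicity cap. Both models are invented stand-ins for
# functions fitted to real patient data.
def modeled_response(dose_mg):
    return dose_mg / (dose_mg + 50)      # saturating efficacy curve

def modeled_toxicity(dose_mg):
    return (dose_mg / 200) ** 2          # toxicity grows with dose

TOX_LIMIT = 0.25
candidates = range(10, 201, 10)          # candidate doses in mg

safe = [d for d in candidates if modeled_toxicity(d) <= TOX_LIMIT]
best = max(safe, key=modeled_response)
print(f"Selected dose: {best} mg")       # Selected dose: 100 mg
```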

The role of AI and ML in optimizing dosing regimens has been demonstrated by companies that have used the technique to find the right treatment plans for patients with cancer (15). For example, Zenith Epigenetics used these technologies to find the correct treatment plan for a patient suffering from prostate cancer. Notably, the company used AI algorithms to find the right dosage of the drug ZEN-3694, which, when combined with enzalutamide, could treat the condition (15). The AI platform used was known as CURATE, and the efficacy of the treatment plan was assessed by reviewing the patient's clinical data and comparing tumor size before and after treatment (15). The findings were far clearer with the two technological tools. Cancer biomarkers in the blood were also assessed on the same platform to gauge the efficacy of the treatment plan, and the results were used to find the best treatment regimen for the patient (15).

Overall, this example shows how AI and ML can be used to find individualized treatment options for patients suffering from different types of conditions. Their importance is magnified when determining the best combination of drugs to treat a specific condition. Their use is therefore important in improving the efficacy of existing and new treatment plans, and thus the administration of drugs. Doing so marks a departure from the traditional eyeballing techniques commonly used by physicians worldwide to adjust treatment regimens (15). Despite their commendable track record in enhancing this area of clinical research, some challenges have been associated with the adoption of AI and ML.

Limitations of AI and ML

Implementing AI in clinical settings is one of the foremost challenges associated with its adoption. Given the unstructured nature of doctors' notes, systems often need background information about specific cases involving patients who manifest particular symptoms or have been diagnosed with particular conditions (12). Specialists may describe one condition differently or in multiple ways: some doctors may record a heart attack as a myocardial infarct or a myocardial infarction, depending on their training or institutional environment (7), while in other cases the same condition may simply be written as MI (15). Such discrepancies may limit the ability of ML techniques to generate accurate data to support clinical trials. However, it is possible to address such concerns through improved feedback loops in which machine learning is deployed to train AI to detect and correct the effects of such variations on clinical outcomes (9). This possibility leaves room for further application of AI in clinical trials.
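The variation problem described above is often attacked with terminology normalization: mapping each surface form to one canonical concept before analysis. A minimal sketch with an invented synonym table follows.

```python
# Minimal terminology normalization: map surface forms of a diagnosis to one
# canonical concept before counting or matching. The synonym table is an
# invented illustration; real systems map to vocabularies such as SNOMED CT.
SYNONYMS = {
    "myocardial infarct": "myocardial infarction",
    "myocardial infarction": "myocardial infarction",
    "mi": "myocardial infarction",
    "heart attack": "myocardial infarction",
}

def normalize(term: str) -> str:
    return SYNONYMS.get(term.strip().lower(), term)

for t in ["MI", "myocardial infarct", "heart attack"]:
    print(f"{t!r} -> {normalize(t)!r}")  # all map to the same concept
```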

Additionally, although machine learning has been proposed as one of the most significant ways of improving AI efficiency, it still requires significant investment in data generation (13). This is a challenge for most data analysts and researchers because they need many hours to manually annotate the data used in tests (14). Variations in data management across medical fields and institutions are still too significant to ignore, because they can cause discrepancies in data usage that affect clinical trial outcomes (2). In this regard, there is no universal understanding of clinical trial data.

Another challenge associated with the use of AI involves the use of third-party tools to access patient data. Most developers engage third parties to extract and manage data to improve research outcomes (12). This strategy could cause ethical violations stemming from breaches of confidentiality agreements between patients and their medical service providers (12). The involvement of third-party actors in data mining poses the biggest challenge in this regard because they are foreign players in the management of doctor-patient relationships. Therefore, there is a need to be cognizant of the effects of including other players in the use of AI and ML in clinical trials, especially because different countries and institutions interpret this concern differently.

Conclusion

The findings of this investigation show that AI and ML are useful tools for recruiting the right participants for clinical trials and maintaining their participation throughout the trial lifecycle. They also help to reduce the cost and time taken to complete such trials because they are more effective and less expensive to implement than traditional methods. However, it is important to be mindful of their limitations, because they could cause ethical violations and misinterpretations in data analysis, depending on the institutional policies and environments of various healthcare facilities.

References

Bhatt A. Artificial intelligence in managing clinical trial design and conduct: man and machine still on the learning curve? Perspectives in Clinical Research. 2021; 12(1): 13.

Blease C, Locher C, Leon-Carlyle M, Doraiswamy M. Artificial intelligence and the future of psychiatry: qualitative findings from a global physician survey. Digital Health. 2020; 6(1): 234-244.

Harrer S, Shah P, Antony B, Hu J. Artificial intelligence for clinical trial design. Trends in Pharmacological Sciences. 2019; 40(8): 577-591.

Fonseka TM, Bhat V, Kennedy SH. The utility of artificial intelligence in suicide risk prediction and the management of suicidal behaviors. Australian and New Zealand Journal of Psychiatry. 2019; 53(10): 954-964.

Kerr D, Klonoff DC. Digital diabetes data and artificial intelligence: a time for humility, not hubris. Journal of Diabetes Science and Technology. 2019; 13(1):123-127.

Lai KHA, Ma SK. Sensitivity and specificity of artificial intelligence with Microsoft Azure in detecting pneumothorax in the emergency department: a pilot study. Hong Kong Journal of Emergency Medicine. 2020; 17(2): 151-162.

Mathur P, Srivastava S, Xu X, Mehta JL. Artificial intelligence, machine learning, and cardiovascular disease. Clinical Medicine Insights. 2020; 14(1): 112-119.

Rahman MM, Khatun F, Uzzaman A, Sami SI, Bhuiyan MA, Kiong TS. A comprehensive study of artificial intelligence and machine learning approaches in confronting the Coronavirus (COVID-19) pandemic. International Journal of Health Services. 2021; 51(4): 446-461.

Randhawa GK, Jackson M. The role of artificial intelligence in learning and professional development for healthcare professionals. Healthcare Management Forum. 2020; 33(1): 19-24.

Ranschaert ER, Morozov S, Algra PR. Artificial intelligence in medical imaging: opportunities, applications, and risks. Springer; 2019.

Roth C. [Internet]. Buffalo, NY: Praxis. 2017.

Samuel G, Chubb J, Derrick G. Boundaries between research ethics and ethical research use in artificial intelligence health research. Journal of Empirical Research on Human Research Ethics. 2021; 16(3): 325-337.

Shinners L, Aggar C, Grace S, Smith S. Exploring healthcare professionals understanding and experiences of artificial intelligence technology used in the delivery of healthcare: an integrative review. Health Informatics Journal. 2020; 26(2): 1225-1236.

Wang C, Zhu X, Hong JC, Zheng D. Artificial intelligence in radiotherapy treatment planning: present and future. Technology in Cancer Research and Treatment. 2019; 18(1): 651-667.

Woo M. Trial by artificial intelligence: a combination of big data and machine-learning algorithms could help to accelerate clinical testing. Nature. 2019; 573(26): 100-102.

Wood EA, Ange BL, Miller DD. Are we ready to integrate artificial intelligence literacy into the medical school curriculum: students and faculty survey. Journal of Medical Education and Curricular Development. 2021; 8(1): 424-447.

Zahren C, Harvey S, Weekes L, Bradshaw C, Butala R, Andrews J. Clinical trials site recruitment optimization: guidance from clinical trials: impact and quality. Clinical Trials. 2021; 18(5): 594-605.


Robots in Today's Society: Artificial Intelligence

Introduction

The role of robots and cyber technologies cannot be overestimated: they have become an integral part of technological progress in numerous spheres of everyday life. Robots are used for several key aims, the most important being the automation of repetitive processes, to free human labor and to avoid mistakes and delays. As for the role of cyber technologies in society, we encounter robots every day: we buy coffee and sweets from vending machines, fuel our vehicles at automatic fuel stations, and use ATMs. Robots are used in the medical sphere, where they monitor health conditions, regulate medicine consumption, and perform various analyses. Robots are also used for entertainment: they play music, control lights, play soccer, climb walls, and dance. The issues surrounding artificial intelligence entail such factors as its history, the philosophy of the cyber mind, and ethical considerations.

Overview

The issues of artificial intelligence and cyber life have captured the imagination of humanity since ancient times. People have long aimed to create assistants that would perform hard, dirty, and dangerous work. The very definition of artificial intelligence presupposes assessing the environment and performing the sequence of actions that would maximize the likelihood of success. Thus, an intelligent system should be able to analyze its environment, generate a suitable idea, and carry it out. For these principles to come true in cyber engineering, scientists must solve the key problem of intelligence: the mechanism should be capable of self-development. In this sense, scientists are challenging an enigma of the entire universe.

As for the future of artificial intelligence and the scenarios described in science fiction novels, it should be emphasized that AI in general is neither negative nor positive. Its character depends on the aims and purposes pursued in creating AI and cyber life. Another issue is whether humanity will treat a self-developing machine positively or negatively. On the one hand, negative treatment would make it hostile toward humanity; on the other hand, a machine cannot be treated as the equal of other people. Moreover, such machines are created as servants and assistants able to sacrifice their cyber lives for humans. Consequently, the rules of robotics formulated by Isaac Asimov will be an important principle on which cyber self-development should be based:

  • A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its existence as long as such protection does not conflict with the First or Second Laws. (In Danielson, 2005)

History of Artificial Intelligence

The first traces of artificial intelligence and cyber inventions are found in Ancient Greece, where the first simple automata, with severely restricted capabilities, were created. The idea of an obedient and powerful assistant has always captured the imagination of dreamers: in mythology, gods had such assistants, exact biological copies of humans who were nevertheless endowed with extra capabilities, immense power, and sharp minds. The first real attempts to create AI, however, were not so successful. As emphasized in O'Leary and O'Leary (2008, p. 496):

Mechanical or formal reasoning has been developed by philosophers and mathematicians since antiquity. The study of logic led directly to the invention of the programmable digital electronic computer, based on the work of mathematician Alan Turing and others. Turing's theory of computation suggested that a machine, by shuffling symbols as simple as 0 and 1, could simulate any conceivable act of mathematical deduction. This, along with recent discoveries in neurology, information theory and cybernetics, inspired a small group of researchers to begin to seriously consider the possibility of building an electronic brain.

In light of this statement, the concept of an electronic brain was considered by numerous researchers, and all this research rested on the binary calculation system. Nevertheless, as stated by Lyon (2007), some researchers believe that this system is too primitive to be the basis of AI: the entire logic of finding true or false values is primitive, and higher forms of logical reasoning have not yet been achieved by our civilization.

The most interesting and important period in the history of artificial intelligence is the twentieth century. The development of computer technologies and the appearance of programming languages made the heights of artificial intelligence more reachable. Electronics and cybernetics became effective tools for creating artificial intelligence, and robots became able to perform simple operations, calculations, and analyses of input data. As stated in Brahm and Driscoll (2005, p. 67), in 1952-1962 Arthur Samuel (IBM) created the first game-playing software, for checkers, with sufficient skill to challenge a world champion; Samuel's machine learning programs were responsible for the high performance of the checkers player. This was an essential and very important step forward. Nevertheless, a new challenge appeared: universal artificial intelligence required universal software, with distinctly defined algorithms for analyzing the surrounding information, principles for selecting the required information, and flows for processing, storing, and retrieving it. It should be emphasized that this was only one side of the coin, as computational power and data storage devices were also far from the required characteristics.

Nowadays, artificial intelligence is still developing: robots can recognize voice tones, read interlocutors' facial expressions, gather data according to numerous parameters, collect the required information, store it, and retrieve it when necessary. Nevertheless, self-learning machines are still within the sphere of science fiction, and, as emphasized by Geyer and Van Der Zouwen (2001, p. 156), the further development of AI will require the development of biotechnologies rather than electronics alone.

Philosophy of AI

This part of the paper may be regarded as a continuation of the historical part, as the philosophy of artificial intelligence has been developing for centuries. Philosophy is an inevitable companion to the technical side of development, since the synthesis of philosophical approaches and the technical development of cybernetics could give rise to a universal machine involving all the required aspects of moral, technical, and mental development. Philosophy aims to answer questions such as: what capabilities of the human mind should a machine have, what are the limits of machine intelligence, and what are the essential and unbridgeable differences between human and machine intelligence? Numerous thinkers have tried to find the answers, and the key concepts of machine intelligence are evaluated from the position of human mental development. A key philosophical concept, per Crosson (2007, p. 45), is Turing's polite convention: if a machine behaves as intelligently as a human being, then it is as intelligent as a human being, so a machine's behavior may be judged without settling what happens inside it. Additionally, the Dartmouth proposal claims that every aspect of learning, or any other feature of intelligence, can be described so precisely that a machine can be made to simulate it. Thus, even polite behavior may be taught. In light of this, Searle's strong AI hypothesis should be emphasized:

The appropriately programmed computer with the right inputs and outputs would thereby have a mind in the same sense human beings have minds. Searle counters this assertion with his Chinese room argument, which asks us to look inside the computer and try to find where the mind might be. (Crosson, 2007, p. 187)

In addition to this concept, the artificial brain argument should be emphasized. It holds that the human brain can be simulated: according to Crosson (2007, p. 79), the contents of the brain could be copied directly into hardware storage, so that all of its information, experience, and analytic algorithms would become available to a machine. The behavior of such cyber organisms would then be identical to human behavior: they would be able to learn, study, feel, analyze, interpret, and experience all the emotions that humans do.

On the one hand, all these philosophical concepts are quite plausible (from the perspective of philosophical accounts of human behavior and of attitudes toward artificial intelligence) and internally consistent; on the other hand, they are barely achievable in practice, as the real value of human life and the human mind lies in individuality. Everyone is an individual, and if particular human features were attributed to machines, the result would be cyber clones of humanity. The ethical aspects of these values will be discussed in the following chapter; nevertheless, it should be emphasized that the possibility of creating an artificial intelligence requires deeper and wider development of logical elements, computational power, storage volumes, and data collection equipment. The attribution of human features belongs to a further stage of technological development.

Ethical Issues

The ethics of cloning, described in the previous part, borders on unauthorized access to human memory and manipulation of the human mind, both of which are unethical. In any case, the creation of artificial intelligence will be ethical only if it is used for good rather than for military aims or for harming people. Considering the creation of AI from the perspective of the humanity of machines, it should be stated that regardless of the capabilities and skills of machines and robots, humanity will never regard them as full-fledged neighbors on the planet. Thus, if AI becomes identical to human minds, a war between the two is inevitable.

On the other hand, if robots with AI are created for particular aims, they will be consummate professionals in their spheres, and there will be no place left for humanity on the planet. People will inevitably degrade as a civilization, as they will no longer be required to think, analyze, or evaluate; these tasks will be performed by robots. Another possible course of history is that robots realize that humans are weak creations and that the world can exist without humanity. Thus, an overly self-assured humanity would be destroyed by those who were meant to help it.

Nevertheless, considering the realities of cybernetic science, robots and intelligent mechanisms are created with the sole aim of helping.

Robots in Society: Pros and Cons of Artificial Intelligence

"Then you don't remember a world without robots. To you, a robot is a robot. Gears and metal; electricity and positrons. Mind and iron! Human-made! If necessary, human-destroyed! But you haven't worked with them, so you don't know them. They're a cleaner better breed than we are" (from I, Robot by Isaac Asimov). This extract may serve as a prologue to the discussion of whether robots should have a place in human society. On the one hand, robots are obedient servants that perform the tasks people assign to them; they work in place of humans and carry out tasks that people cannot perform themselves. On the other hand, people aim to expand the variety of tasks performed by robots and try to develop ever more complicated intelligence, so that robots can think and can collect and analyze the information they receive. The summit of AI development would be a self-developing machine; nevertheless, the consequences of such progress are unpredictable. Such a machine may become either a mighty partner of humanity or a mighty enemy that will not tolerate humanity's presence on this planet. Although this moment is still far off, and artificial intelligence has not reached even the basic levels of self-development, developers should already be thinking through the moral and ethical issues of its development.

Conclusion

Finally, it should be emphasized that the role of robots in human society is clear. They are the obedient servants of human civilization and friendly partners of people, as they perform work that may be dangerous or even impossible for humans. Nevertheless, the most important aspect of the development of cyber technologies, artificial intelligence itself, should be thoroughly discussed by developers: the theme of machine revolt has been raised repeatedly in science fiction, where servants often turned out to be mighty enemies aiming to destroy humanity and its entire civilization.

Nevertheless, in the reality of cybernetics, robots are created as assistants that can sacrifice their electronic lives for the sake of human safety. Robots can sustain life by controlling and regulating vital processes, take the place of lost extremities, and perform many similar functions.

References

Brahm, G. & Driscoll, M. (Eds.). (2005). Prosthetic Territories: Politics and Hypertechnologies. Boulder, CO: Westview Press.

Crosson, F. J. & Sayre, K. M. (Eds.). (2007). Philosophy and Cybernetics: Essays Delivered to the Philosophic Institute for Artificial Intelligence at the University of Notre Dame. Notre Dame: University of Notre Dame Press.

Danielson, P. (2005). Artificial Morality: Virtuous Robots for Virtual Games. New York: Routledge.

Geyer, F. & Van Der Zouwen, J. (Eds.). (2001). Sociocybernetics: Complexity, Autopoiesis, and Observation of Social Systems. Westport, CT: Greenwood Press.

Lyon, D. (2007). The Silicon Society. Grand Rapids, MI: Lion.

O'Leary, T. J. & O'Leary, L. I. (2008). Computing Essentials. McGraw-Hill.
