How Artificial Intelligence Is Trained to Analyze Causation

When something unexpected happens, our intuition is to ask questions and try to understand why it happened. If we can determine the reason for an unexpected event, we may be able to prevent such an outcome next time. However, the ways we humans try to understand and reason about things are sometimes superstitious, and we cannot explain what is really going on. Correlation, which can only state that one event happened around the same time as another, cannot provide the reason either. To know in depth what the real reason behind an event is, we need to look closer at causality: at how information flows between two events. It is this information flow that shows there is a causal relation between them, and a theory of general causality is necessary to identify the causes.

Mathematical models have so far been able to work with two causes at most, so the models created for general causality have been very restricted. Now, with an artificial intelligence leap forward, researchers have created the first capable model for general causality that works without time-sequenced data and can identify multiple causal connections. The model is called the Multivariate Additive Noise Model, or MANM for short.

Two researchers, Prof. Tshilidzi Marwala from the University of Johannesburg and Dr. Pramod Kumar Parida from the National Institute of Technology Rourkela, created the new model and tested it on simulations and real-world datasets. The findings are published in the journal Neural Networks.

People can sometimes think there is a connection between two events because of superstition. For example, in some cultures, seeing a black cat is connected with something bad that happened afterward. From an artificial intelligence point of view, however, there is no causal link between the cat and what happened later: the cat was merely seen just before the second event, a coincidence rather than a cause.

A good way to understand how this new program works is to look at a financial example: a household that may be in debt. Such a financial situation can place acute restrictions on the household, eventually becoming a trap from which there is no likely escape. But do the people of that household understand the causal connections behind what happens to them?

One of the researchers, Dr. Pramod Kumar Parida, said: "The reasons for long-term, continuous household debt are a good example of what the new model is capable of doing. At a household level, the questions are: did the household lose some or all of its income? Are some or all members of the household spending more than their income? Has something happened that forces members to spend heavily, for example medical concerns? Have they used up their savings and investments? Are all of these things happening, and if so, which of them most dominantly causes the debt? If enough information about the household's financial transactions is provided, all with date and time information, someone could figure out the causal connections between the household's income, spending, savings, investments, and debt."

Parida continues: "But what are the reasons people in a city or a region are struggling financially? Why are they not capable of getting out of debt? Here it is no longer possible for professionals to figure this out from the available data, particularly if we want the causal connections between income, spending, savings and debt for the city or region, rather than professionals' insights. Existing causality theory fails to answer, because the financial transaction data for the households in the city or region will be incomplete, and date and time information will be missing from the collected data. Financial activity in low-, middle- and high-income households is likely very varied, so we would want to see the different causes emerge from the analysis."

He continues: "With this model, we can recognize multiple essential driving forces behind household debt. In this model, these factors are called the independent parent causal connections, and we can also see which causal connections are more dominant than others. Through a repeated second pass over the data, the minor driving factors are detected; these are called the independent child causal connections. In this way, it is possible to identify a possible pecking order of the causal connections."

Artificial intelligence and the coming health revolution

Artificial intelligence is very promising in health care because of its ability to efficiently and effectively analyze data from apps, smartphones, and other wearable technology. Bots could very well replace doctors, and bots or automated programs are highly likely to play a very important role in finding cures for some of the hardest diseases and conditions. AI is therefore rapidly moving into healthcare systems, led by tech giants and emerging startups.

Cautious optimism

"There is a lot of excitement around these tools," said Lynda Chin, vice chancellor and chief innovation officer at the University of Texas, "but technology alone is unlikely to translate into wide-scale health benefits. Data from these sources differ from medical records and are difficult to access because of privacy and other regulations, and integrating the data into health care delivery is hard where doctors may not know how to use the new systems."

Computers are starting to reason like humans

We humans are generally good at relational reasoning, a kind of thinking that uses logic to connect and compare places, sequences, and other entities. Relational reasoning is an important part of higher thought, and it is something artificial intelligence finds difficult to master. The two main types of AI, statistical and symbolic, have been slow to develop similar capabilities. Statistical AI, or machine learning, is great at pattern recognition but not at using logic. Symbolic AI, on the other hand, can reason about relationships between entities using predetermined rules, but it lacks the ability to learn on the go. However, researchers at Google's DeepMind division have developed a new algorithm to tackle the relational reasoning problem, and it already has an edge over humans on a complicated image perception task. The DeepMind division has also tried its neural network on a language-based task.

The new technique is a promising way to build relational reasoning into an artificial neural network. Like the way neurons connect in the brain, neural nets wire together tiny algorithms that work in cooperation to find patterns in data. They can have specialized architectures for processing images, language, and games; in this case, the network is forced to discover the relationships that exist between entities.
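As a rough illustration of the idea, the sketch below aggregates pairwise "relation" scores over a set of object representations before drawing a conclusion. It is a minimal toy in Python, not DeepMind's implementation; the object vectors and the stand-in functions g and f are hypothetical placeholders for the small learned networks used in the published work.

```python
import numpy as np

def relation_module(objects, g, f):
    """Toy relation-network-style aggregation: score every ordered pair of
    object representations with g, sum the pairwise outputs, then let f
    draw a conclusion from the aggregate."""
    pair_features = [g(a, b)
                     for i, a in enumerate(objects)
                     for j, b in enumerate(objects) if i != j]
    return f(np.sum(pair_features, axis=0))

# Hypothetical usage: three 4-dimensional "object" vectors (e.g. extracted
# from an image), with simple stand-ins for the learned networks.
objects = [np.random.rand(4) for _ in range(3)]
g = lambda a, b: np.concatenate([a, b])   # pairwise "relation" features
f = lambda s: s.mean()                    # crude scalar judgement
print(relation_module(objects, g, f))
```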

Conclusions

In this paper, we wanted to show how the world could change if we could solve this long-standing problem once and for all. We assumed it can be done by AI that is able to distinguish correlation from causation even in the hardest examples. By understanding the basics and extending our knowledge about AI, we could build a reasoning machine able to solve those problems for us with very high efficiency, and we have stated some important requirements such an AI would need to meet.

Furthermore, this could help solve some of the most important problems facing humanity; we provided health, psychological, and statistical examples. The world we live in could become very different from what it is right now, because there is much we do not yet know that could help us build the world around us, for reasons such as overwhelmingly large statistical data, an incomplete understanding of causation, and mistakes in drawing conclusions.

That is what we wanted to accomplish: to find a way to change something so that we, and our descendants, could live better. AI is a very powerful tool that can be used to understand many things, and causation versus correlation is one of the most difficult and broad examples.

Diabetes Risk Prediction Using Machine Learning

Abstract

Changing lifestyles and habits such as lack of proper sleep, lack of exercise, and bad eating habits have led to a rapid increase in the number of people with diabetes, so it is necessary to reduce this risk. The proposed system predicts the risk of a person getting diabetes and classifies it into one of three categories: low, medium, and high. Depending on the risk level, a diet plan or a nearby diabetologist is suggested. The user's risk level is determined from lifestyle parameters, thereby avoiding complex medical jargon. The main advantages of the proposed system are its simplicity, ease of use, and easy access. The system uses random forest, a supervised machine learning algorithm, to classify the person into the appropriate risk category based on inputs that are expert-approved lifestyle parameters. The accuracy of the proposed system using the random forest algorithm is 88%. The system allows users to understand and analyze their lifestyle habits and encourages them to adopt a better, more active lifestyle and good eating habits according to their risk category. In this way, the system contributes to creating a healthier society as a whole.

Introduction

Diabetes is an extensively growing disorder because of unhealthy lifestyles and imbalanced nutrition, so finding a solution for its prevention at early stages and spreading awareness about it has become an absolute necessity. The age range of people affected by diabetes is widening every day, which is why diabetes risk prediction has become the need of the hour. The diabetes risk predictor helps users know their risk level, and by knowing it they can take preventive measures before diabetes actually hits them. The proposed system therefore plays a vital role in keeping people educated and prudent.

A few systems to calculate the risk of diabetes have recently surfaced online. Typically named "Diabetes Risk Calculator", they estimate the risk of a person getting diabetes and also provide trivia about the disease. In most such systems, machine learning algorithms are not applied; the risk is predicted from fixed value ranges for a few parameters, so the accuracy of the calculated risk is questionable and not very reliable. Other systems include technical parameters that the user cannot supply without medical help, which also affects prediction accuracy and makes the systems harder to use, and hence less economical and user-friendly.

Drawing inspiration from these systems, and taking their drawbacks into account, the proposed system calculates the risk using a machine learning algorithm called random forest, which improves accuracy and reliability. Apart from the risk classification, the proposed system also gives diet suggestions and a list of nearby diabetologists based on the user's location. The system therefore combines the strengths of previous systems while improving on them, providing society with an effective way to predict the risk of diabetes.

Training and Testing

The first step in training and testing was to decide on a programming language and a platform. Python was chosen as the language, and Jupyter Notebook running under Anaconda was selected as the platform for training and testing.

The next step is accessing the collected dataset. The Pandas library is used to import and read the data, which is stored in .csv format. Pandas also provides data-cleaning features such as filling, replacing, or imputing null values. pd.read_csv reads the comma-separated values (CSV) file into a DataFrame, and display(data.head()) previews the data.
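A minimal sketch of this step is shown below, assuming a hypothetical file name and a simple fill strategy rather than the authors' exact choices:

```python
import pandas as pd

# Hypothetical file name for the collected lifestyle dataset (CSV format).
data = pd.read_csv("diabetes_lifestyle.csv")

# Preview the first few rows.
print(data.head())

# Basic cleaning: fill null values, here with each column's most frequent value.
data = data.fillna(data.mode().iloc[0])
```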

The next step is encoding the data into labels using a Label Encoder. Since the collected data is in string format, it cannot be processed or transformed without converting the string values to numeric values. The Label Encoder therefore encodes the string data into numeric data, column by column, according to the alphabetical order of the inputs.
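A sketch of this encoding step, assuming the string-valued columns are those with pandas dtype object:

```python
from sklearn.preprocessing import LabelEncoder

# Encode each string-valued column into integer labels. LabelEncoder
# assigns integers in the alphabetical (sorted) order of the values.
encoders = {}
for col in data.select_dtypes(include="object").columns:
    encoders[col] = LabelEncoder()
    data[col] = encoders[col].fit_transform(data[col])
```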

After this, the dataset is split into training and testing data. The training data contains known outputs, and the model learns from it so that it can generalize to other data afterwards. The test dataset is used to evaluate the model's predictions.

The test set should be large enough to give reliable results and to represent the dataset as a whole. The main aim is to produce a model that generalizes and classifies new data well, with the test set acting as a proxy for new data; the model should do about as well on the test data as it does on the training data. The scikit-learn library is used to divide the data via its model_selection module, which provides the train_test_split function. Using this, the dataset is split into training and testing sets in a 70-30 ratio.

The dataset is split into the independent features, X, and the dependent variable, y, which is the risk class. X is then divided into two separate sets, X_train and X_test, and y is likewise divided into y_train and y_test.
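A sketch of the split, assuming the label column is called "risk" (the actual column name in the dataset may differ):

```python
from sklearn.model_selection import train_test_split

# Independent features (X) and dependent risk class (y).
X = data.drop(columns=["risk"])
y = data["risk"]

# 70-30 split into training and testing sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=42)
```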

Performance Comparison

After splitting the data into training and test sets, the collected data was trained and tested using three different algorithms to see which gives the best performance, measured as the highest accuracy. The three algorithms compared are K-Nearest Neighbor (KNN), Support Vector Machine (SVM) and Random Forest:
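A sketch of the comparison, using default scikit-learn settings rather than the authors' exact hyperparameters:

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

models = {
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(),
    "Random Forest": RandomForestClassifier(random_state=42),
}

# Fit each classifier on the training split and compare test-set accuracy.
for name, model in models.items():
    model.fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: {accuracy:.2%}")
```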

Methodology

The proposed system includes a web application through which the user interacts with the machine learning model. The user enters thirteen basic lifestyle parameters, such as gender, age, calorie intake, heredity, smoking, alcohol consumption, mental issues, daily physical activity, sleeping pattern, blood pressure, PCOS and dark skin patches, through a form. The values entered via the form are then passed to the already trained and tested machine learning model, which returns the risk level for the inputted parameter values. The calculated risk is displayed in the web application, and based on it the user can take the necessary steps, such as visiting a nearby diabetologist or working on the parameters that increase the risk of getting diabetes.
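A sketch of how one submitted form could be scored, assuming the random forest fitted in the earlier sketch and purely illustrative column names and encodings that would need to match the training data:

```python
import pandas as pd

# One hypothetical form submission, already label-encoded like the training data.
form_input = {
    "gender": 1, "age": 42, "calorie_intake": 2, "heredity": 1,
    "smoking": 0, "alcohol_consumption": 0, "mental_issues": 0,
    "daily_physical_activity": 1, "sleeping_pattern": 2,
    "blood_pressure": 1, "pcos": 0, "dark_skin_patches": 0,
}
risk_labels = {0: "low", 1: "medium", 2: "high"}   # assumed class encoding

row = pd.DataFrame([form_input])
prediction = models["Random Forest"].predict(row)[0]
print("Predicted risk level:", risk_labels.get(prediction, prediction))
```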

Conclusion

The system developed will be able to predict the risk of a person getting diabetes before its onset, thereby encouraging people to adopt a healthier and more active lifestyle. Its ease of use is another factor that enables people to make full use of it. The system is still in a nascent stage and has a lot of scope for improvement: it can be made more accurate by collecting more data, and it can be extended to predict the risk of type I diabetes as well as gestational diabetes. The diet suggestions can also be customized to each user's individual habits.


Ethical Dilemma Of Artificial Intelligence: Who Takes The Blame When AI Makes An Error

In response to a request by NorthWest Consultants Ltd., I have made recommendations for the use of artificial intelligence at the Peterson Center on Healthcare. AI already has widespread ramifications that have changed the healthcare sector, and the Peterson Center on Healthcare wants to be part of it. Nonetheless, as AI transforms the patient experience and healthcare professionals' routines and workloads, the Peterson Center on Healthcare must address the emerging dilemmas. The major issues identified include interference with patients' private and confidential data during algorithmic data analysis, and a lack of trust and accountability, with the hospital having no one to blame if the AI system makes an error.

Without a doubt, issues of accountability, safety, and transparency remain essential in AI implementation. The solutions proposed in the report include addressing the legal and health implications of AI, including issues such as medical malpractice, product ownership, and the liability that emerges when the health institution uses 'black-box' algorithms.

Another recommendation is to use AI technology as a complementary tool rather than as a replacement for healthcare professionals. Users' technical and professional expertise remains essential in interpreting AI test results and identifying ethical dilemmas.

Artificial intelligence is the science of developing intelligent machines and computer programs with the ability to think and operate as human beings do. According to Juneja (2019), artificial intelligence rests on the philosophical question of whether a machine can acquire the same intelligence as humans. The main functions of AI include developing systems with expertise in everything from behavior and learning to demonstration and valuable advice to users, and AI aids in the implementation of human thinking and machine intelligence. While companies intend to adopt AI, many ethical issues have arisen that must be addressed (Bresnick, 2018). In June 2019, the Peterson Center on Healthcare adopted AI and intends to extend it across all its operations to improve the delivery of healthcare services. Based on previous reports, AI would also enhance other operations such as logistics optimization, fraud detection, and research analysis.

The company intends to follow in the footsteps of giants such as Amazon, Facebook, IBM, and Microsoft in exploring the boundless landscape of AI. However, previous research by the Peterson Center on Healthcare found major ethical issues associated with AI, hence the need to conduct a risk assessment of the emerging technology. The major ethical issues addressed in the report include AI causing unemployment, inequality in wealth distribution, AI's effect on humanity and human behavior, the security of AI technology, and the inability to account for AI's lack of genuine intelligence (Peek et al. 2015). Other issues include the elimination of AI bias, prevention of unintended consequences and malicious use by researchers, robot rights, and ways human beings can remain in control of complex intelligent applications.

A major ethical issue raised concerning AI is the technology replacing human workers because of its greater capability and intelligence. According to Luxton (2014a), machine learning has allowed data scientists and robotics engineers to achieve high levels of autonomous intelligence in many aspects of human life, such as self-driving cars, disease detection, and other forms of data analysis. Automation limits human labor through job automation, eliminating the physical work linked to the industrial age (Luxton, 2014a). Labor is being transformed into cognitive labor that prioritizes strategic work to manage emerging issues in the global world. The areas identified as facing the greatest job losses include surgical, nursing, and radiology services.

The report confirmed the growing buzz around AI development, including a new generation of AI-related tools and services with the power to transform healthcare. The main areas that would affect the Peterson Center on Healthcare include medical specialties, cancer detection, and radiology, with AI essential in image interpretation. Moreover, consumer-facing apps have the potential to offer affordable and easily accessible healthcare services to many people globally (Luxton, 2014b).

According to Peek et al. (2015), smart devices have transformed our homes in terms of security and operation and could aid the health sector through early detection and proper management of disease. However, a major issue raised is trust, with current symptom-checker apps shown to outperform doctors in disease diagnosis. This raises an ethical dilemma about whether such smart apps are currently better than the doctors in our healthcare facilities. According to Rigby (2015), the world has now adopted digital therapeutics, meaning that in the future a symptom-checker app could be upgraded to diagnose and then prescribe another app with the power to treat the symptoms.

The ability of software to prescribe with no human aid would mean massive job losses, and the challenge includes the industry accepting the technology and patients embracing it. The research established that a computer program integrated with AI can diagnose skin cancer more accurately than a certified dermatologist, and can do so faster and more efficiently given a training data set, suggesting less need for labor-intensive medical training. Nevertheless, it remains important to understand the strengths, limitations, and ethical dilemmas associated with AI (Luxton, 2014a). Some believe it will not take long before doctors become obsolete because of AI encompassing machine learning, natural language analysis, and robotics, all of which apply in the field of medicine.

AI has the ability to integrate and learn from huge sets of clinical data, performing diagnosis and supporting the clinical decision-making process, alongside personalized medicine. The example analyzed in the report is the Peterson Center on Healthcare employing an AI-based diagnostic algorithm on mammograms to detect breast cancer; according to Rigby (2019), this could serve as a second opinion on results provided by radiologists. Another area that raises numerous ethical challenges is virtual human avatars that could engage in important conversations directly affecting diagnosis and psychiatric treatment. The employment issue also extends to the physical realm, such as robots, physical support systems, and manipulators helping in the delivery of telemedicine.

The health careers most affected include radiologists and pathologists, because the major AI breakthroughs are in imaging analytics and diagnosis. Using the Peterson healthcare facility as a case study, the hospital has few radiologists, surgeons, primary care providers, and pathologists, which raises the question of whether AI should replace healthcare staff at all. The US has a shortage of physicians, particularly in rural areas, and the situation is even worse in developing countries (Rigby, 2019). Help from AI technology would therefore let physicians meet the demands of high caseloads and manage complex patients, especially in the US, where the aging Baby Boomer population will require more care.

Similarly, AI could help manage burnout among physicians, nurses, and care providers who might otherwise reduce their working hours or retire early. Automating routine tasks that consume providers' time, such as reading CT scans, could free physicians to handle patients' more complex problems. AI could blend human experience and digital automation, with the two working together to enhance healthcare delivery (Luxton, 2014a). AI handles tasks beyond human ability, among them condensing gigabytes of raw data from different sources into a single coded risk score for a patient. Whether the rush to build robots that engage with patients' families will pay off, however, remains to be seen.

AI as a powerful technology has raised major ethical issues around safety and privacy, caused by major weaknesses in AI policies and ethical guidelines for advancing the healthcare field. The report established that the medical community lacks information related to patient safety and protection. A major emerging issue involves added risk to patient privacy and confidentiality, which blurs the boundary between healthcare professionals and the role machines play in patient care (Rigby, 2019). Addressing privacy issues is important in changing the education of future doctors toward a proactive approach in medical practice. Artificial intelligence algorithms require access to huge datasets during training and validation (Bresnick, 2018), and the exchange of huge data sets between different applications exposes many healthcare companies to financial, reputational, and data breaches likely to affect their operations.

Healthcare institutions must guard their data assets by using HIPAA-compliant applications, because of the increased incidence of ransomware and cyber-attacks. According to Rigby (2019), many companies, including the Peterson Center on Healthcare, remain reluctant to share and move patient data freely outside their systems, and storing huge data sets in a single location turns the repository into an attractive target. One solution could be blockchain technology to protect personally identifiable information across the many data sets (Juneja, 2019). Many consumers, however, remain doubtful about the technology, with a 2018 survey by SAS showing that fewer than 40% of patients were confident that their personal healthcare data managed under AI is safe and secure (Rigby, 2019).

A 2013 report on the privacy of patients' information found that about 90% of patients should remain conscious of how healthcare companies handle their data, because those companies have the potential to bungle privacy rights during analytics. Security and privacy remain paramount, so all affected stakeholders need to become familiar with the challenges and opportunities of data sharing to help AI grow and become part of the IT ecosystem. Data scientists require clean, precise metadata and multifaceted data to ensure that AI algorithms identify important red flags and provide meaningful results.

It is assumed that AI is better than humans because it reduces the human medical errors that harm and kill patients. However, there is a possibility that smart machines could cause their own medical errors, leaving a major dilemma about who is accountable. Google has conducted research to establish whether machines can support decision-making in a healthcare setting by predicting future occurrences (Juneja, 2019). When a doctor misdiagnoses a patient, the hospital, the doctor, and the doctor's association remain accountable. With advancing technology, when humans cannot interpret the decision-making process of computers, a trustworthiness barrier emerges. A machine should be free of errors and biases before being trusted, including the gender and race bias that can be baked into an algorithm and must be filtered out.

Without a doubt, the Peterson Center on Healthcare must adopt AI and associated technology, but first it should hold a dialogue on ways to enhance patient-doctor understanding of the role AI plays in the sector. This would help stakeholders develop a realistic comprehension of AI, identify potential pitfalls and solutions, and provide policy recommendations on the benefits of AI. A major strategy is balancing the benefits and challenges associated with AI technology (Luxton, 2014a). The benefits include enhanced efficiency of healthcare, but the risks need to be reduced, especially threats of job losses and of compromised privacy and confidentiality of patient information (Peek et al. 2015). Other issues to address include informed consent from patients and their autonomy.

The institution must remain flexible when adopting AI technology, first using it as a complementary tool and not as a replacement for healthcare professionals. Users' technical and professional expertise remains essential in interpreting AI test results and identifying ethical dilemmas. The company could learn from the use of IBM Watson, an important clinical decision support tool that has helped the health industry understand the limitations and benefits of AI (Luxton, 2014b). Proper informed consent requires transparency, hence the need to ensure the adopted systems provide it to stakeholders.

It is important to address the legal and health implications of AI, including issues such as medical malpractice, product ownership, and the liability that emerges when the health institution uses 'black-box' algorithms, because users cannot offer a logical explanation of how these algorithms arrive at a particular output. Currently, there is also a policy gap in the governance and safeguarding of patient photographic images, including their use in facial recognition technology, which threatens informed consent and data security.

Medical Artificial Intelligence Observation

Abstract

Medical Artificial Intelligence (MAI) regularly uses computer techniques for clinical diagnosis and treatment recommendations. AI has the ability to detect meaningful relationships in a dataset and has been widely used to diagnose, treat, and predict responses in many clinical situations. Our paper focuses on the rule-based system for disease diagnosis as an expert system, an application of MAI. AI methodologies have demonstrated great capability in recognizing meaningful data patterns and have therefore been widely tried as tools for medical studies, in particular to support decision-making about diagnoses and subsequent treatments at every step, as well as prognoses and predictions.

System, Disease Diagnosis

I. Introduction

We can clearly see the great development that has occurred in industry and the tremendous rise in population growth rates, which are directly reflected in the environment: rates of environmental pollution have increased recently, and with them the spread of diseases and epidemics, making diagnosis and treatment harder for doctors. As the number of patients increases, it becomes difficult for doctors to work accurately and quickly on diagnosis and treatment, especially in severe cases and epidemics. This is due to several reasons, the most important of which are fatigue, lack of experience, and panic in the face of dangerous cases. Hence the need for artificial intelligence in the medical field.

Artificial Intelligence (AI) means making computers perform tasks by adding simulations of human intelligence to their systems so that they become capable of doing some human work. Kok et al. (2002) define it in terms of systems that think like humans, systems that act like humans, systems that think rationally, and systems that act rationally. In the medical field, AI is used to analyze complex medical data; when coded correctly, it can be applied in virtually every medical area thanks to its ability to find meaningful relationships within a dataset, its handling of huge amounts of information, its low error rate relative to humans, and its remarkable accuracy and speed. It is used in many medical fields such as drug development, disease diagnosis, health monitoring, digital consultation, managing medical data, analysis of health plans, personalized treatment, surgical treatment and medical treatment (Naveen et al, 2019; Amisha et al, 2019). In this paper we focus on using AI in disease diagnosis.

There are several applications of medical artificial intelligence, such as expert systems, machine learning, data mining, and image processing. Medical artificial intelligence reveals the great potential of applying AI techniques to real clinical practice and empirical medical informatics, in particular in the following areas:

  1. AI Techniques in Medicine
  2. Medical Expert Systems
  3. Data Mining in Medicine
  4. Machine Learning-Based Medical Systems

This paper summarizes some applications of medical expert systems from the last several years, with special emphasis on the rule-based expert system. An Expert System (ES) is an intelligent computer program that uses knowledge and inference to solve a problem complex enough to require considerable human expertise. An ES consists of three main components, the knowledge base, the inference engine, and the user interface, and has the ability to mimic, judge, and justify on the basis of the rules given to it (Durkin, 1994; Gath & Kulkarni, 2012). In this paper, we focus on the rule-based expert system as a type of expert system in the medical field.

The rule-based system is defined by Gath & Kulkarni (2012) as a way of encoding a human expert's knowledge in a fairly narrow area into an automated system. In the rest of the paper, we discuss the issue addressed by this paper in part II, the literature review of medical expert systems in part III, the methodology of the rule-based system in part IV, the results of using the rule-based system in disease diagnosis in part V, and finally the conclusion in part VI.

II. Issue

The shortage of medical professionals in most developing countries has raised the mortality rate of patients suffering from various diseases. The lack of medical specialists may never be overcome in a short span of time. Higher education institutions should take urgent steps to produce more doctors able to deal with the increasing numbers of patients, many of whom die while waiting for students to become doctors and doctors to become specialists. Current high-risk diseases, such as the Covid-19 virus, require patients to seek consultation for diagnosis and treatment from a specialist. Sometimes medical doctors do not have sufficient expertise to deal with complex or new diseases, so we need computer technology that can help reduce the number of errors in disease diagnosis and thereby reduce the number of deaths. Computer programs developed by emulating human intelligence can help doctors make decisions without directly consulting the experts. Such software is not designed to replace experts or doctors; it is designed to assist general physicians and experts in the diagnosis and prediction of patient conditions from certain rules or experience. In this paper, we suggest using the rule-based expert system as an application of artificial intelligence in the diagnosis of human diseases.

III. Literature Review

Expert systems are widely used in almost all fields of human expertise, helping users make decisions wherever human expertise and multifaceted decision-making are required, such as medical diagnosis, monitoring, financial decision-making, planning and policy-making, strategic assessment, and analysis (Gath & Kulkarni, 2012).

Medical artificial intelligence is concerned with developing AI programs that perform diagnosis and offer recommendations for treatment. Medical Expert Systems (MESs) are used mainly in clinical laboratories and educational environments, for clinical observation, or in data-rich areas such as intensive care. What is now being realized is that intelligent programs can offer significant benefits if they are supplied with the appropriate rules (Shortliffe, 1993). Medical expert systems have been in full swing since the early 1970s, when MYCIN was developed at Stanford University to assist physicians in the diagnosis and treatment of patients with infectious blood diseases caused by bacteremia (bacteria in the blood) and meningitis (a bacterial disease causing inflammation of the membranes covering the brain and spinal cord).

These illnesses can be lethal if they are not quickly diagnosed and treated. The system was developed in the mid-1970s, and it took about 20 years to be completed. MYCIN is a Rule-Based Expert System (RBES) that uses backward chaining and comprises around 500 rules. The system was written in Interlisp, an environment built to support the programming language Lisp (Liebowitz, 1997). There are many other medical expert systems. PUFF, an RBES in operation since 1979, diagnoses the existence and extent of pulmonary disease and provides reports for the patient's records without direct contact with a physician. ANGY, an RBES for automated coronary vessel segmentation from remote subtracted angiograms, helps physicians diagnose coronary vessel narrowing by recognizing and isolating coronary vessels in angiograms (Giarratano et al., 2005).
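To make the idea of backward chaining concrete, here is a minimal sketch in Python; the rules, facts, and goal names are invented placeholders, and real systems such as MYCIN also attach certainty factors to each rule:

```python
# Backward chaining: start from a goal and work backwards through rules
# whose conclusion matches it, until known facts are reached.
rules = [
    # (conclusion, conditions that must all hold)
    ("bacteremia", ["positive_blood_culture", "fever"]),
    ("fever", ["temperature_above_38"]),
]
facts = {"positive_blood_culture", "temperature_above_38"}

def prove(goal):
    """Return True if the goal is a known fact or can be derived
    by recursively proving the conditions of some matching rule."""
    if goal in facts:
        return True
    return any(all(prove(condition) for condition in conditions)
               for conclusion, conditions in rules
               if conclusion == goal)

print(prove("bacteremia"))  # True: both premises can be established
```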

A collection of rules can be used to capture the domain knowledge of a human expert, which is then used to replicate the expert's problem solving in that domain. RBESs combine artificial intelligence (AI) techniques such as knowledge-based systems (KBSs) with traditional techniques such as database management systems (DBMSs) (Russell & Norvig, 2002). DBMSs are used in medical expert systems to store, retrieve, and generally manipulate patient data, while ESs are mainly used to conduct diagnoses based on patient data, because they can naturally reflect the way experts think and provide a solution to the problem at hand (Mahesh, 2009). RBESs are used to diagnose diseases such as malaria, typhoid fever, cholera, breast cancer, tuberculosis, and others (Adewole et al., 2015).

IV. Methodology

A rule-based expert system is an expert system whose knowledge is generally represented as a set of rules and facts, used to recognize specific patterns in the data that are collected and evaluated against those rules. If a rule's conditions are logically satisfied, the pattern is identified, and after the deduction process a problem associated with that pattern is suggested; each particular problem might require a specific treatment.

The rule-based approach uses IF-THEN type rules: if it is living, then it is mortal. A typical rule-based system includes the following basic components (Adewole et al., 2015; Grosan & Abraham 2011); a minimal code sketch of such a system follows the component list below:

1. A rule base, a specific type of knowledge base that contains the rules necessary for the completion of the system's task.

2. An inference engine, which matches rules to data to derive conclusions, inferring information or taking action based on the interaction of the input and the rule base. The interpreter executes a production system program using the following recognize-act cycle, which consists of four stages:

  • Match the condition patterns in the rules against the elements in the working memory to identify the set of satisfied rules.
  • If there is more than one rule that can be executed, then use a Conflict Resolution strategy to choose which one to apply. If no rules are applicable, then stop or terminate.
  • Apply the chosen rule, which may result in modifying the working memory by adding new items, or in deleting old ones.
  • Check whether the terminating condition is met. If it is, then stop; otherwise, return to the beginning. The termination condition can be defined either by a goal state or by some kind of time limit (for example, a maximum number of cycles).

3. A user interface, which enables the user to query the system, input information, and receive advice.

4. A database, consisting of the predicate calculus facts in the knowledge base that match the IF sections of the rules.

5. An explanation subsystem, which analyzes and describes the system's logical process to the user, giving the user the ability to ask how a decision was reached or what evidence was used.

6. A knowledge engineer, normally an AI-trained computer scientist who works with an application expert to represent the expert's applicable knowledge in a way that can be incorporated into the knowledge base.

7. A knowledge acquisition subsystem, which thoroughly checks and updates the knowledge base for potential contradictions and missing details.
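As an illustration of how these components fit together, the sketch below runs a toy recognize-act (forward-chaining) loop over a small working memory; the rules, symptoms, and conclusions are invented placeholders, not a clinically validated knowledge base:

```python
# Toy forward-chaining diagnostic loop (recognize-act cycle).
rules = [
    # (IF-conditions, THEN-conclusion)
    ({"fever", "cough", "loss_of_smell"}, "suspect_covid19"),
    ({"fever", "chills", "recent_travel"}, "suspect_malaria"),
]
working_memory = {"fever", "cough", "loss_of_smell"}

fired = True
while fired:                                   # keep cycling while rules fire
    fired = False
    for conditions, conclusion in rules:
        # Match: all IF-conditions present and the conclusion not yet asserted.
        if conditions <= working_memory and conclusion not in working_memory:
            working_memory.add(conclusion)     # act: assert the new fact
            fired = True

print(working_memory)                          # now includes 'suspect_covid19'
```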

We can replace the manual method of diagnosing diseases with an expert system that is able to overcome the limitations of the manual method (Djam et al., 2011).

V. Result

The rule-based expert system, as an application of artificial intelligence, allows organizations to preserve the information that helps employees arrive at the most accurate disease diagnosis. It minimizes errors, because massive, repetitive, or vital tasks are automated. It reduces the time needed to check the system and analyze the data, and it reduces costs by speeding up the detection of faults. It also helps eliminate work that people should not waste time on, such as tasks that are complex, time-consuming, or prone to error, and jobs where training needs are high or expensive. It facilitates the decision-making process, provides knowledge collection, method analysis, data analysis, and system verification, and improves visibility into the status of the managed system.

So instead of manual methods, which can lead to inaccurate diagnoses caused by physicians' exhaustion, by panic in serious situations, or by new physicians who do not yet have enough experience, and which in times of epidemic may increase the spread of disease among medical staff as pressure rises with the growing number of affected patients, we can use the rule-based expert system to help diagnose disease accurately and to save important information that assists physicians at any time.

VI. Conclusion

The rule-based expert system, as an application of artificial intelligence, is an adequate methodology for medical domains and tasks for the following reasons: cognitive adequacy, clear representation of experience, automatic acquisition of subjective knowledge, and system integration. The rule-based expert system provides an important technology for creating an insightful diagnostic decision support system that can greatly help improve physician decision-making, and it can be developed and tested to overcome the various challenges of the conventional disease diagnostic process.

References

  1. Adewole, K. S., Hambali M. A., and Jimoh M. K. (2015), “RULE-BASED EXPERT SYSTEM FOR DISEASE DIAGNOSIS”, Proceedings of the International Conference on Science, Technology, Education, Arts, Management and Social Sciences, Ilorin.
  2. Amisha, Malik, P., Pathania, M. and Rathaur, V. K. (2019), “Overview of artificial intelligence in medicine”, Journal of Family Medicine and Primary Care, Wolters Kluwer ‑ Medknow.
  3. Djam, X. Y., Wajiga, G. M., Kimbi, Y. H., and Blamah, N. V. (2011), “A Fuzzy Expert System for the Management of Malaria”, International Journal of Pure Apply Science Technology, 5(2), 84-108.
  4. Durkin, J. (1994), “Expert System: Design and Development”, Macmillan Publishing Company Inc, New York.
  5. Gath, S. J. and Kulkarni, R. V. (2012), “A Review: Expert System for Diagnosis of Myocardial Infarction”, International Journal of Computer Science and Information Technologies (IJCSIT), 3 (6).
  6. Giarratano, J. and Riley, G. (2005), “Expert Systems: Principles and Programming, Fourth Edition”, Canada.
  7. Grosan, C. and Abraham, A. (2011), “Intelligent Systems: A Modern Approach”, Springer Intelligent Systems Reference Library, Vol 17, 154-176.
  8. Kok, J. N., Boers, E. J. W., Kosters, W. A., Putten, P. V. D. and Poel, M. (2002), “ARTIFICIAL INTELLIGENCE: DEFINITION, TRENDS, TECHNIQUES, AND CASES”, Knowledge for Sustainable Development: An Insight into the Encyclopedia of Life Support Systems, UNESCO Publishing.
  9. Liebowitz, J. (1997), “The Handbook of APPLIED EXPERT SYSTEMS”, CRC Press, 66-67.
  10. Mahesh, V., Kandaswamy, A., and Venkatesa, R. (2009), “Telecardiology for Rural Health Care”, International Journal of Recent Trends in Engineering, 2(3).
  11. Naveen, G., Naidu, M. A., Rao, B. T. and Radha, K. (2019), “A Comparative Study on Artificial Intelligence and Expert Systems”, International Research Journal of Engineering and Technology, Vol 6.
  12. Russell, S. and Norvig, P. (2002), “Artificial Intelligence: A Modern Approach, Second Edition”, Prentice Hall.
  13. Shortliffe, E.H. (1993), “The adolescence of AI in medicine: will the field come of age in the ’90s?” Artif Intell Med, 5(2), 93-106.
  14. Tomar, P. P. S., Singh, R., and Saxena, P. K. (2012), “MULTIMEDIA BASED DECISION SUPPORT SYSTEM FOR MEDICAL DIAGNOSIS OF OCCUPATIONAL CHRONIC LUNG DISEASES”, Narosa Publishing House, India.

Artificial Intelligence Technology and Existing Copyright Law

Developments in artificial intelligence (AI) algorithms have made the output of their work indistinguishable from human creations. In light of these technological advances, it has become increasingly difficult to determine who should receive copyright privileges for artwork generated by AI. This paper attempts to educate the reader on current AI technology and existing copyright law for art. It also critically discusses why AI-generated art should be seen as 'original', substantiates the claim that AI is a tool rather than an author, and asserts that the author of AI-generated work is in fact the individual with the most control over the process. Lastly, it looks to international legislation to inform how the CDPA should be revised to accommodate the rapidly changing field of AI.

With particular reference to conceptual artworks and art installations, the extent to which particular types of contemporary artworks can be protected from unauthorized copying under English copyright law is for the most part hazy.

For the most part, the blame for this can be placed on the United Kingdom's Copyright, Designs and Patents Act 1988, because, unlike its French counterpart, it does not award general protection for artistic works. Instead, the Act states that in order to be protected, 'artistic works' must fall within certain categories, including 'graphic work', 'painting', 'sculpture' and 'photography', or be works of 'artistic craftsmanship'. Moreover, artistic works are required to be 'original', meaning that the work originates from the artist in question; however, they are explicitly not to be judged by their 'artistic quality' (works of artistic craftsmanship being an exception).

Under English law, works of 'sculpture' and 'artistic craftsmanship' are the only categories able to be awarded protection for artworks that involve, or are based on, three-dimensional objects. Even so, the question remains: how far can the boundaries stretch while staying within the confines of the law? Would innovative and controversial artworks like Carl Andre's floor-based, rectangular arrangement of 120 firebricks, 'Equivalent VIII' (1966), or Tracey Emin's 'My Bed' (1998), consisting of the artist's unmade bed, be awarded protection by the English courts?

Through the well-known case of Lucas v Ainsworth (2008) EWHC 1878 (Ch), we can extrapolate the method by which courts approach this issue. The case may also shed light on important developments in modern and contemporary art and offer insight into whether My Bed would be worthy of copyright.

The case of Lucas v Ainsworth concerned the copying and production of various props used in the widely acclaimed film 'Star Wars'. Our particular focus is the helmets of the 'Stormtroopers', which were extremely popular and memorable, largely thanks to the white plastic armor seen in the film. The claimant was George Lucas, the film's director and the owner of the intellectual property rights associated with the film, acting through the production company he controls, Lucas Films. The defendant, Mr. Ainsworth, was a prop designer who had worked on the film, assisting with the design and production of the Stormtrooper costumes. After working on the film, Mr. Ainsworth opened a business selling, over the internet, replicas of the costumes he had helped design and produce. To stop this, Lucas Films sought a court order and damages in the United States.

The judge, Mr. Justice Mann, was given the responsibility of deciding whether the Stormtrooper helmets and toy models were able to be protected under English copyright law as works of 'sculpture' or 'artistic craftsmanship', in addition to dealing with auxiliary issues such as passing off. Part of the process was to examine the boundaries and relationship between the laws of copyright and design. Under section 51 of the Act, making "anything other than an artistic work or a typeface", for example a mass-produced or industrial product, from a design document or model does not infringe copyright.

What is the subject of copyright protection? We can attempt to extrapolate a definition of sculpture from the New Encyclopaedia Britannica, the definition relied on by Mann J. in Lucas. Two parts of that definition are helpful in this discussion: that "the scope of the term is much wider in the second half of the 20th century than it was two or three decades ago, and in the present fluid state of the arts, nobody can predict what its future extensions are likely to be", and that sculpture is not a "fixed term that applies to a permanently circumscribed category of objects or set of activities". The term 'sculpture' can therefore encompass "non-representational" three-dimensional works of art, and can no longer simply be "identified with any special materials or techniques".

When looking into what constitutes 'sculpture' under section 4(1) of the Act, we find no single definition; instead, the section states that 'sculpture' includes casts or models made for purposes of sculpture. The case law offers a mixed bag when attempting to establish the relationship between 'sculpture' and utilitarian artefacts in earlier English copyright decisions. For example, in Breville Europe Plc v Thorn EMI Domestic Appliances Ltd (1995) FSR 77, Falconer J. held that plastic shapes in a sandwich toaster could be protected as 'sculpture'. By contrast, in Metix UK v G.H. Maughan (1997) FSR, which concerned molds for functional cartridges, Laddie J. held that the maker "must be concerned with shape and appearance rather than just with achieving a precise functional effect".

Additionally, in this case the judge stated that "the process of fabrication is relevant, but not determinative". It should be kept in mind that in the art world the definitions of 'sculpture', and of 'art' itself, tend to be predominantly subjective. This matters because the judge makes us aware that before beginning a legal analysis of the terms 'art' and 'sculpture', one must start by establishing artistic purpose. From the perspective of contemporary art, then, we must make use of the approach of Mann J.

Furthermore, for the purposes of copyright, the judge set out a list of factors to be used when attempting to establish whether an object is indeed a work of sculpture: "having regard to the normal use of the word 'sculpture'"; "paying attention to what one would normally expect to find in art galleries described as 'sculpture'"; "the creation of a three-dimensional object of 'visual appeal'"; and "the artistic purpose of the creator". The most significant of these is the last criterion. An example can be found when we look at Carl Andre's Equivalent VIII (1966). The judge weighed heavily that a "pile of bricks, temporarily on display at the Tate Modern for 2 weeks, is plainly capable of being a sculpture", while "the identical pile of bricks dumped at the end of my driveway for 2 weeks preparatory to a building project is plainly not". He ended by giving his reason for this: "one is created by the hand of an artist, for artistic purposes, and the other is created by a builder, for building purposes".

Thus, under English copyright law, an increasing range of three-dimensional works of art, from found objects to video installations, will almost certainly be awarded protection as 'works', whatever their medium. This is a result of the approach set out by Mann J., since such works' 'artistic purpose' can be clearly extrapolated from their institutional context. Not only is this good for the inclusion of My Bed; the approach may also accommodate works that are plainly not fabricated by the artist. For example, Marcel Duchamp's famous readymade consisting of an inverted urinal signed by the artist, 'Fountain' (1917), would rely on relaxing the requirement that the artwork be fabricated by the artist.

Moreover, it is interesting that the judge held that the original Stormtrooper helmet itself, the very object from which Mr. Ainsworth copied his replicas, was not to be considered a sculpture but rather a "mixture of costume and prop". Because the helmet's 'primary function' is 'utilitarian' and it lacks the necessary quality of 'artistic purpose', it could not be a sculpture. Mann J. also held that the toy Stormtroopers copied by Mr. Ainsworth are not sculptures, since their primary, if not sole, purpose is to be played with, normally by children. Referring to the New Zealand authority of Bonz Group v Cooke (1994), the judge noted that a craftsman is someone who "makes something in a skillful way and takes justified pride in their workmanship", while an "artist is a person with creative ability who produces something which has aesthetic appeal". It therefore comes as no surprise that the judge did not consider the helmet and toys to be works of 'artistic craftsmanship' either: the copied toys and helmets had not been created with aesthetic appeal in mind as a purpose.

Traditionally, copyright law has had low-level requirements of originality, so it may still be questioned whether the judge’s analysis of ‘artistic purpose’ is fully in alignment with those requirements, especially the statutory direction that artistic works be protected ‘irrespective of artistic quality’. Still, the practical consequences of the decision appear benign, as the world of contemporary art has broadly benefitted from contemporary works such as Tracy’s Bed being awarded recognition and protection under English copyright law.

Essay on Artificial Intelligence and Copyright

The concept of artificial intelligence (AI), where an object is capable of human-like thinking, has been around for centuries: classical philosophers attempted to describe human thinking as a mechanical manipulation of symbols and numbers. In the present era, AI can be understood as a computer system that can perform tasks that normally require human intelligence. Today’s AI software is capable of producing works that computers could never create before, such as music in various genres, poems, and even news stories. These works need protection under the law, and hence the question arises whether copyright should be granted to them. The real issues to be considered are whether an AI can be granted authorship or ownership of copyright, and whether a non-human AI can infringe the copyrights of other creators. Below I break down these and other issues to better understand the application of present copyright law and the difficulties of implementing new copyright laws amid today’s rapid technological growth.

Copyright Under Traditional Law

The idea of copyright has been in existence since the 18th century, when the first legislation, the Copyright Act of 1710, was passed by the Parliament of Great Britain. Ever since, copyright law has developed in step with technological changes and scientific advances such as photography, film, sound recording and broadcasting. Traditionally, authorship was recognized where a work was produced with skill, judgment and labor, a test that has not stood the test of time. The Copyright, Designs and Patents Act 1988 (CDPA) was enacted as an effort to keep abreast of technological developments. Section 1(1)(a) of the Act lists the works in which copyright can subsist as “original literary, dramatic, musical or artistic works”. Over time, the courts have attempted to define the word ‘original’. Originality, in this context, does not follow the dictionary meaning. Peterson J. opined that “the originality which is required relates to the expression of thought”. The Court of Justice of the European Union (CJEU), in its landmark Infopaq decision, stated that copyright only applies to original works, and that originality must reflect the “author’s own intellectual creation”. The courts have stressed that originality in the work should reflect the personality of the author. Another essential element for a work to be copyrighted is that it be a creation of the mind involving human effort. In general, originality in copyright operates at two levels: one for general artistic and literary works, and the other for computer programs and databases.

Computer-Generated Works

Computer programs and databases were brought within the WIPO Copyright Treaty, concluded under the Berne Convention in 1996. The treaty made sure to include computer programs under copyright protection, as they were essential to the growth of software and technology. In the same way, Parliament found it necessary to address computer-generated works in the statute. The CDPA therefore defines computer-generated works in Section 178 as, “in relation to a work, means that the work is generated by computer in circumstances such that there is no human author of the work”. Section 9(3) of the CDPA provides that “in the case of a literary, dramatic, musical or artistic work which is computer-generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken”. These are literary, dramatic, musical or artistic works that have no human author. For example, in weather forecasting there is often little human skill or input required: the forecast is generated by a computer in direct communication with a weather satellite, and in no way does the operator influence the form or content of the output. In ‘Nova Productions Ltd v Mazooma Games Ltd & Ors’ (2006), it was held that the frames appearing on screen while a computer game is played were computer-generated artistic works. The author of these frames was the person who had devised the rules and logic used to create them; the player of the game was not the author, not having contributed any artistic skill or labor. However, there are many software applications that facilitate the production of literary or artistic works, for example the creation of music or films. Some of these applications are open source and reach the end user for free. Where the end user uses such software to create an artistic work, for example a pop song, the author of the copyright will be the end user and not the creator of the software: the end user has simply used the software as a tool. Having said this, the question arises whether an AI can be an author of copyright.

AI as the Author of Copyright

Computer-generated works are different from AI works, which create their own rules and logic through machine learning. In order to understand whether an AI can become an author of copyright, the basic issue of whether there can be a non-human author must be resolved. In ‘Naruto’, where a monkey took several pictures with the camera of David Slater, the question arose whether a monkey could own the copyright in those pictures. The court held that the statutory provisions do not allow a non-human to be the author of a copyright. It was reiterated in the DABUS patent case that AI software itself cannot be the inventor of the work created by the AI. Under the present statutory provisions, only a human can be an inventor for patent purposes or an author for copyright purposes. However, AI technology is developing rapidly, which may give rise to a novel perception of AI that in turn changes the current legal position. The current legal position is that if there is no human involvement, then there is no copyright. If an AI creates a work entirely on its own and copyright cannot be granted to anyone, the work will ultimately end up in the public domain. This is bad for society, because the absence of copyright protection would allow anyone to use and distribute the work, which in turn would affect the growth of AI technology. While AI on its own has no interest in owning the work it creates, AI producers and end users want copyright protection for the fruits of their AI-created work. Just as an author’s motivation to create would be diminished if he or she knew anyone could use and exploit the completed artwork, writing or song, the motivation of AI producers and end users would be diminished if their AI’s work simply entered the public domain.

AI and Open License Products

Any work, whether literary, artistic or even educational, is automatically copyrighted; copyright does not need to be registered. Copyright vests in the author from the moment the expression of the idea is put into a tangible form. For example, when an author writes a book, it is copyrighted even though it has not been registered, published or circulated. When the author licenses it to a particular person, that person may use the book within the terms set out in the license agreement. An open license, by contrast, is one under which the same product is made freely available to the wider public. In the present era of the Internet, it is very common to come across open-source work: the products or tools are made free to the public, and users are only required to give proper attribution to the author of the copyright. AI software may itself be copyrighted, with the author being the person who contributed the code of the AI. However, the AI software is different from the work created by the AI, because the product given by the AI is entirely different from the code which governs it. As has already been established, the author of the code and algorithms is the owner of the copyright in the AI as well as in the works produced by the AI. In a slightly different scenario, if the AI has been licensed to a user and the user comes up with his own idea to create a literary work, then the owner of the copyright in the product produced by the AI will be the end user. Here the AI can be understood merely as a tool helping to create the work, like a pen and paper in writing a novel. Again, the concepts of originality, skill and labor come into play: if the end user cannot show the expression of an original idea, the copyright will belong to the creator of the AI.

Coming back to the open license, there are two scenarios to consider: the AI using copyrighted and open-sourced works to analyze and compute its final result, and the AI itself being open-licensed for public use. Firstly, when the AI is fed with a plethora of copyrighted products to analyze, does the AI infringe those copyrights? This issue can be answered by an analogy with the human mind. When the AI is fed with works to analyze, there is no copyright infringement unless the output given by the AI infringes the original works. Giving the AI access to copyrighted works can be compared to a person thinking of such a work, for instance a song: the person may sing the song in his mind as long as he wishes and there is no copyright infringement. Likewise, the AI does not infringe copyright unless its output resembles the original source. The second scenario is when AI tools and interfaces are made available to the public under various open licenses. For example, Microsoft and Google make many of their AI systems available under open licenses. Here, the owner of the copyright has permitted the end user to use the product without any repercussions, so the question of infringement does not arise. Businesses are free to license AI tools like any other software, on whatever conditions suit their business. In the Google and Microsoft examples above, the AI is given to the public at large and, in return, the companies receive data inputs from all over the world and may use this extensive usage for advertising. Thus, as of now, there have been no legal issues regarding open-licensing AI tools and interfaces. However, it is necessary to look into the possibility that, in the near future, AI itself may infringe copyrights by using open licenses, or may act against its own code and transform into a sentient being.

AI and Sentient Beings

Generally, when talking about AI, the first references that come to mind are from sci-fi movies such as ‘Terminator’. The general idea of AI that has been fed to the public is that of a robot that can make its own choices like humans. AI differs from a sentient being on just one point. AI software is programmed with source code; the AI system is then given a huge amount of data and runs through it to produce a result by either supervised or unsupervised machine learning. The user of the AI may or may not have a particular output in mind, but the AI never goes against its source code: it does whatever the source code commands. Sentient beings, by contrast, are a currently fictional concept in which the being can choose whether to obey or disobey its source code; in simple words, the AI is given free will. This is a concept we have so far only seen in movies, but we may yet see AI develop towards sentience, as huge funds are being invested in research and development. The question of whether such sentient beings can be granted copyright awaits us in the near future.

Conclusion

To sum up, artificial intelligence is making its way into day-to-day human life and growing at an unstoppable pace. The introduction of AI into cultural and artistic work enables machines to learn the nuances and interpretations of art in a human way, but it also teaches us humans what machines and technology are capable of. It is necessary to understand that the law is not static; it is dynamic and responds to the workings of the human environment. As we have developed through the ages, so has the law, and copyright law is no exception. Until now, traditional copyright law has stood the test of time, and it will continue to do so. The concept of AI being granted copyright may be difficult for us to grasp, as there are no precedents. However, it is high time to understand that changes or reform of the law are essential for the smooth functioning of society. Although the present law dictates that non-humans cannot be granted copyright, AI tools and interfaces will soon need a sui generis law for artistic and literary works created without any human intervention. Law, however, is only put in place after the pieces are set in motion: only after recognizing that AI can make works will the law be made. It is nonetheless essential to know and be prepared for what is about to come. Hence, this issue should be discussed with people experienced in the relevant fields so that a solution can be found.

Artificial Intelligence Creativity in the Context of Copyright

“At all events my own essays and dissertations about love and its endless pain and perpetual pleasure will be known and understood by all of you who read this and talk or sing or chant about it to your worried friends or nervous enemies. Love is the question and the subject of this essay. We will commence with a question: does steak love lettuce? This question is implacably hard and inevitably difficult to answer. Here is a question: does an electron love a proton, or does it love a neutron? Here is a question: does a man love a woman or, to be specific and to be precise, does Bill love Diane? The interesting and critical response to this question is: no! He is obsessed and infatuated with her. He is loony and crazy about her. That is not the love of steak and lettuce, of electron and proton and neutron. This dissertation will show that the love of a man and a woman is not the love of steak and lettuce. Love is interesting to me and fascinating to you but it is painful to Bill and Diane. That is love!”.

The above is an excerpt from the book ‘The Policeman’s Beard Was Half Constructed’, which was touted as ‘the first book ever entirely written by an artificial intelligence (AI)’. The book was published in 1984 and its author is an AI computer program called ‘Racter’, a shortening of Raconteur. William Chamberlain and Thomas Etter are the programmers who wrote Racter’s program. Racter was fed vocabulary and English grammar rules, and is capable of creating poems and short prose randomly and independently. The book created by Racter is not pre-programmed; Racter creates it on its own, based on the vast array of vocabulary in its files. The words in Racter’s files are classified according to what Chamberlain refers to as a ‘syntax directive’, which tells Racter how to use the words to construct a sentence.
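
By way of illustration only, the general flavour of such a ‘syntax directive’ can be sketched in a few lines of Python; the vocabulary, templates and function names below are hypothetical and are not taken from Racter’s actual, unpublished program.

```python
import random
import re

# Toy word classes in the spirit of Racter's classified vocabulary (hypothetical).
vocabulary = {
    "noun":      ["love", "steak", "lettuce", "electron", "proton"],
    "adjective": ["implacable", "perpetual", "nervous", "endless"],
    "verb":      ["loves", "questions", "fascinates", "obsesses"],
}

# Each "directive" is a template whose slots name a word class.
directives = [
    "The {adjective} {noun} {verb} the {noun}.",
    "Does the {noun} {verb} the {adjective} {noun}?",
]

def generate_sentence():
    """Pick a directive and fill every slot with an independently chosen word of that class."""
    directive = random.choice(directives)
    return re.sub(r"\{(\w+)\}",
                  lambda m: random.choice(vocabulary[m.group(1)]),
                  directive)

if __name__ == "__main__":
    for _ in range(3):
        print(generate_sentence())
```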

Apart from Racter, there are other AI systems that can generate literary works. More recently, in September 2020, The Guardian published an article written by an AI system, Generative Pre-trained Transformer-3 (GPT-3), under the headline ‘A Robot Wrote This Entire Article. Are You Scared Yet, Human?’. GPT-3 is a sophisticated language generator by OpenAI that can quickly create human-like text with minimal human input by using deep learning. GPT-3 has been trained on an extremely large amount of text data, virtually all available data on the internet. Being the largest neural network to date, with 175 billion parameters, GPT-3 is more than a hundred times bigger than its predecessor GPT-2, and it processes approximately 45 billion times the quantity of words that an average human perceives in a lifetime. GPT-3 requires a prompt to start writing; it then recognizes and repeats word patterns to anticipate the words that follow the initial prompt. To generate the aforementioned article, The Guardian’s staff provided GPT-3 with instructions and some introductory lines, which enabled it to produce eight different articles. The Guardian’s editor then selected parts of the eight articles and merged them into a single piece by deleting lines and paragraphs and rearranging their order. The editor even went on to say that “editing GPT-3’s op-ed was no different to editing a human op-ed… it took less time to edit than many human op-eds”.
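
The prompt-then-continue workflow described above can be sketched with an openly available model such as GPT-2 through the Hugging Face transformers library; this is a minimal illustration under that assumption, not The Guardian’s or OpenAI’s actual pipeline, and the prompt text is invented.

```python
# Sketch of prompt-based text generation with an open model (GPT-2), assuming the
# Hugging Face `transformers` package is installed. GPT-3 works on the same
# principle -- predict the next token given the prompt -- but at far larger scale
# and behind OpenAI's commercial API.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Hypothetical prompt, standing in for the editor-supplied instructions.
prompt = "I am not a human. I am a robot. I am here to convince you that you have nothing to fear."

# The model repeatedly predicts plausible next words after the prompt.
outputs = generator(prompt, max_length=120, num_return_sequences=2, do_sample=True)

for i, out in enumerate(outputs, 1):
    print(f"--- draft {i} ---")
    print(out["generated_text"])
```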

Apart from literary works, AI systems have also been involved in the creation of artistic works. In the 1970s, Harold Cohen, an artist and art professor, created an AI program called ‘AARON’. AARON has the ability to create artworks autonomously, based on Cohen’s ‘teachings’. Over the years, Cohen worked to improve AARON’s coding and its artistic knowledge of forms and colors. In 1995, AARON successfully created its first color image by drawing a form and coloring it. AARON’s works have been exhibited in leading art galleries and museums across the world. Furthermore, in 2018, an AI-generated art piece named ‘Edmond de Belamy’ was sold for an incredible USD 432,500 at auction. The artwork was generated using a GAN (Generative Adversarial Network) trained by Obvious, an art collective based in Paris. Another prominent instance is ‘The Next Rembrandt’, which is neither a newly discovered work of the famous Dutch artist Rembrandt Harmenszoon van Rijn nor a knockoff: it is an AI-generated piece based on the AI’s analysis of the aesthetic components and style of Rembrandt’s works.

As discussed above, we already have machines that can generate creative works that include literary, musical and artistic works. This demonstrates that the production of automated content is becoming a commercial reality. Historically, the act of creating is equated with a human being. The advancements in AI have, however, called this notion into question. AI has become ubiquitous in today’s world and is actively involved in various sectors, but for the purpose of this essay, we will only focus on creative works generated by AI systems. The rise of this new breed of creators has raised issues in copyright law. The development of AI in the realm of creative works not only changes the way in which creative works are generated but also poses new challenges for the copyright regime. This has prompted legislators, scholars and legal practitioners to reconsider the fundamental concepts in copyright such as authorship, creativity and originality as well as the rationales that underpin copyright protection in the first place.

The central question of this paper is therefore: are artificial intelligence systems really creative in the context of copyright?

There is no agreed-upon definition of what constitutes AI, but the term generally refers to machines with human-like behavior, and to the ability of a digital computer to perform tasks that would require intelligence if completed by humans. Nonetheless, intelligence is far from a monolithic concept, because there is no definitive standard for measuring human intelligence. Human intelligence, according to mainstream psychology, is a collection of distinct components rather than a single trait, and the majority of psychologists agree that creativity is one of those components. Racter, as well as the other AI writers and artists described in the introduction, have demonstrated that AI systems are capable of generating works that are ostensibly creative. This assertion is, however, predicated on the premise that the aforesaid AIs possess one of the aspects of human intelligence, namely creativity.

The next part of the essay will touch upon creativity within the context of copyright law. It will then discuss how humans and AI systems each generate creative works, and will reach the conclusion that the functioning of AI is not the same as human creativity because of its mechanical nature. As a result, AI-generated works that involve no direct human participation cannot be protected by copyright. It should be noted that although the author believes AI-generated works are different from those created by humans, the author does not deny the importance of AI-generated work and its societal, cultural and scientific value.

Copyright protection focuses on creative works, as it was primarily designed to safeguard the fruits of human creative endeavor. It is therefore vital to understand creativity within the context of copyright. In the author’s opinion, creativity is a fundamental issue that determines whether an AI-generated work can enjoy copyright protection alongside creative works produced by humans. There is, however, a scarcity of literature addressing creativity in relation to copyright law. Creativity is a general concept rather than a legal one; it can be defined as the ability to produce new, valuable and useful ideas or artefacts. From the perspective of computational systems, for an AI to be deemed creative it must strive to generate new solutions that are not carbon copies of its previously known solutions, and those solutions must be appropriate for the task at hand. Furthermore, within the context of AI, creativity is a question of degree. For instance, Pereira claims that an AI is highly creative if it has the capacity to deal with various levels of abstraction for the same problem, the ability to assess and criticize its own productions, and the ability to function in more than one domain without having to be reprogrammed.

Although AI systems can carry out various functions, it should be remembered that each system is limited by the functions its human creators programmed into it. It is doubtful that the remarkable AI systems that produced the aforesaid ‘Edmond de Belamy’ and ‘The Next Rembrandt’ could ever learn to create poetry or music on their own, as their program code and data do not encompass such functions. Similarly, Racter, which wrote the excerpt in the introduction, can never start painting without being reprogrammed to do so. An AI system therefore cannot go beyond its programmed functions to perform tasks that are not covered by its code.

Furthermore, AI systems can only produce outputs based on the data loaded into them by humans, which means that an AI’s creation of works depends on existing works or data. The aforesaid portrait ‘Edmond de Belamy’, for example, was produced by an AI system based on 15,000 portraits from WikiArt, an online art encyclopedia, that were fed to it by its programmers. In the case of ‘The Next Rembrandt’, the AI program was able to create an artwork in the style of Rembrandt, over three centuries after his death, only because it analyzed hundreds of Rembrandt’s works, focusing on their general characteristics and aesthetic elements. Following a thorough analysis, the AI established the following requirements for generating an artwork in the style of Rembrandt: a portrait of a male Caucasian in his 30s, with facial hair, wearing dark clothing, a collar and a hat, with his head facing to the right. The AI and its developers also paid attention to the facial features of the portraits: the AI analyzed the eyes, nose, mouth and ears, as well as facial proportions such as the distances between them. Hence, the output of an AI depends on the information and data it was exposed to. This indicates that the AI system itself is not capable of thinking and inventing: its ‘creativity’ is the result of its analysis of the vast amount of data (human-created works) it was given, and it is incapable of generating output about anything for which it has no relevant data. Theoretically, then, AI is only able to produce output akin to works created by humans.
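
A minimal sketch of the adversarial training loop behind a GAN of this kind is given below; the architecture, sizes and placeholder data are illustrative assumptions and do not reproduce Obvious’s actual convolutional model or its training set of roughly 15,000 portraits.

```python
# Toy GAN training step in PyTorch: a generator G maps random noise to images,
# a discriminator D tries to tell real training images from G's fakes, and each
# network is updated against the other. Flattened 28x28 "images" stand in for
# real portrait data.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_images):
    """One adversarial round: D learns to separate real from fake, then G learns to fool D."""
    b = real_images.size(0)
    # Discriminator update
    fake = G(torch.randn(b, latent_dim)).detach()
    d_loss = bce(D(real_images), torch.ones(b, 1)) + bce(D(fake), torch.zeros(b, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator update: G wants D to label its output as "real"
    g_loss = bce(D(G(torch.randn(b, latent_dim))), torch.ones(b, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Placeholder batch standing in for real training portraits (values in [-1, 1]).
print(train_step(torch.rand(16, img_dim) * 2 - 1))
```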

As discussed above, works generated by AI are determined by the AI’s programmed functions and the data fed to it by its human creators. Its functions cannot deviate from its program code, and it can never generate works without access to data related to its output goals. For instance, an AI designed to produce images cannot begin to compose music without considerable human intervention: human programmers would need to alter or construct new program code for it and load it with music-related data, all of which would require significant human effort. Humans, by contrast, can create and invent independently, without the need for any sample, only inspiration. Internal stimulation can prompt a person to move impulsively into another area of creativity: a writer can start creating paintings, and a songwriter can write autobiographical works, without any special training. Therefore, even though AI has developed to the point where it can produce works that are difficult to tell apart from those created by humans, and continues to evolve and become more complex in its functions and in its mimicry of the human brain, its process of creating works is entirely mechanical. AI creativity is not the same as human creativity.

AI systems lack consciousness, subconsciousness and the usual mental states such as emotions, beliefs and desires that are innate in us humans. Although artificial neural networks imitate the operation of human brains, it is doubtful that they will ever learn to have feelings, sensations, emotions, sentiments and consciousness. Computers, unlike humans, have no need to express themselves; they merely execute commands. By contrast, each creative work generated by a human comprises the unique mental, spiritual and emotional contribution of its author, so that the work mirrors the author’s personality. Feelings, aspirations, inspirations, experiences and emotions are thus incorporated within human creativity.

Moreover, AI lacks one of the key components of creativity, namely imagination. Taking the aforementioned AARON as an illustration, it needs to be fed the knowledge it depicts in the works it produces. This is done via a generative system in which AARON was given a set of abstract rules about the human body, including the fact that humans have a pair of legs and arms, as well as how body parts appear from different angles, such as only one arm being visible from certain viewpoints. AARON is therefore able to draw a man of whom only one arm can be seen (the other arm being obscured by someone or something else), but it is not capable of drawing a one-armed man. This shows AARON’s inability to imagine things it has never seen before. Similarly, AI that generates literary work, such as BRUTUS, a storytelling machine, lacks consciousness and the experience derived from senses such as touch, which would have allowed it to immerse itself more fully in the lives of the characters it creates. Thus, AI does not have the ability to imagine or fantasize as humans do, which some consider a key distinction between machine and human creativity.

Additionally, it has been argued that humans create to meet needs, either to fulfil the author’s own need for self-expression or to satisfy other people’s need for cultural, aesthetic and spiritual development. AI systems, by contrast, exist and create to satisfy human needs, because they have no consciousness, emotions or attitude towards the works they produce and merely follow human instructions. In other words, AI is just a tool in human hands. Humans control and set the goals for the type, form and style of works the AI should aim to generate; the AI then uses machine-learning algorithmic calculations to choose from a vast amount of pre-fed data and input to produce an output similar to the human-created works in its database. The activities of the AI system are completely mechanical and operate on fundamentally different principles. Thus, it is fair to suggest that the nature of the AI’s activities, which produce works akin to human-generated and copyright-protected works, is not the same as human creativity. It can even be argued that AI’s functioning does not resemble creativity at all, because such activities are not driven by what stimulates humans to create. It follows that works generated in this manner cannot be regarded as products of creativity, even though they may be of tremendous societal worth. AI-generated works cannot and should not be equated with those created by humans, as they are fundamentally different in nature.

Hence, the arguments presented above show that AI’s ability to generate output with the same expression as works created by humans is not a form of creativity. As demonstrated previously, AI’s activities resemble human creativity because the AI analyses a vast amount of data and synthesizes the findings, an imitation of the human brain’s logical reasoning. AI systems do not have consciousness, subconsciousness, emotions, beliefs, desires, imagination or the other mental states inherent in humans. Lacking these ‘prerogative’ mental states, AI systems do not seek to express or convey a message of any sort through their work. Many people have mistaken the functioning of AI systems for creativity, when what an AI actually does is strictly subordinate to the commands of its human creators: it strives to achieve the aim they set by analyzing the data made available to it and producing an output in the style of the human-created works in its database. An AI does not generate works to satisfy its own need for self-expression or to fulfil its aspirations (because it has none), but to meet the needs of humans. Hence, the functioning of AI cannot be considered creativity. Consequently, as copyright primarily protects the fruits of creative endeavor, the lack of creativity in AI activities supports the argument against copyright protection for AI-generated works in which no direct human intervention is involved. This does not, however, negate the societal worth that AI-generated works have brought us, nor their significance and greatness.

Artificial Intelligence Applied To Medicine

Introduction

According to the McKinsey Global Institute report ‘Notes from the AI frontier’ (April 2018), the impact of artificial intelligence on the value of companies by sector will be greatest in tourism (128%), followed by transport and logistics (89%); retail trade (87%); automotive and assembly, high technology, gas and oil, the chemical industry, media and entertainment, raw materials, agriculture, and banking (50%); health, public administration and telecommunications (44%); and pharmaceutical products, insurance, semiconductors, and the aerospace and defense industry (30%). These statistics give us an idea of how deeply artificial intelligence is immersed in our society. In this essay, I will refer only to the influence of AI in the medical field.

The big companies in Silicon Valley have opted to include artificial intelligence in developments related to the field of health. Advances in artificial intelligence can even translate into economic advantages: according to a report by the firm Frost & Sullivan, by helping to diagnose and detect diseases early, artificial intelligence will reduce health spending. These services are also directly available to people, since they can be accessed from a phone or any other smart device such as a smartwatch. Which leads us to ask: could tomorrow’s medicine be a computer program? People can already, in some cases, find out what disease they have without going to the doctor, since an application installed on their phone can provide the answer. In addition, they can keep track of their health status, and the application alerts them when something is abnormal.

Although artificial intelligence applied to medicine has allowed research to move more quickly, the resulting data still needs to be integrated into the health care service so that doctors know what is available and how to use the new tools.

But what is Artificial Intelligence?

Artificial intelligence is considered a branch of computing that relates a natural phenomenon to an artificial analogy through computer programs. Artificial intelligence can be taken as a science when it focuses on the elaboration of programs based on comparisons with human capabilities, contributing to a greater understanding of human knowledge. It can instead be taken as engineering when a computer program is synthesized from a desired input-output relationship; the result is a highly efficient program that works as a powerful tool for those who use it. Through artificial intelligence, expert systems have been developed that can imitate human mental capacity, relate syntax rules of spoken and written language based on experience, and then make judgments about a problem, reaching a solution with better judgments and more quickly than a human being. In medicine such systems have proven very useful, with around 85% of diagnostic cases being correct.
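
To make the idea of a rule-based expert system concrete, a toy sketch follows; the rules and findings are hypothetical examples, not validated medical knowledge or any deployed system.

```python
# Minimal rule-based "expert system": each rule maps a set of findings to a
# candidate diagnosis, mimicking how early medical expert systems encoded
# specialists' judgment. Rules here are invented for illustration only.
RULES = [
    ({"fever", "cough", "shortness of breath"}, "possible respiratory infection"),
    ({"excessive thirst", "frequent urination", "fatigue"}, "possible type 2 diabetes"),
    ({"chest pain", "pain radiating to arm", "sweating"}, "possible cardiac event"),
]

def infer(findings):
    """Return candidate diagnoses whose required findings are all present."""
    findings = set(findings)
    return [diagnosis for required, diagnosis in RULES if required <= findings]

if __name__ == "__main__":
    print(infer(["fever", "cough", "shortness of breath", "fatigue"]))
```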

Artificial intelligence and Medicine

The acceptance of health-oriented applications and devices has grown exponentially in recent years, and specialists believe this trend is due to the use of smartphones. Lumoid.com conducted a study that shows this trend: the results indicate that 72% of adolescent women buy a smartwatch or smart band because it gives them information about the quality and quantity of their sleep and tools to monitor their physical activity. The collection of information is the biggest problem in the use of machine learning: for example, to create an algorithm that predicts a disease you need many medical records, most of which you cannot access. There are hundreds of startups dedicated to health care using artificial intelligence. In the following paragraphs I will mention some medical areas in which artificial intelligence is used.
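
The kind of record-driven prediction described here can be sketched as follows, using synthetic data and hypothetical features purely for illustration; real systems depend on large, properly consented clinical datasets, which is exactly the collection problem noted above.

```python
# Illustrative sketch: train a simple classifier on tabular "medical record"
# features to predict a disease label. The features and labels are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
# Hypothetical features per record: age, BMI, fasting glucose, blood pressure
X = np.column_stack([
    rng.normal(55, 12, n),
    rng.normal(27, 5, n),
    rng.normal(100, 20, n),
    rng.normal(130, 15, n),
])
# Synthetic label loosely driven by glucose and BMI (for demonstration only)
risk = 0.04 * (X[:, 2] - 100) + 0.1 * (X[:, 1] - 27)
y = (rng.random(n) < 1 / (1 + np.exp(-risk))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC on held-out records:",
      roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```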

Patient Information and risk analysis

The patient’s medical information is handled digitally, which allows the entire clinical history to be stored and provides enough data to make predictions that can prevent future diseases. Researcher Narges Sharif-Razavian has used AI to detect diseases at an initial stage, drawing on the results of multiple laboratory tests; the results of this research are very interesting, especially for the early detection of type 2 diabetes. Another example of how this technology is being applied is the ‘Cardiogram’ project, an application that uses the Apple Watch sensor to measure heart rate in real time and, by means of algorithms, detects when the heart rhythm is abnormal and warns the person to be alert. According to the team, the results were better than the average diagnostic rate achieved by doctors. Other, more complex diseases can also be detected by means of artificial intelligence: a team from New York University designed algorithms that detect different conditions, such as cardiac insufficiency, accurately and in time. With artificial intelligence, documents such as medical guides are also improved. A medical guide is a document in which the possible diagnoses are recorded for a given list of symptoms; when fed by learning algorithms, these guides can help doctors give a successful diagnosis of a certain disease.
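
As a rough illustration of heart-rhythm alerting (not Cardiogram’s actual, far more sophisticated learned model), a wearable’s readings can be flagged when they deviate strongly from the recent baseline:

```python
# Simple anomaly flagging on a heart-rate stream: alert when a reading is more
# than z_threshold standard deviations from the rolling mean of the previous
# `window` samples. Thresholds and data are hypothetical.
import numpy as np

def flag_anomalies(heart_rate, window=30, z_threshold=3.0):
    """Return indices of readings that deviate strongly from the recent baseline."""
    hr = np.asarray(heart_rate, dtype=float)
    flagged = []
    for i in range(window, len(hr)):
        baseline = hr[i - window:i]
        mu, sigma = baseline.mean(), baseline.std()
        if sigma > 0 and abs(hr[i] - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    series = rng.normal(72, 3, 300)   # resting heart rate around 72 bpm
    series[150] = 140                 # inject an abnormal spike
    print("anomalous samples at:", flag_anomalies(series))
```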

Diagnosis through image analysis

Normally, medical diagnoses are performed through visual review of a patient or of imaging results such as radiographs or MRI scans. With AI, researchers using many images of a given disease can detect its presence with great precision. MoleScope, an Android and iOS application, helps detect benign moles and supports early detection of skin cancer. Researchers at the University of Bari (Italy), using a magnetic resonance imaging system on 38 Alzheimer’s patients and 29 healthy subjects, managed to detect with 84% probability the process of cognitive deterioration that the disease causes. The tool was able to predict, with 82-90% reliability, that patients would suffer from Alzheimer’s 10 years before the first symptoms.
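
The image-based diagnoses described here typically rely on convolutional neural networks; the following is a minimal, illustrative PyTorch sketch of such a network, with toy sizes and placeholder data rather than any product’s real model.

```python
# Tiny convolutional classifier for 64x64 grayscale "scans", labelled
# 0 = healthy, 1 = diseased. Architecture and data are illustrative only.
import torch
import torch.nn as nn

class TinyDiagnosisCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TinyDiagnosisCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a placeholder batch
images = torch.randn(8, 1, 64, 64)
labels = torch.randint(0, 2, (8,))
loss = loss_fn(model(images), labels)
optimizer.zero_grad(); loss.backward(); optimizer.step()
print("training loss:", loss.item())
```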

Monitoring and management of lifestyle

Many of our personal devices have a myriad of tools to manage our lifestyle. Among them are tools for analyzing sleep quality in devices such as smart bands. Sleep quality is usually estimated by recording the time spent in the deep sleep phase, that is, by detecting how much you move while you sleep; these devices also offer alarms designed to wake you comfortably without pulling you abruptly out of deep sleep. There are also tools that help us control diet or exercise and predict the loss of fat tissue.
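
A toy sketch of movement-based sleep scoring, with hypothetical thresholds rather than any vendor’s algorithm, might look like this:

```python
# Minutes with little recorded movement are treated as deep sleep, and "sleep
# quality" is reported as the deep-sleep fraction of the night. Threshold and
# data are invented for illustration.
import numpy as np

def sleep_quality(movement_per_minute, stillness_threshold=2.0):
    """Fraction of the night classified as deep sleep (movement below threshold)."""
    movement = np.asarray(movement_per_minute, dtype=float)
    return (movement < stillness_threshold).mean()

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    night = rng.exponential(scale=2.0, size=8 * 60)  # 8 hours of per-minute movement counts
    print(f"estimated deep-sleep fraction: {sleep_quality(night):.0%}")
```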

Nutrition

In the case of nutrition there are several tools. VITL (https://vitl.com/) is a text assistant (chatbot) which, based on a series of questions, gives recommendations for dietary supplements or diets. Other tools, such as nuritas.com, have systems that assess your health and then, using their learning algorithms, recommend the best way to improve your condition.

Surgery and emergency rooms

Devices for monitoring surgeries and emergencies now include learning software. This is the case for companies such as Gauss Surgical, which develops solutions that estimate blood loss, helping to optimize transfusion decisions, recognize the state of hemorrhage and improve patient outcomes. Other devices are pain meters used during surgery: the company MEDASENSE built a device that, with the help of artificial intelligence, can measure the amount of pain a patient is suffering.

Hospital administration and patient safety

The administration of a hospital is crucial: poor administration can lead to a patient being neglected or receiving the wrong drugs, and this can result in the loss of a life. Companies like Mdanalytics carry out market analyses of new health care techniques, including the latest research on drugs, medicines, government health agencies and medical associations. Other companies have developed management applications that incorporate best practices to improve health care; these applications handle the capture of patient information, the visualization of reports and the scheduling of actions taken. Systems of this type mostly run in the cloud, on services such as Amazon Web Services, because of the amount of information to store and the need to process it rapidly.

Virtual assistants

Imagine that you have a rare disease: arranging a consultation with a specialist is extremely difficult in some cases and obviously expensive at the same time. Many companies have taken on the task of solving this problem, and there are now groups of specialists who can be reached through specialized online video chats that give the doctor access to all the patient’s information. The use of expert systems in chatbots has also been adopted, that is, a chat that can answer technical questions and, in this case, give diagnoses.

Mental health

American scientists have developed an early warning system to detect depression from the photos a patient posts on social networks. The research, published in the journal ‘EPJ Data Science’, shows that computers applying machine learning can successfully detect depressed people from clues in their Instagram photos. The computers’ detection rate of 70% is more reliable than the 42% success rate of general practitioners who diagnose depression in person. In addition to these studies, machine learning systems have also been developed for the detection of stress and anxiety. Within the area of mental health, the idea of using virtual assistants has also been implemented; in this case the knowledge being simulated is that of a psychologist giving online therapy.
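
The general shape of such a pipeline, summarizing each photo by simple color statistics and training a classifier on them, can be sketched as follows; the features, values and data are hypothetical stand-ins, not the published study’s actual method.

```python
# Summarize each image by mean hue, saturation and brightness, then train a
# classifier on those features. Synthetic feature vectors are used so the
# sketch runs end to end without real photos.
import numpy as np
from PIL import Image
from sklearn.ensemble import RandomForestClassifier

def photo_features(path):
    """Mean hue, saturation and brightness of an image file."""
    hsv = np.asarray(Image.open(path).convert("HSV"), dtype=float)
    return hsv.reshape(-1, 3).mean(axis=0)

rng = np.random.default_rng(2)
X_control = rng.normal([128, 120, 150], 20, size=(200, 3))    # brighter, more saturated (invented)
X_depressed = rng.normal([140, 90, 110], 20, size=(200, 3))   # darker, greyer (invented)
X = np.vstack([X_control, X_depressed])
y = np.array([0] * 200 + [1] * 200)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```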

Security Challenge

One of the largest health-focused markets is that of wearables. In the first 5 months of 2017, Kaspersky Lab researchers detected 7,242 malware samples targeting IoT devices, 74% more than the total number of samples in the period from 2013 to 2016.

The rise of the Internet of Things (IoT) has led hackers to attack the growing variety of devices connected to the Internet. The global health industry plans to invest around $410 million in IoT devices in 2022. Along with this trend has come the emergence of procedures such as MEDJACK, through which attackers seek to compromise the equipment that connects to medical devices. In August of this year, one of the leading pacemaker manufacturers issued an alert calling for a firmware update for about 465,000 patients, after discovering a vulnerability that would give an attacker the ability to carry out attacks with a direct impact on patients’ health. Work is currently being done on security solutions, such as the use of blockchain and communication through more secure protocols, to increase confidence in the use of these medical devices.

Summary

There is no doubt that artificial intelligence is transforming health care. It has already significantly improved health care, from everyday applications that help with nutritional control to specialized devices for safer surgery and applications that provide expert diagnosis. The use of these tools will completely change the way patients are treated, directly resulting in better care, better follow-up and more precise diagnoses.

AI And Diagnosis Of Lung Cancer

In recent years, AI has gradually entered the medical field. At the ‘digital’ level, artificial intelligence continues to push the sensitivity and specificity of machine-assisted diagnosis and demonstrates its value in multiple scenarios. However, many products are still far from large-scale clinical application, and it is not easy to obtain the approval of clinicians. For the leading figures of the medical profession, whether the capability of AI really translates into clinical value is the key point.

From February 21 to 24, 2019, the 27th Asian Thoracic and Cardiovascular Surgery Annual Conference (ASCVTS) was held in Chennai, India. Thoracic and cardiovascular surgery scholars from all over the world participated in the event, with in-depth exchanges and discussions on the latest progress, clinical experience and basic research in the field. Among the topics, the value of artificial intelligence in clinical diagnosis became one of the important issues, and the discussion affirmed the value of artificial intelligence diagnostic systems in the early diagnosis of pulmonary nodules.

Lung cancer is the most common malignant tumor in the world, with morbidity and mortality ranking first among malignant tumors, and it has become a recognized killer of human health. The prognosis of lung cancer is closely related to the clinical stage. Because symptoms and signs appear late, most patients already have metastases at the first visit, and the 5-year survival rate is only 16% once the optimal window for surgery has been missed, whereas the 5-year survival rate of early-stage patients can reach 70-90% or more. If lung cancer can be found early in its onset, the prognosis of patients can be effectively improved.

Therefore, the establishment of a reasonable and effective screening program and simple and effective screening of high-risk groups are the focus of clinical work. Clinical staff are constantly looking for newer and more sensitive imaging techniques suitable for lung cancer screening.

In August 2002, the United States National Lung Screening Trial (NLST) group launched a randomized controlled clinical trial comparing lung cancer screening with low-dose spiral CT (LDCT) against ordinary chest X-rays, which is by far the most authoritative lung cancer screening study in the world, with the highest level of evidence.

However, the NLST study also found that only 0.6-2.7% of patients with lung nodules found through clinical screening with low-dose spiral CT were eventually diagnosed with lung cancer. This means that improving the rate of early lung cancer diagnosis among the many nodules detected by early CT screening is the primary challenge facing clinicians today.

In the traditional approach to early diagnosis of pulmonary nodules, imaging data alone requires long-term radiological follow-up of the patient to observe morphological changes, resulting in potential radiation damage, while invasive diagnostic procedures, or even direct surgical treatment, cause patients physical and psychological harm. However, the rapid development of novel liquid biopsy and artificial intelligence diagnosis has brought a revolutionary dawn to the early diagnosis of pulmonary nodules.

‘Biomarker + AI’ diagnostic mode is the key

AI medical image analysis is already being used to assist doctors in screening for esophageal cancer, lung nodules, diabetic retinopathy, colorectal tumors, breast cancer and other diseases, and AI-assisted diagnosis engines help doctors identify and predict the risk of more than 700 diseases. In the identification of lung nodules, computer vision and deep learning technology can assist doctors in reading images: AI medical image analysis can accurately locate tiny lung nodules larger than 3 mm and classify them as benign or malignant with a sensitivity of 85% and a specificity as high as 90%.
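
The sensitivity and specificity figures quoted above come directly from a confusion matrix; the arithmetic can be shown with made-up counts:

```python
# Sensitivity = TP / (TP + FN); Specificity = TN / (TN + FP).
# The counts below are invented purely to reproduce the quoted percentages.
def sensitivity_specificity(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical evaluation: 100 malignant and 100 benign nodules
sens, spec = sensitivity_specificity(tp=85, fn=15, tn=90, fp=10)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")   # 85%, 90%
```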

How can artificial intelligence further improve diagnostic efficiency? Today’s liquid biopsy technology can detect trace biological markers released into the blood by early tumors, such as microRNA, circulating tumor DNA and circulating tumor cells. Professor Zhang Lanjun believes that combining liquid biomarker biopsy with artificial intelligence technology will inevitably improve the accuracy of early lung nodule diagnosis. Indeed, in the ABC model combining clinical features (Clinic), biomarkers (Biomarkers) and artificial intelligence results (AI), the area under the curve (a statistical measure for evaluating the effectiveness of diagnostic tools: the closer the value is to 1, the higher the diagnostic efficiency) reached 0.955. In the subsequent validation group, the ABC model also showed a higher area under the curve and higher sensitivity than other models, which means that the ‘biomarker + AI’ diagnostic model can be more accurate.
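
Why a combined ‘ABC’ model can reach a higher area under the curve than any single input can be illustrated with synthetic data; the sketch below does not reproduce the study’s 0.955 figure, and its feature construction is entirely invented.

```python
# Compare ROC AUC of single-source models against a combined "ABC" model on
# synthetic data where each source is a noisy view of the true label.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000
y = rng.integers(0, 2, n)                  # 1 = malignant nodule (synthetic)
clinic = y + rng.normal(0, 1.5, n)         # weakly informative clinical score
biomarker = y + rng.normal(0, 1.2, n)      # biomarker panel score
ai_score = y + rng.normal(0, 1.0, n)       # AI image-analysis score
X = np.column_stack([clinic, biomarker, ai_score])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for name, cols in [("clinic only", [0]), ("biomarker only", [1]),
                   ("AI only", [2]), ("ABC combined", [0, 1, 2])]:
    model = LogisticRegression().fit(X_tr[:, cols], y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te[:, cols])[:, 1])
    print(f"{name:15s} AUC = {auc:.3f}")
```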

Today, while biomedical imaging technology is not yet mature, the construction of a multi-modal ‘biomarker + AI’ mathematical model is the ideal mode for applying artificial intelligence to clinical pulmonary nodule diagnosis at this stage.

Artificial intelligence diagnostic technology must have a qualitative leap

Artificial intelligence represents a huge change in the classification and management of traditional imaging data. It can process tens of thousands of images quickly and simultaneously, which greatly saves the physical and mental effort of highly qualified imaging doctors. Based on the latest deep convolutional neural network algorithms, systems such as Tencent’s can transfer their machine deep learning directly to hospitals of different levels, reducing the sample size each hospital needs to adapt deep learning for its own use. With the development of biological imaging technology, artificial intelligence diagnostic technology will certainly make great progress.
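
Transfer learning is the usual technique for reducing the data each hospital needs: start from a network pretrained on a large generic image set, freeze the feature extractor and retrain only a small head on local data. A hedged sketch follows, assuming torchvision 0.13 or later; it stands in for no vendor’s actual deployment.

```python
# Fine-tune only the final layer of a pretrained ResNet-18 on a small local
# dataset (placeholder tensors stand in for CT patches).
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained backbone so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a 2-class head (e.g. benign vs. malignant nodule).
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(4, 3, 224, 224)   # placeholder local scans
labels = torch.randint(0, 2, (4,))
loss = loss_fn(model(images), labels)
optimizer.zero_grad(); loss.backward(); optimizer.step()
print("fine-tuning loss:", loss.item())
```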

The Asian Society of Thoracic and Cardiovascular Surgery was established in 1993 and is the largest academic body for cardiovascular and thoracic surgery in Asia; together with the American and European thoracic surgery societies, its meeting forms one of the world’s three major thoracic surgery academic events. The invitation of Prof. Zhang Lanjun’s team to attend and speak at the conference means that this forward-looking research on artificial intelligence diagnostic systems has been recognized by international peers, which not only helps promote the development of the discipline and international exchange, but also promotes the application of new artificial intelligence technologies in clinical practice.

AI medicine is a brand-new field of ‘medical-industrial integration’. Driven by big data, artificial intelligence, cloud computing and other technologies, the growth of the ‘AI medical assistant’ has filled the medical community with expectation. The cross-border integration of medicine and technology continues to promote the application of artificial intelligence in medical imaging, effectively connecting AI, application scenarios and value, ultimately serving clinical practice and benefiting the public.

Artificial Intelligence in Assisted Reproductive Technology: Boon or Bane

Motherhood is a joy cherished through natural conception and, in the case of infertility, accomplished through substituted reproduction known as Assisted Reproductive Technology (ART). One such ART technique for begetting a genetic child is Surrogacy[footnoteRef:2]. Although the infertility rate in India is increasing at an alarming speed, the country, as a matter of public policy, has taken a rigid approach in banning commercial surrogacy and allowing only the altruistic form of surrogacy.

Global scientific advancement in the reproductive sector, coupled with machine learning, has paved the way for a future form of surrogacy aided by Artificial Intelligence (AI), which aims at giving 100% accurate results and saving surrogates from failed, repetitive and painful IVF procedures. The most complicated job in the IVF (In Vitro Fertilization) process is for human agencies to successfully characterize and identify the most viable oocytes or embryos. AI is being used for embryo or oocyte scoring and selection through Pre-implantation Genetic Diagnosis (PGD). [3: . Virtus Health Group, based in Australia, uses the AI-driven tool ‘IVY’, which predicts the likelihood of a viable pregnancy from the transfer of an individual embryo in a woman undergoing IVF. https://www.ivf.com.au/fertility-treatment/ai-in-ivf ]

In the reproductive field, the role of AI is to assist health service providers by assembling inputs for creating a healthy baby using the optimal surrogate[footnoteRef:4]. AI plays a pertinent role in IVF surrogacy (through a gestational surrogate/carrier), wherein machines equipped with AI screen the human microbiome[footnoteRef:5] to design healthy babies free from gene defects. AI is being actively used by IVF and fertility treatment centres in Western countries to carry out successful implantation of the embryo into the uterus of a surrogate, which sets the ground for the critical first stage of a successful pregnancy. Through IVF surrogacy aided by AI, many childless couples have the joy of having their own genetic child when all other ART treatments and methods fail to yield results. [4: . Flourish, an innovative tech wellness company, uses the AI-driven personal wellness tool ‘FLORA’, which provides complete analysis of customized food selections specific to the surrogate. https://medium.com/uxatcomdes/a-parents-guide-to-ai-driven-gestational-surrogacy-80316867a8fb (last visited on 4th Feb 2019)] [5: . Human microbiome means the full array of microorganisms (the microbiota) that live on and in humans and, more specifically, the collection of microbial genomes that contribute to the broader genetic portrait, or metagenome, of a human. https://www.britannica.com/science/human-microbiome ]

At this juncture where Surrogacy is surrounded by unresolved ethical and legal issues, IVF Surrogacy coupled with AI assistance through gestational carriers can be perceived as a boon for intending couples[footnoteRef:6]. [6: . Section 2(r) Ibid.1 – intending couple means a couple who have been medically certified to be an infertile couple and who intend to become parents through Surrogacy.]

Legality of AI in Gestational Surrogacy through IVF

India

The Indian Parliament (Lok Sabha) passed The Surrogacy (Regulation) Bill, 2018, prohibiting commercial surrogacy as unethical and creating barriers that prevent couples still within the waiting period[footnoteRef:7], LGBT persons, single parents, couples in live-in relationships, couples with undefined infertility issues, and couples already blessed with a single child who cannot afford a second child due to various medical issues, from opting for altruistic surrogacy. However, the Bill permits gestational surrogacy if it is carried out by a close relative without any monetary benefit apart from the necessary medical expenses incurred before, during and after delivery[footnoteRef:8]. [7: . As per the Bill 2018, only couples unable to conceive within 5 years of unprotected coitus are eligible to opt for altruistic surrogacy as a means of begetting their genetic child. ] [8: . Section 2(b) Ibid. 1]

In India, AI in the reproductive technology area has not been legally vetted so as to bind the key players in the market: intending couples, IVF centres and gestational surrogates. As per the Bill 2018, sex selection in any form for surrogacy is punishable under the Pre-Natal Diagnostic Techniques (Regulation & Prevention of Misuse) Amendment Act 2002[footnoteRef:9]. The Act 2002 restricts the use of AI mechanisms as a sex-selection tool but permits AI use for embryo screening[footnoteRef:10] if the conditions under Section 4(3)[footnoteRef:11] arise. [9: . http://www.ncpcr.gov.in/view_file.php?fid=434 ] [10: . Section 4(2) of the Pre-Natal Diagnostic Techniques (Regulation & Prevention of Misuse) Amendment Act 2002 ] [11: . Section 4(3) Ibid. 2]

As for bringing AI into IVF, no specific legislation governs IVF procedures; they instead follow the guidelines laid down by the Indian Council of Medical Research (ICMR)[footnoteRef:12]. The nation is silent on the ‘liability and accountability issues’ that would arise in case of medical negligence, wrong diagnosis or the data-entry errors through which AI mechanisms function. The lack of adequate and specific data privacy laws to protect and secure data in the digital space is leading to its exploitation for commercial purposes. [12: . Chapter 3 of the ICMR guidelines deals with Code of Practice, Ethical Considerations & Legal Issues, https://www.icmr.nic.in/sites/default/files/guidelines/Guidline_content.pdf ]

Western Countries

AI has paved the way for a revolution in the healthcare sector, primarily in the reproductive field, in countries like Australia and the United States, raising questions of patient privacy and accountability in case of breach. In European countries, the General Data Protection Regulation (GDPR)[footnoteRef:13] permits a patient to delete his personal data under special circumstances and entitles him to substantial compensation in case of breach. Developed countries like the UK[footnoteRef:14] and Australia[footnoteRef:15] have laws in place to deal with human data in the digital space, while the USA[footnoteRef:16] relies on a combination of legislation, regulation and self-regulation rather than governmental intervention alone. [13: . Data Subject Rights – Right to be Forgotten under the GDPR, https://eugdpr.org/ ] [14: . Based on the model of the EU GDPR, the UK has enacted the Data Protection Act 2018, focusing more on data subject rights. ] [15: . Privacy Act 1988; Federal Privacy Act 1988; Health Privacy Principles; Information Privacy Act 2009 (Queensland) ] [16: . Privacy Act 1974; Privacy Protection Act 1980; The Gramm-Leach-Bliley Act 1999; The Health Insurance Portability and Accountability Act 1996; The Fair Credit Reporting Act 2018.]

AI & Infertility issues

AI, through machine learning, data mining and advanced analytics, acts as a guide for medical practitioners in the health care sector. It assists embryologists in analysing embryo quality and can be a useful pre-screening tool to identify viable embryos before implantation, saving costs for patients undergoing IVF. It also protects surrogates from undergoing repeated IVF procedures and reduces the risk of intending couples abandoning a surrogate child with genetic disorders or abnormalities. AI-enabled Pre-implantation Genetic Diagnosis (PGD)[footnoteRef:17] techniques can be used to identify over 1,500 inherited single-gene disorders, and since the predictions are reported to be 99% accurate, they can be taken into serious consideration by medical experts. [17: . The use of genetic analysis in the course of in vitro fertilization to ensure that a baby does not possess a known genetic defect of either parent. https://medical-dictionary.thefreedictionary.com/preimplantation+genetic+diagnosis ]
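
A hedged sketch of AI-assisted embryo pre-screening, with entirely hypothetical morphology features and synthetic outcome labels rather than any clinic’s validated scoring system, might rank candidate embryos as follows:

```python
# Train a scoring model on labelled historical outcomes, then rank new embryos
# by predicted viability so the embryologist reviews the top candidates first.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(4)
# Hypothetical morphology features: cell count, fragmentation %, symmetry score
X_hist = rng.normal([8, 10, 0.8], [2, 5, 0.1], size=(500, 3))
y_hist = ((X_hist[:, 0] > 7) & (X_hist[:, 1] < 12)).astype(int)   # synthetic "viable" label

model = GradientBoostingClassifier().fit(X_hist, y_hist)

new_embryos = rng.normal([8, 10, 0.8], [2, 5, 0.1], size=(5, 3))
scores = model.predict_proba(new_embryos)[:, 1]
for rank, idx in enumerate(np.argsort(scores)[::-1], 1):
    print(f"rank {rank}: embryo {idx} (predicted viability {scores[idx]:.2f})")
```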

Through IVF gestational surrogacy aided by AI, many couples facing implantation failure, advanced maternal age or a history of recurrent miscarriages can have the joy of a genetic child of their own without physical or mental trauma. AI mechanisms can also be used at a basic level to determine infertility ratios in a couple, which can then be treated through medication, solving infertility issues to some extent.

Suggestions/ recommendations

  1. Re-frame laws relating to ART to meet emerging challenges in the reproduction sector,
  2. Promote medical artificial intelligence technology to reach resource-poor settings in remote areas,
  3. Encourage R & D in AI related public healthcare sector at the National level on the model of IBM – Watson and Manipal Hospitals[footnoteRef:18]. [18: . Manipal Hospitals has partnered with IBM for utilizing ‘Watson’ software at its Oncology department. The Machine assists doctors in diagnosing and providing treatment to cancer patients. https://watsononcology.manipalhospitals.com/ ]
  4. Frame Intellectual Property/civil/ criminal liability issues with respect to privacy, data access, data security, confidentiality, ownership and informed consent in case of breach by machine technology.
  5. Set up Special Courts to address technical issues related to AI and ART.
  6. Create awareness on PGD to intending couples suffering from chromosomal/ genetic defects which could pass on to their offspring.
  7. Lay down regulatory mechanisms to promote end-to-end human monitoring on machine learning to avoid biased data input resulting in biased decision by the system.

Conclusion

“We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten” [19: . Bill Gates (1996) https://www.internationalsos.com/client-magazines/in-this-issue-3/how-ai-is-transforming-the-future-of-healthcare ]

AI in reproductive healthcare is truly a boon rather than a bane. Keeping in mind the alarming infertility ratio and the legal prohibition on certain forms of ART practice, it is advisable to rely on machine and deep learning to solve the problems of childless couples. Intending couples who are trying to conceive can be spared unsuccessful, recurrent IVF cycles and the financial, physical and even emotional pressure that accompanies them. As society marches towards technology, the laws need to be updated to stand as a strong fort safeguarding people from its misuse and paving the way for benefits to the aggrieved class.