The Impact of AI on the Accounting Field

1. Introduction to the problem

Artificial intelligence now plays an important role in many areas, including the accounting field. What effect does AI have on accounting? AI can help accountants in many ways, but it also poses risks to them at the same time. This essay covers the introduction, the research questions and hypotheses, the background, the significance and implications of the study, two case study analyses, a comparative case study analysis, and the conclusions and references. To weigh the advantages and disadvantages of AI for the employment of accountants, the essay works through three questions:

  1. What are the advantages of using AI?
  2. How do different accounting firms use AI according to their own strengths?
  3. What are the negative effects of AI on accountants?

2. Hypotheses of the problem

The first hypothesis of this essay is that if artificial intelligence creates benefits for the work of the Big Four (Deloitte, Ernst & Young (EY), KPMG and PricewaterhouseCoopers (PwC)), they will pull further ahead of other accounting firms. The second hypothesis is that if AI improves the efficiency of accounting work, accountants' data-analysis skills will improve substantially.

3. Background of the problem

Tax preparation, auditing and strategy consulting are services that rely heavily on human capital, and the use of artificial intelligence disrupts their business models. Technologies such as natural language processing (NLP) and robotic process automation (RPA) can complete in a few hours work that takes human beings weeks.

AI is transforming tax and auditing processes, but why and how is it transforming the profession? The driving issue is the combination of rising human labor costs and falling AI costs: innovations in AI technology have reduced the cost of using it dramatically.

If a company innovates, it gains an advantage over its competitors. Notably, each firm, such as EY and Deloitte, takes a slightly different approach to developing AI technologies.

After many failed efforts, AI has improved greatly in accuracy and speed (Aman Mann, 2019).

AI's surge in popularity and its presence in marketing for tech and electronics does not mean it is a brand-new idea. It has been a goal of computer science pioneers since the mid-20th century and has grown remarkably in the 21st century. In recent years, AI has woven itself into other aspects of our lives as the force behind the robot revolution and assistants like Siri.

Artificial intelligence has become something of a catchphrase in recent years, and while it has become hard to separate the hype from its practical potential, AI is rooted in a very realistic idea.

If an accounting firm innovates, it gains an advantage over its competitors. When the IRS issues a new lease regulation, large companies must manually re-examine thousands of older leases to comply with it. EY and Deloitte both use AI to review lease accounting standards: using NLP to extract information and a human-in-the-loop to confirm the results, the AI system is three times more consistent and twice as efficient as the previous humans-only teams. AI thus opens up all sorts of opportunities to bring greater value to the client (Adelyn Zhou, 2017).

Deloitte uses natural language generation (NLG), the creation of text by computers, to handle 50,000 tax returns every year for clients who have employees in complex financial situations. Using NLG, Deloitte creates a detailed narrative report for each individual tax return. Relying on these reports, its tax professionals offer more targeted financial advice to clients during consultations (Adelyn Zhou, 2017).
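Deloitte's actual NLG pipeline is proprietary, but the core idea of turning structured tax figures into a narrative report can be sketched with simple templating. All field names and figures below are hypothetical, not Deloitte's real schema:

```python
# Minimal sketch of template-based natural language generation (NLG)
# for a tax-return narrative. Field names are invented for illustration.

def narrative_report(tax_return: dict) -> str:
    """Turn structured tax-return figures into a short narrative."""
    income = tax_return["gross_income"]
    deductions = tax_return["total_deductions"]
    taxable = income - deductions
    rate = tax_return["effective_rate"]
    owed = round(taxable * rate, 2)
    return (
        f"The client reported gross income of ${income:,.2f} and "
        f"claimed ${deductions:,.2f} in deductions, leaving taxable "
        f"income of ${taxable:,.2f}. At an effective rate of "
        f"{rate:.0%}, the estimated liability is ${owed:,.2f}."
    )

print(narrative_report({
    "gross_income": 120_000.0,
    "total_deductions": 24_000.0,
    "effective_rate": 0.22,
}))
```

Real systems generate far richer prose, but the principle is the same: the narrative is derived mechanically from the data, so advisers can spend their time on the advice rather than the write-up.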

EY starts small and aims to demonstrate immediate ROI. Chris Mazzei, chief analytics officer and emerging technology leader at EY, explains, “You have to take it from a business value perspective first, rather than a tech perspective first.”

Another issue is that each of the three companies, Deloitte, EY and PwC, employs a slightly different process for developing AI technologies.

At Deloitte, a 70-member internal innovation team focuses on all aspects of emerging technology; the team devotes 80% of its time to AI. Its mandate is to create use cases that guide AI-related investments across Deloitte.

AI can improve technology dramatically, so if a firm wants to stay ahead of all the others, it has to use AI wherever it can. If the others adopt AI and one of the Big Four does not, that firm will fall behind. Another reason for them to use AI is that it is genuinely useful in the accounting field: as a technology, AI can not only increase calculation speed but also replace human labor in more intelligent work, such as tax preparation, auditing and many other tasks.

Another issue is that although the Big Four all use AI, they each use it in a somewhat different way. In other words, they see it as a tool rather than as a technology: their leaders care less about the scientific development of AI than about the economic value it can bring. Each of the Big Four applies AI according to its own strengths and weaknesses so as to maximize its own benefit.

PwC likes to apply AI especially in its client-service business, while EY starts with small AI projects but demonstrates substantial ROI. At Deloitte, meanwhile, a 70-member internal team devotes 80% of its time to all aspects of the new technology.

Some negative effects do exist, since certain kinds of accounting jobs are affected more than others. Accountants need to improve their efficiency, professional skill and quality, because basic data-processing work can now be handled by the technology. For companies and corporations, meanwhile, how to improve efficiency and lower costs is a major issue; if accountants make no effort to adapt, they will lose their jobs. There are worries that accountants will be replaced entirely by AI, but such worries are overstated: AI can perform data analysis to some degree, yet managers still need accountants to draw insight from the data AI produces and to support their decisions. Accounting work falls into two kinds, managerial accounting and financial accounting, and auditing belongs to the latter. With RPA, the auditing process becomes more effective and less error-prone, so the demand for auditors falls, which in turn pushes their salaries down. Entrepreneurs, for their part, will find that AI improves efficiency and lowers costs beyond their expectations. Using AI, accountants can save a great deal of time collecting data while lowering the risk of mistakes and shortening the time needed to make decisions based on that data.

Without accounting, business cannot make progress, and without AI, many of its benefits would disappear and the speed of development would slow, which is not what we want. Almost everyone stands to benefit from AI's application in accounting, and if everyone benefits, the world will be better off. However, like any technology, it has side effects, and even the Big Four may fail to adapt to it. Those who do not adapt will fall behind; those who do will lead the business. For accountants and accounting firms, AI is a double-edged sword: for those who can turn its pressure into motivation, AI is a good thing. It was the best of times, it was the worst of times; AI is the best of technologies and the worst of technologies. If the Big Four take the right measures and benefit most from AI, it will further widen the gap between them and their competitors; otherwise, it will bring them down from their pedestal.

References

  1. Adelyn Zhou (2017), “EY, Deloitte and PwC Embrace Artificial Intelligence for Tax and Accounting”
  2. Daniel Faggella (2019), “AI in the Accounting Big Four – Comparing Deloitte, PwC, KPMG, and EY”
  3. Julia Irvine (2018), “Deloitte Reveals Global Revenue Up 11.3%”
  4. Aman Mann (2019), “Voices: How AI Is Transforming the Jobs of Accountants”

Reflections on Whether Computers Can Replace the Law

Legal reasoning is an old concept, traceable back to Roman times. Decisions were justified by reference to exemplar factual situations and the reasoning of other jurists, often seemingly guided by their own views. Present-day decision-making differs only slightly. To understand why a judge argues a case in a certain way, it is necessary to consider the reasons used to justify his reasoning. As Hunh suggests, a judicial decision can be reduced to a syllogism by the use of deductive logic; thus this work will first explore how such a syllogism comes into existence and what factors must be considered in its justification. Reference will also be made to the three main judicial decision-making models to clarify how judges reach a solution in relation to formalism and realism. This information will then be placed in the context of computers and artificial intelligence to show that certain aspects of legal reasoning cannot be reproduced by a machine sufficiently well to replace human lawyers.

Human Legal Reasoning

When a human lawyer engages in legal reasoning, the process he uses can often be simplified into a syllogism: a rule is applied to the facts to create an outcome. Following Sir Edward Coke's view that “reason is the life of law”, syllogisms may be assumed to be of a purely logical nature. Such an argument may be partly true in simple cases, yet even there it is often necessary to classify the facts under the rule, because the uncertainty of language leaves open whether a fact is covered by the rule of law or not. Classification is the labelling of a fact as an instance of a rule. The more difficult cases, by contrast, where there is a choice of conflicting or ambiguous rules, are resolved by a process of “evaluation and balancing”.

As legal reasoning is primarily rule-based decision-making, when faced with multiple relevant rules it is necessary to distinguish which of them is most applicable to the facts. Rules can derive from principles that justify them, and these principles are to be weighed against each other when applying the law. Dworkin suggests that the difficulty arises when, despite the clarity of the rule or principle, judges question its validity and overall applicability in terms of fairness. A two-stage proportionality analysis is then engaged, since most if not all rules deal with restrictions or positive obligations: the limitations of the rule are weighed against its costs and benefits, and factors such as whether less restrictive means are possible are also considered. The strength of rules ranges from merely indicative to conclusive, and guidance can be found in the relevant provisions or case law (the Practice Statement, etc.), although these are often still unclear and must themselves be interpreted.

Interpretation of rules creates further difficulties for lawyers. Not only is the law open-textured and lacking clear guidelines, but there are also different methods of interpretation with no order of priority among them. The French philosopher Montesquieu proclaimed that “the national judges are no more than the mouth that pronounces the words of the law, mere passive beings, incapable of moderating either its force or rigour”. However, it became obvious that interpretation was necessary in some cases, since to follow the words literally is to assume that rules are written perfectly. The Law Commission Report that stated this also attempted to encourage use of the purposive approach over the other three. The purposive approach was endorsed over twenty years later by the landmark case Pepper v Hart, which allowed extraneous material produced prior to the enactment of the rule to be used in cases of ambiguity.

Problems may arise when the freedom of interpretation is put into the context of realism. Realism is a type of legal reasoning based on one's ideology: it inclines a person to interpret information and to reason in a way that affirms their prior beliefs about an issue. Because it is part of human nature, it is safe to say that every lawyer uses this method unless he has consciously trained himself not to. However, as long as a person is capable of admitting failure to justify their answer, the “judicial hunch” may not be so harmful, and may rather support Hutcheson Jr.'s argument that it is the “true basis of legal reasoning”. Nonetheless, to truly understand how humans perform legal reasoning, the realist approach should be explored in greater depth, especially in relation to the attitudinal model.

The attitudinal model compares the ideology of judges with how they tend to vote on specific topics in order to identify the intellectual influences at play, and it is used to attempt to predict those judges' future decisions. In the US Supreme Court, a trend can be seen of justices quite often adhering to their beliefs when engaging in decision-making, yet the predictions are still not one hundred percent correct, as certain cases may involve topics on which a judge holds only an unstable opinion that can be swayed, leading him to vote contrary to his ideology. The UK, on the other hand, poses more difficulties for the attitudinal model: its Supreme Court not only hears fewer cases, but also does not sit with all judges present, so different combinations of judges may affect the outcome of a case.

The strategic model, by contrast, while not the opposite of the attitudinal model, focuses on the process of legal reasoning rather than on its inputs. The goal is not to promote one's own views but to reach a solid collective decision that will not be overturned; such an outcome is achieved by strategic bargaining between the judges during the judicial-conference stage of decision-making. In the UK, for example, the court once held that compensation was owed for damage inflicted during the war, a decision that was subsequently overturned by an Act of Parliament.

Computers and Aspects of Legal Reasoning

In order to determine whether computers can replace lawyers, the above-mentioned legal reasoning processes need to be examined in the light of artificial intelligence. The issues discussed here are the classification of facts according to rules, the balancing act, interpretation, and lastly the prediction of a case's outcome.

Classification, as aforesaid, is not a purely logical process: while in one situation it may be decided that a fact is an instance of a rule, in another it can be held that it is not, even though that too would seem a perfectly reasonable decision. AIs are already capable of being taught to profile information in order to classify it. One such instance is personalised pricing, where prices are adjusted to suit a customer's habits based on the information collected about them. However, there are also AIs that have failed at this task. Odyssey, court software that records judges' rulings, for example on warrants, has erroneously classified individuals for arrest, and some even had to register as sex offenders. Google's AI likewise made a notorious mistake when it labelled photos of two darker-skinned people as “gorillas”.

The balancing of sources is another issue to be briefly discussed. Ernst makes a convincing argument in this regard: the algorithms used to balance information are not to be underestimated. The data, while vast and seemingly objective due to its variety, is chosen according to the values and preferences of the creator, who must decide which criterion bears what weight.

Interpretation can cause difficulties even for humans, especially deciding which rule is to be used to read the source. A number of AIs currently manufactured, such as Alexa or Siri, use natural language processing to aid them; Bouazis defines this as “a computer program's ability to understand spoken and written language”. However, merely understanding formal and slang language as it appears in a dictionary is not enough. Following Baude and Sachs's argument, legal interpretation is concerned not just with the meaning of the words but with what law they signify and what its position is within the wider body of law. Dervanović further articulates this argument by indicating the limitations of language in reference to “unused legal provisions, of which validity does not expire by non-usage, while elements of language may cease to exist without usage”.

Lastly, foreseeing the decision of a case by computer may not be far off. Blue J Legal has created software capable of predicting the outcomes of employment-law cases. The ‘Classifier’ is said to “uncover hidden patterns in case law by discerning relationships between individual factors and court decision outcomes”. It can thus predict the outcome of a case, with up to 90% accuracy, based on the responses given to twenty or so questions; by changing any one answer, other possible decisions can also be explored.
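Blue J's internal model is not public, but the general shape of such a questionnaire-driven predictor can be illustrated with a toy weighted-factor model. The questions, weights, bias, and threshold below are all invented for illustration; a real system would learn them from a corpus of past decisions:

```python
# Toy sketch of a questionnaire-driven case-outcome predictor in the
# spirit of tools like the 'Classifier'. Every question and weight
# here is hypothetical, not Blue J's actual model.
import math

QUESTIONS = {                      # invented weights, as if learned
    "written_contract": 1.2,       # from past employment-law decisions
    "set_own_hours": -0.8,
    "employer_provided_tools": 0.9,
    "paid_fixed_salary": 1.1,
    "could_subcontract_work": -1.3,
}
BIAS = -0.5

def predict_employee(answers: dict) -> tuple:
    """Return (is_employee, confidence) from yes/no answers."""
    score = BIAS + sum(w for q, w in QUESTIONS.items() if answers.get(q))
    p = 1 / (1 + math.exp(-score))        # logistic squashing to [0, 1]
    return p >= 0.5, p

verdict, p = predict_employee({
    "written_contract": True,
    "employer_provided_tools": True,
    "paid_fixed_salary": True,
})
print(verdict, round(p, 2))
```

Flipping any single answer and re-running mirrors the "what-if" exploration the text describes: the user can see exactly which factor tips the predicted outcome.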

Conclusion

To conclude, while AIs have seen vast improvements and are ultimately able to foretell the decisions of a court based on the facts and precedents given, the process by which they reach that stage is uncertain. Computers are still unable to grasp the concept of rules and their place in the law as a whole. Furthermore, the classification test has been failed in numerous instances, and balancing is subject to the bias of the computer's creator. Computers therefore cannot replace lawyers; however, by working alongside them, while still leaving humans to call the shots, they can greatly increase productivity. This would allow more focus on the more pressing matters of legal reasoning while still maintaining high standards and accountability.

Abductive Reasoning as the Key to Build Trusted Artificial Intelligence

Modern AI systems have seen major advancements and breakthroughs in recent years. However, almost all of them use a bottom-up approach in which machines are heavily trained on as many situations as possible to increase accuracy and minimize their margin of error. This is a rather inefficient and at times untrustworthy way to teach machines: it requires large amounts of ‘good data’, and even then it remains uncertain whether the AI can be trusted in abstract situations. Abductive reasoning is a type of inference in which a conclusion is drawn from whatever information is available at the time, generating a ‘best explanation’ from it. This is much like how humans make decisions, and it is very intuitive. If this approach can be implemented in AI, it will become possible to trust AI in far more circumstances than ever before.

Thesis: I claim that implementing a top-down ‘abductive reasoning’ approach in AI systems will help us reach the next generation of AI, one that is more human-like. This approach will strengthen our trust in AI and increase its adaptability in diverse circumstances.

Introduction

Artificial Intelligence (AI) and Machine Learning (ML) have become buzzwords in today's technological world, where almost everyone wants a piece of the AI-ML cake. However, most people do not understand how AI works, what it takes to build a system capable of learning on its own, or how limited modern AI techniques really are: they cannot be trusted with many tasks that we humans take for granted. Kelner and Kostadinov (2019) state that almost 40% of European start-ups classified as AI companies do not actually use artificial intelligence in a way that is “material” to their businesses. Modern AI systems are heavily trained across different environments to eliminate their margin of error and increase accuracy, which is very inefficient and time-consuming. Why can modern AI systems not be trusted to perform tasks that humans find very basic? Why do AI start-ups find it difficult to compete with established AI organizations even when the former have more qualified and experienced people running them? Why do modern AI systems fail when exposed to new environments? All of these questions come down to a single answer: modern AI is heavily dependent on data. Without enough data, AI systems fail in most cases. Data is the foundation of artificial intelligence and machine learning; it is a common saying in the technological world that “whoever owns the data is king”. This type of thinking and modern AI practice have paved the way for fields like data science. However, it is not always possible to rely on data, because there are times when machines face abstract situations and must make quick decisions without enough data, and consequently fail. AI needs to be trained differently; a new approach is required to build trusted AI. Abductive reasoning may just be the solution to these problems.

Abduction, Deduction and Induction

Suppose you know that Drake and Josh recently had a terrible fight that ended their friendship. Now a friend of yours tells you that she has just seen Drake and Josh working out together. The best explanation you can think of is that they made up, so you conclude that they are friends again.

In “Silver Blaze”, one of Arthur Conan Doyle's short stories, Sherlock Holmes solves the mystery of the stolen racehorse by swiftly grasping the significance of the fact that no one in the house heard the family dog bark on the night of the theft. As the dog was kept in the stables, the natural inference was that the thief must have been someone the dog knew.

In these examples, the conclusion does not follow logically from the premises. For instance, it does not logically follow that Drake and Josh are friends again from the premises that they had a terrible fight which ended their friendship and that they have just been seen working out together; it does not even follow, we may suppose, from all the information you have about them. Nor do you have any concrete statistical data about friendships, terrible fights, and working out that might license an inference from the information you have to the conclusion that they are friends again, or even to the conclusion that they are probably (or with high probability) friends again. What leads you to the conclusion, and what according to a considerable number of philosophers may also justify it, is precisely the fact that Drake and Josh's being friends again would, if true, best explain the fact that they have just been seen working out together. The type of inference exhibited here is called abduction or, somewhat more commonly nowadays, Inference to the Best Explanation (Douven, 2017).

Abductive reasoning usually starts with an incomplete set of observations or inferences and moves from there to the likeliest possible explanation; it is used for forming and testing a hypothesis with whatever information is available (Kudo, Murai & Akama, 2009). This is the type of reasoning humans use most often. Apart from abduction, there are two other major types of inference, deductive and inductive. The difference between them is that deductive inferences are necessary, while inductive inferences are non-necessary.

In deductive reasoning, what you infer is necessarily true if the premises from which it is inferred are true; that is, the truth of the premises guarantees the truth of the conclusion (Douven, 2017). For instance: “All apples are fruit. Macintosh is an apple. Hence, Macintosh is a fruit”.

It is important to note that not all inferences are of this type. Consider, for instance, the inference of “Adam is rich” from “Adam lives in Manchester” and “Most people living in Manchester are rich”. Here, the truth of the first sentence is not guaranteed (though made very likely) by the combined truth of the second and third sentences. Put differently, it is not always the case that when the premises are true, so is the conclusion: it is logically compatible with the truth of the premises that Adam is a member of the minority non-rich population of Manchester. The case is similar with your inference to the conclusion that Drake and Josh are friends again on the basis of the information that they have been seen working out together. Perhaps Drake and Josh are former business associates who still had some business-related matters to discuss, however much they would have liked to avoid this, and decided to combine this with their daily exercise; that is compatible with their being firmly resolved never to make up.

It is common to group non-necessary inferences into the categories of inductive and abductive inference. Inductive inferences are those based purely on statistical data. For instance: “91 percent of UofT students got an average of 90+ in high school. Tanmay is a UofT student. Hence, Tanmay got an average of 90+ in high school”.

However, the relevant statistical information may also be given more elusively, as in the premise “Most people living in Manchester are rich”. There is debate about whether the conclusion of an inductive argument should be stated in quantitative terms—for example, that it holds with a probability of 0.91 that Tanmay got an average of 90+ in high school—or whether it can sometimes be stated in qualitative terms—for example, when the probability that it is true is high enough—and sometimes not.
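The three inference patterns above can be contrasted in a few lines of code. The deductive and inductive parts restate the essay's own examples; the abductive part scores candidate explanations for the Drake-and-Josh observation, with the scores invented purely for illustration:

```python
# Contrast of the three inference types from the examples above.

# Deduction: the conclusion is guaranteed by the premises.
apples_are_fruit = True
macintosh_is_apple = True
macintosh_is_fruit = apples_are_fruit and macintosh_is_apple  # necessarily True

# Induction: the conclusion holds only with some probability.
p_rich_given_manchester = 0.91          # "most people in Manchester are rich"
adam_probably_rich = p_rich_given_manchester > 0.5

# Abduction: pick the hypothesis that best explains the observation
# that Drake and Josh were seen working out together.
# The explanatory scores below are invented for illustration.
explanations = {
    "they made up": 0.7,                # fits everything we know well
    "unavoidable business meeting": 0.2,
    "coincidence at the gym": 0.1,
}
best = max(explanations, key=explanations.get)
print(best)   # inference to the best explanation
```

The hard part of abduction, of course, is exactly what the code glosses over: where those explanatory scores come from. Humans produce them intuitively; giving machines the same ability is the essay's central proposal.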

The State of Modern AI

There have been amazing advancements in AI during the past few years. Machines can recognize people and images, transcribe speech, and translate languages; they can drive a car, diagnose diseases, and even tell you that you are depressed before you know it, based on how you type and scroll (Dagum, 2018). The concept of AI has been around for a while, so why have we suddenly seen so many advances in recent years? The answer lies not in the algorithms but in the data. Whenever we hear about AI, it is often accompanied by words like deep learning and Big Data; the key point is that there must be enough good data and the expensive infrastructure to process it. In fact, the top 20 contributors to open-source AI include Google, Microsoft, IBM, Uber, etc. (Assay, 2018). These biggest players readily open-source their AI pipelines, but what do they not open-source? The data, because it is their number one asset.

While many sci-fi movies depict AI by highlighting its incredible computational power, in reality all effective practice begins with data. Consider Maslow's hierarchy of needs, a pyramid with the most basic requirements for human survival at the bottom and the most complex needs at the top. Similarly, Monica Rogati's Data Science Hierarchy of Needs is a pyramid depicting what is necessary to add intelligence to a production system. At the bottom is the need to gather the right data, in the right formats and systems, and in the right quantity (Rogati, 2017). Any application of AI and ML will only be as useful and accurate as the quality of the data collected. When starting to implement AI, many organisations discover that their data lives in many different formats spread across several MES, ERP, and SCADA systems. If the production process has been manual, very little data has been gathered or analyzed at all, and what exists has a lot of variance in it. This is what is known as ‘dirty data’, and anyone who tries to make sense of it—even a data scientist—will have to spend a tremendous amount of time and effort converting it into a common format and importing it into a common system where it can be used to build models. Once good, clean data is being gathered, manufacturers must ensure they have enough of the right data about the process they are trying to improve or the problem they are trying to solve: enough use cases, capturing all the data variables that affect each use case.
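The conversion step described above can be sketched concretely. The two records below stand in for exports from different systems (e.g. an ERP and a SCADA log); the field names, date formats, and decimal conventions are hypothetical examples of the kind of variance the text describes:

```python
# Minimal sketch of normalizing "dirty data" from mixed sources into
# one common format. All field names and formats here are invented.
from datetime import datetime

RAW_RECORDS = [
    {"date": "2019-03-01", "temp_c": "71.5"},    # e.g. an ERP export
    {"DATE": "01/03/2019", "TempC": "71,5"},     # e.g. a SCADA log
]

def normalize(record: dict) -> dict:
    rec = {k.lower(): v for k, v in record.items()}
    raw_date = rec.get("date")
    date = raw_date                               # fallback: keep as-is
    for fmt in ("%Y-%m-%d", "%d/%m/%Y"):          # try known date formats
        try:
            date = datetime.strptime(raw_date, fmt).date().isoformat()
            break
        except ValueError:
            continue
    temp = rec.get("temp_c") or rec.get("tempc")
    return {"date": date, "temp_c": float(temp.replace(",", "."))}

clean = [normalize(r) for r in RAW_RECORDS]
print(clean)
```

Even this toy version shows why the work is tedious: every new source system adds another key spelling, date format, or decimal convention that someone has to discover and handle before any model can be trained.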

Artificial intelligence can do wonders when it has access to sophisticated data, but is it possible to collect data about everything? No; the space of possibilities is combinatorially explosive. Just take the number of possible moves in a game of chess or Go: calculated correctly, it exceeds the number of atoms in the universe by a large factor (Silver et al., 2016). And these board games are relatively simple problems compared to real-world tasks such as driving a car or performing medical surgery. How can we trust AI to perform such tasks, knowing that there will always be situations in which the data is not enough for the machine to reach a conclusion? Moreover, AI is also at risk of failing when its data is corrupted or incorrect; in that case the machine will still reach a conclusion, but chances are the conclusion will be wrong. A German pilot who left his plane on an AI autopilot was locked out of his cockpit, and the autopilot crashed the plane; the black box later revealed that the autopilot had been fed incorrect, corrupted data, which led to the crash (Faiola, 2015). An AI system with trusted autonomy should be sophisticated enough to recognize and override such faulty data and commands.

Artificial Intelligence of the Future

It is one thing to win a game of chess or Go against a world champion (Silver et al., 2016); it is entirely another to risk our lives in driverless cars. That is the difference between an AI system that has memorized a set of rules to win a game and an AI system trusted to make spontaneous decisions when the number of possibilities is endless and impossible to compute. Modern AI machines work on the principles of deductive and inductive reasoning: computers are provided with complete sets of data and strict rules from which they draw their conclusions. This type of AI is very limited and hard to trust in the many situations that demand human-like reasoning, which runs on intuition and abduction.

In the past, AI advanced through deep learning and machine learning, which take the bottom-up approach of training on mountains of data. For example, driverless cars are trained in as many traffic conditions as possible to collect as much data as possible. But these data-hungry neural networks have a serious limitation: they have trouble handling ‘corner’ cases, about which they have very little data. For instance, a driverless vehicle that copes well with crosswalks, pedestrians, and traffic may have trouble processing a rare occurrence like children in unusual Halloween costumes crossing the road after a night of trick-or-treating. Many systems are also easily fooled. The iPhone X's facial recognition does not recognize ‘morning faces’ – a user's puffy, hazy look in the morning (Withers, 2018). Neural networks have beaten chess champions and conquered the ancient game of Go, yet they can be fooled by an upside-down or slightly altered version of a photo and misidentify it.

Many companies and organisations have already started to understand the importance of a top-down approach in AI, so in the future we will have top-down systems that do not require loads of data and are more spontaneous, flexible, and faster, much more like human beings with innate intelligence. There are four major areas where work needs to be done in order to implement a top-down approach in AI systems (Carbone & Crowder, 2017):

  1. More efficient robot reasoning. When machines have a conceptual understanding of the world, as humans do, they use far less data and it is much easier to teach them things. Vicarious, a Union City start-up backed by people like Mark Zuckerberg and Jeff Bezos, is working towards developing ‘general intelligence for robots’, enabling them to perform tasks accurately after very few training sessions. Consider CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart): they are very easy for humans to solve but surprisingly difficult for computers. Using computational neuroscience, scientists at Vicarious have developed a model that breaks CAPTCHAs at a much higher rate than deep neural networks, with greater efficiency (Lázaro-Gredilla, Lin, Guntupalli & George, 2019). Such models, which generalize more broadly and train faster, are leading us in the direction of machines that have a human-like conceptual understanding of the world.
  2. Ready expertise. By coming to conclusions spontaneously and modelling a machine on what a human expert would do in situations of high uncertainty and little data, an abductive approach can beat the data-hungry approach, which lacks all of these abilities. Siemens is applying a top-down approach in its AI to control the highly complex combustion process in gas turbines, where air and gas flow into a chamber, ignite, and burn at temperatures as high as 1,600 degrees Celsius. Factors such as the quality of the gas, the air flow, and internal and external temperatures determine the volume of emissions generated and, ultimately, how long the turbine will continue to operate. With bottom-up machine learning methods, a gas turbine would have to run for a century before producing enough data to begin training. Instead, Siemens researchers used methods that required little data in the learning phase. The resulting monitoring system makes fine adjustments that optimize how the turbines run in terms of emissions and wear, continuously seeking the best solution in real time, much like an expert knowledgeably twirling multiple knobs in concert (Sterzing & Udluft, 2017).
  3. Common sense. If we could teach machines to navigate the world using common sense, AI would be able to tackle problems that require diverse forms of inference and knowledge. The ability to understand everyday actions and objects, keep track of new trends, communicate naturally, and handle unexpected situations without much data would pave the way for human-like AI systems. But what comes naturally to humans, without much training or data, is unimaginably difficult for machines. There is progress: organisations have launched programs like the Machine Common Sense (MCS) program and have invested heavily to make this a reality (Zellers, Bisk, Schwartz & Choi, 2018).
  4. Making better bets. Humans have the ability to routinely, often spontaneously and effortlessly, go through the possibilities and act on the likeliest, even without prior experience. Machines are now starting to mimic the same type of reasoning with the help of Gaussian processes: probabilistic models that can deal with extensive uncertainty, act on sparse data, and learn from experience (Rasmussen & Williams, 2006). Alphabet, Google’s parent company, recently launched Project Loon, designed to provide internet service to underserved areas of the world through a system of giant balloons flying in the stratosphere. Their navigational systems use Gaussian processes to foresee where, in the stratified and highly variable winds aloft, the balloons need to go. Each balloon then travels into a layer of wind blowing in the right direction, and the balloons arrange themselves to form one large communication network. The balloons not only make reasonably accurate predictions by analyzing past flight data but also analyze data during a flight and adjust their predictions accordingly (Metz, 2017). Such Gaussian processes hold great potential: they do not require huge amounts of data to recognize patterns; the computations required for inference and learning are relatively simple; and if something goes wrong, its cause can be traced back, unlike the black boxes of neural networks.
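The appeal of Gaussian processes on sparse data can be shown with a minimal sketch. This is a toy example with a squared-exponential kernel and invented observations (it is not Loon’s actual navigation code): from just five data points, the model returns both a prediction and an honest measure of its own uncertainty, which grows in regions far from the data.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0, variance=1.0):
    # Squared-exponential covariance between two 1-D arrays of inputs.
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-6):
    # Standard GP regression equations (Rasmussen & Williams, 2006):
    # posterior mean K_s^T K^-1 y and posterior covariance K_ss - K_s^T K^-1 K_s.
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test)
    K_ss = rbf_kernel(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)
    mean = K_s.T @ alpha
    cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# Only five observations -- deliberately sparse, invented data.
x_train = np.array([-4.0, -2.0, 0.0, 1.5, 3.0])
y_train = np.sin(x_train)

x_test = np.linspace(-5, 5, 9)
mean, std = gp_predict(x_train, y_train, x_test)

# Uncertainty is near zero at a training point and large far from the data,
# so a planner can tell a confident prediction from a guess.
print(std[4] < std[8])   # True: x=0 is a training point, x=5 is not
```

Unlike a neural network, every quantity here is an explicit linear-algebra step, which is what makes a wrong prediction traceable back to its cause.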

Conclusion

Machines need to become less artificial and more intelligent. Instead of relying on a bottom-up ‘big data’ approach, machines should adopt a top-down ‘abductive reasoning’ approach that more closely resembles the way humans approach problems and tasks. This general reasoning ability will let AI be applied more diversely than ever, and it will also create opportunities for early adopters: even new organisations that were previously unable to compete with the leaders for lack of data will be able to turn their ideas into something useful and trustworthy. Using these techniques of abductive reasoning and the top-down approach, it is possible to build AI systems that can be trusted in any situation.