What AI Can and Cannot Do

From financial to public services, there are limits to what AI can do now and what it will ever be able to do

Shiny and New

Imagine walking into your dream home. The rooms are light and spacious, the finishes are to your taste, and you hurry to make an offer. Once you move in, however, you discover that the beautiful exteriors hide a mess of wires and pipes that are a struggle to maintain. The manufacturer has stopped making the light fittings and you’ve no idea how the washing machine got down the stairs. What was once shiny and new now needs repair after repeated use.

The longevity of products depends on how well they are made. Machine learning and AI tools depend on the quality of their data. Surveys suggest 60% of development time is spent preparing it; our experience working with FinTech companies specialising in tools for global equity markets puts the figure closer to 80%. Even then, there are constraints on what is possible.

Predictive trading tools follow the rules that govern stock exchanges. These rules constrain the possible outcomes, which helps the accuracy of forecasts. Thereafter, clients must choose how much data to subscribe to and how much computing power to buy to speed up calculations. Automated traders gain a minute advantage by knowing a little more, slightly faster, than competitors, and this can translate into significant profits. Support tools for human traders, by comparison, benefit from better aesthetics and do not require the same speed.

Investing is another matter. Decisions take into account company fundamentals, news and sentiment. Investor timelines vary, and the longer they run, the more factors and chance occurrences influence the outcome. This is not unique to financial services: in many industries, conditions are so complex that particular outcomes cannot be predicted.

Hype and Over-Automation

Arvind Narayanan and Sayash Kapoor coined the term “AI snake oil” for companies peddling solutions to problems that AI does not and cannot solve. They cite examples of software that predicts whether criminals will reoffend, one such product resting on a 137-question survey. Even with a complete set of answers, the AI model knows nothing of an individual’s motivations or the chance events that may affect the likelihood of reoffending. As a result, the authors argue, the AI fails.

Many of the predictive AI failures the authors describe are really human errors in data science: data is incomplete or poorly understood, engineers are unfamiliar with statistical techniques, and executives are unaware of the limits of computer science. The resulting products are human knowledge, and its accompanying frailties, transposed into silicon, rather than artificial intelligence.

The genuine problem AI Snake Oil uncovers is the bloated promises commercial companies make to naïve customers, often in the public sector and under great pressure to cut costs. There is nothing unique to AI in this; snake oil salesmen are as old as commerce itself.

How much it matters that predictions don’t work depends on the industry. Hedge funds can make billions from being right 52% of the time: the judicious scaling of bets turns a small advantage into big wins, and being wrong almost half the time is simply a cost of doing business. In fact, if you are right more often than this, you are not taking enough risk.
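To see how a thin edge compounds, here is a minimal sketch in Python. The numbers are illustrative assumptions, not figures from any fund: even-odds bets won 52% of the time, with a fixed fraction of capital staked on each one.

```python
import random

# Illustrative assumptions: even-odds bets won 52% of the time,
# staking a fixed fraction of capital on each. The Kelly fraction
# for even odds is 2p - 1, i.e. 4% here.
P_WIN = 0.52
FRACTION = 2 * P_WIN - 1

def simulate(n_bets: int, seed: int = 0) -> float:
    """Return final capital, starting from 1.0, after n_bets."""
    rng = random.Random(seed)
    capital = 1.0
    for _ in range(n_bets):
        stake = capital * FRACTION
        capital += stake if rng.random() < P_WIN else -stake
    return capital

print(f"Capital after 10,000 bets: {simulate(10_000):,.1f}x")
```

Nearly half the individual bets still lose; the profit comes from repetition and position sizing, not from any single forecast being reliable.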

In contrast, services that determine people’s life chances cannot be delivered on predictions with a 52% success rate. Companies that rush into automating processes that are untested, or have been shown not to work, compound the problem. Examples include the embedded racial bias in products predicting likely reoffenders, or the age bias in automated hiring software. These failures are caused by the pressure to perform, not by artificial intelligence itself.

Caution with Copilots 

Using a general-purpose AI model for specific tasks carries a high risk of failure. Wherever possible, you should adapt foundational models with your own data. If you haven’t figured out what your data can tell you, be cautious about conclusions drawn using probabilistic models. You must have a theory for why an outcome is correct; the data science then proves or disproves it. Unexplained and unlikely patterns rarely make for good products.
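As a minimal sketch of that discipline, with synthetic data standing in for your own: state the theory first, then let a statistical test support or reject it.

```python
import numpy as np
from scipy import stats

# Hypothetical example: our theory is that metric x drives outcome y.
# The synthetic data below has a deliberately weak true relationship.
rng = np.random.default_rng(42)
x = rng.normal(size=500)            # stand-in for a business metric
y = 0.1 * x + rng.normal(size=500)  # outcome with a weak true link

r, p_value = stats.pearsonr(x, y)
print(f"correlation r={r:.3f}, p-value={p_value:.4f}")

# A small p-value is evidence for the theory; a large one suggests
# the "pattern" may be noise, and no product should be built on it.
```

The test does not make the theory true; it only shows whether the data is consistent with it, which is exactly the question to put to any model’s output.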

This applies when using copilots. These are foundational models fine-tuned for types of tasks, but not necessarily the tasks you want done. If the results don’t seem right, they probably aren’t. All the data in the world on reoffenders, for instance, does not stop applications from replicating the bias in law enforcement arrest records.

Managers should not delegate untested processes to human assistants. They may, however, ask their team to find an answer to a perplexing question. A manager does not accept that answer without questioning, and will grill the team on what they did and why it matters. The challenge with delegating to AI is that few people possess the skills to question data scientists. Whether the expertise is in-house, or hired through ChatGPT or an AI copilot, always question whether outcomes make sense for your business.

There is a race to adopt AI to develop solutions that human intuition has failed to find. This leads to hype cycles and to the automation of roles that are not ready to be lost. If you make data-driven decisions where you need to be right only a majority of the time, then machine learning with skilled data scientists is for you.

If there are legal or regulatory consequences to being wrong, automate only what already works, and work with a partner such as MSBC, which has the data science, technical skills and access to computing power to develop your solution. Buying off-the-shelf answers to problems you cannot solve yourself may cause further problems. If you do, then like the dream homeowner facing an unexpected leak, cross your fingers and hope that those who built what lies behind the scenes did a professional job.

Thought Exercises

  1. Do I understand what the data in my business is capable of telling me?
  2. Do my data-driven processes deliver consistent, desirable outcomes?
  3. Have I bought AI to solve a problem I could not otherwise solve?
