How not to do AI
Rebecca sighed as she put down the phone. An hour ago the old inventory processing system had gone offline, most of the development projects were far exceeding their budgets, and her department was dangerously low on software engineers. Artificial Intelligence was the last thing on her mind right now. However, once Steve had set his sights on something, she didn’t want to be the one who got in his way.
Parts Unlimited did not have the necessary skills in-house, so whatever AI project they undertook would have to be outsourced. Somehow she managed to reallocate 50K of next year’s budget and set out to find an AI consultancy. She called around, found some suitable candidates, and asked each of them to submit a proposal for an AI Proof of Concept. Once the proposals came in, she selected the one that seemed most promising.
Not much later, a team of experts showed up in the office. They interviewed a few managers, identified a use case in Supply Chain Management, got hold of a large amount of data, and got cracking. After a few months, they presented their results to the executive board. In static tests, their model was able to identify manufacturing flaws in the assembly line with 15% higher accuracy than the factory workers. The PowerPoint looked tremendous and contained many graphs. Steve was impressed and complimented Rebecca on a job well done. As soon as the presentation finished, the AI experts left as fast as they had arrived. The PowerPoint, and the 100-page report that came with it, were filed in the depths of the company’s SharePoint, never to be read again. Rebecca got back to trying to fix the inventory processing system, and the entire ordeal was swiftly forgotten. Parts Unlimited ended up with a manufacturing yield much lower than that of its competitors, who were able to harness the power of AI. In the next earnings call, Steve had to explain why his investment in AI did not result in a higher defect detection rate.
Does this sound familiar?
Much too often, this is precisely what happens to a company’s first endeavor into AI. They come in with high hopes and expectations, inspired by an endless stream of media hype, only to be disappointed by the outcome of the project.
There are a few reasons for this. First, AI is the cherry on top of the IT maturity cake. To meaningfully integrate AI in a company’s day-to-day operations, a high degree of maturity in their business processes and data collection is essential. Second, offline AI proofs of concept are inherently set up to fail from the beginning.
Let’s first have a look at why companies choose to do AI POCs. Although the concepts behind AI have been around for decades, widespread implementation has only become feasible in recent years. New technology means much uncertainty and a limited number of highly specialized experts, which means high investment costs. Companies are afraid to overcommit to something they believe might end up being just hype, but are also scared to miss out on the competitive advantage AI could bring. The proof of concept is perceived as the perfect middle ground: proof to stakeholders that the company is on board with the latest technology innovations, without having to make a considerable investment. From a business perspective, this makes total sense. However, the current standard in AI POCs is not able to prove value.
Proofs of concept are typically set up in a similar way. Based on the availability of data, a use case is chosen that will be the subject of the project. Machine learning experts do a thorough analysis of the data and transform it into the shape the model requires. Next, a model is chosen that’s able to provide the desired result. The model is trained on the data and is then typically used to either classify existing data or predict missing or future data. So far, so good. The problem lies in what is done with the result. Often it ends up in a static report, without any real-time connection to new data that would allow the model to be refined further. It’s impossible to measure the result of the AI POC because it is never actually used. The lack of integration into the existing IT environment means the result cannot be used to increase efficiency, improve business processes, or add business value to the company.
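The typical offline POC described above can be sketched in a few lines. This is a minimal, illustrative example only: the dataset is synthetic, and the choice of a random-forest classifier is an assumption, not something the consultants in the story necessarily used. Note how the pipeline ends at a single static accuracy number, exactly the kind that goes into the report and no further.

```python
# A minimal sketch of a typical offline AI POC pipeline.
# The data is synthetic and all modeling choices are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in for the historical defect data the consultants got hold of.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Transform the data into the shape the model requires: a train/test split.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Choose and train a model that classifies parts as flawed or not.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# The POC ends here: one static accuracy figure, destined for a slide.
static_accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"Static test accuracy: {static_accuracy:.2f}")
```

Nothing in this script touches production systems or new incoming data, which is precisely the problem: once the script finishes, the model's life is over.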
However, there is a better way. A way that can deliver tangible business value without taking up half of the company’s IT budget. The Proof of Concept is used to estimate the total value a particular technology can bring to your organization; however, for the reasons discussed above, it delivers no value in itself. The Minimum Viable Product, or MVP, provides a better alternative. Its primary focus is not proving anything but delivering a small part of the value, the value that is the reason you are investing in this technology in the first place.
The focus of your AI MVP should be end-to-end integration with existing data services instead of an isolated offline report. Let’s consider the Parts Unlimited story. Instead of investing all their time in the creation of a model with very high accuracy, it would have been much more valuable if the consultants had used their model in one of the assembly lines, working alongside human workers to detect manufacturing flaws. The theoretical accuracy might have been lower, but it would have proven value from day one, and everyone involved would have gained experience of what it is like to work with an AI system. From this first step, it’s much easier to scale up your MVP and iterate toward a more accurate model and more assembly lines.
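What such an MVP might look like in code: the model runs inside the production flow, flagged parts are routed to a human inspector, and every human verdict is logged so that live accuracy can be reported and the model retrained. This is a hypothetical sketch; the names (`InlineDetector`, `inspect_part`, `ThresholdModel`) are invented for illustration and do not correspond to any real system.

```python
# A hypothetical sketch of an MVP-style integration: the model works
# alongside human inspectors, and their verdicts feed a retraining log.
from dataclasses import dataclass, field


class ThresholdModel:
    """Toy stand-in for any trained model with a predict() method."""

    def predict(self, features):
        # Flag the part as flawed if its sensor readings exceed a threshold.
        return sum(features) > 1.0


@dataclass
class InlineDetector:
    model: ThresholdModel
    feedback_log: list = field(default_factory=list)

    def inspect_part(self, part_id, features):
        # The model's verdict routes the part, but does not decide alone:
        # flagged parts go to a human inspector for confirmation.
        return {"part_id": part_id, "flagged": self.model.predict(features)}

    def record_verdict(self, features, flagged, human_verdict):
        # Each confirmed or overruled prediction becomes new training data.
        self.feedback_log.append((features, flagged, human_verdict))

    def live_accuracy(self):
        # Measurable value from day one: accuracy against human verdicts.
        if not self.feedback_log:
            return None
        hits = sum(1 for _, f, v in self.feedback_log if f == v)
        return hits / len(self.feedback_log)


detector = InlineDetector(ThresholdModel())
result = detector.inspect_part("part-001", [0.6, 0.7])
detector.record_verdict([0.6, 0.7], result["flagged"], human_verdict=True)
```

The key difference from the offline POC is the feedback loop: `live_accuracy()` can be reported continuously, and `feedback_log` is the raw material for the next, better model.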
Companies have to realize that if they want their AI endeavors to succeed, they need to invest as much or more in internal preparation and training, of their data, systems, and people, as they invest in the AI experts themselves.
Turning AI into a competitive advantage requires a vision and, consequently, requires serious commitment. It’s the only way to set up your AI project for success and the creation of long-term business value.