Zoom on

Does your company really need Artificial Intelligence?

In 2016, when Amazon launched Just Walk Out, the idea seemed like a really good one. In Amazon Go and Amazon Fresh stores, customers would be able to make their purchases without going to any cash register. They’d simply use their phone to identify themselves, an AI-supported CCTV system would register the items they picked up, and the bill would go straight to their Amazon accounts. But in April 2024, Amazon announced it was scaling the system back in most of its stores. Media reports claimed that, beyond various technical difficulties, the artificial intelligence required support from hundreds of workers in India tasked with fine-tuning product recognition.

 

Today, overblown enthusiasm for AI is an even bigger risk than it was in 2016, and few companies in the world can compare to Amazon in economic might and technological prowess. So, learning from Amazon’s experience, before launching any AI-related project, every manager should ponder a few basic questions.

 

The “question zero” we must ask ourselves is the most obvious one, but in some ways the trickiest too: do we really need an AI solution? It’s easy to get swept up in the AI craze these days, so we need to make a cool-headed appraisal of whether it’s truly the best path to take. This is a step zero we can’t afford to skip.

 

After that, the first question takes a deeper dive into the kind of benefits we want to get out of artificial intelligence. Even before the explosion of generative AI, many artificial intelligence systems could make various types of business processes more efficient (faster and cheaper), generating economic benefits for the organization. For specific types of businesses (often tech companies), these benefits could become real strategic differentiators. The same distinction comes up with Gen-AI: many of the projects currently being trialed aim to produce economic benefits by boosting efficiency, while others, more ambitious but for the time being less certain, envisage a more strategic use of Gen-AI.

 

The second question is about striking a balance between expected impact and technical feasibility. Obviously, the impact must be worth the cost and effort required to achieve it. But what we are prone to neglect, amid all the hype and psychological pressure to adopt AI solutions, is a serious feasibility study.

 

If many AI projects still stall out before scaling up, it’s because so few people are able to see an artificial intelligence system in its entirety. The risk is often focusing only on the training component: the model, the operating algorithm, and the data needed to make it work. Yet we tend to forget that all of this must be grafted onto pre-existing technological infrastructure. What’s more, the operators working in the field must accept the system, and it must communicate effectively with (or be integrated directly into) the other technological tools that constitute the backbone of company systems. All this holds true both for generative AI and for more traditional predictive AI solutions.

 

In short, artificial intelligence must be integrated into existing business processes. If this integration isn’t planned, and people aren’t encouraged to adopt it, the system will function in a very limited, isolated way at best. For example, if the system needs data collected and recorded by our salespeople in the field, we need to make sure they’re willing to help, and that they understand just how crucial it is for them to report these data reliably. Or, if we set up a system for drafting contracts, the staff in the legal department have to trust it. Or, if the system needs homogeneous data, we have to check the data we have in hand to make sure they don’t differ semantically and aren’t coming from business units with incompatible legacy systems.

 

The final consideration is the need for personalization (and how to achieve it). There are essentially four possible approaches (focusing here on generative systems and Large Language Models, LLMs).

 

  • Prompting is the softest approach. Here we adopt a standard system and try to refine it with prompts that steer its behavior. This can happen through prompting techniques used directly in the system’s user interface to improve the consistency and accuracy of the answers we get. In other cases, we can act on the so-called system prompt. An example from outside the business world is the racial and gender bias embedded in image generation systems: to limit the risk that a system trained on hundreds of thousands of photos of white male doctors keeps propagating that stereotype, we can add an instruction to the system prompt asking it to represent diversity whenever it gets a similar request (a minimal sketch of working with the system prompt follows this list).

 

  • Knowledge enrichment means specializing the system through knowledge. We push the system to formulate its responses considering only the material we supply, not everything it has been trained on. To make this possible, we need adequate data on the phenomenon at the heart of the use case. With a bot that helps with machinery maintenance, for example, we would feed it the user manuals for that specific machinery as its sole source of information (a simplified sketch follows the list).

 

  • Fine-tuning can tailor the model either to a specific domain (for example, the financial sector, by training it further on text that speaks that language) or to a specific task (data analytics, coding, etc.). To do this, we need to “get under the hood” of the model, so this approach requires highly specialized “mechanics.” If you take this route, proceed with caution! (A sketch of preparing the training data follows the list.)

 

  • Building a model from scratch may seem like an extreme, expensive option, but that’s not necessarily so. We need to draw a sharp distinction here. It may well be out of reach for Gen-AI systems and LLMs, but not for perfectly effective systems that use machine learning for predictive purposes (even if they’re not getting much buzz anymore). If we stick to the kind of AI everyone is talking about right now, and we’re not a tech company focused on this technology, we should take that option off the table. But if our interest goes beyond generative options and we’re open to machine learning and more traditional predictive models, building from scratch was, and still is, the best approach for a system truly tailored to our business needs (a final sketch below shows the more traditional, predictive version).
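
To make the prompting approach concrete, here is a minimal sketch of steering a model through the system prompt, using the OpenAI Python client. The model name, the wording of the instructions, and the assistant scenario are illustrative assumptions, not recommendations.

```python
# Minimal sketch: steering behavior through the system prompt.
# The model name, instructions, and scenario are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are an internal assistant for our marketing team. "
    "Answer concisely and in a professional tone. When a request involves "
    "depicting people (for example, professionals in a given field), "
    "represent a diverse range of genders and ethnicities."
)

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; substitute your own
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Write a short profile of a typical doctor for our brochure."))
```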
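
Knowledge enrichment is usually built on some form of retrieval over the company’s own documents (often called retrieval-augmented generation). The sketch below is deliberately simplified: a naive keyword match over maintenance manuals stands in for a real retrieval pipeline, and the folder name, model, and prompts are assumptions.

```python
# Simplified sketch of knowledge enrichment: the model must answer ONLY
# from excerpts of our own maintenance manuals. The manuals folder, the
# naive keyword retrieval, and the model name are all assumptions.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

def load_chunks(folder: str = "manuals") -> list[str]:
    """Split each manual into rough paragraph-sized chunks."""
    chunks: list[str] = []
    for path in Path(folder).glob("*.txt"):
        chunks += [p for p in path.read_text().split("\n\n") if p.strip()]
    return chunks

def retrieve(question: str, chunks: list[str], k: int = 3) -> list[str]:
    """Naive keyword overlap; a real system would use embeddings or a vector store."""
    words = set(question.lower().split())
    ranked = sorted(chunks, key=lambda c: len(words & set(c.lower().split())), reverse=True)
    return ranked[:k]

def answer(question: str) -> str:
    context = "\n---\n".join(retrieve(question, load_chunks()))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[
            {"role": "system", "content": (
                "Answer only from the manual excerpts provided. "
                "If the excerpts do not contain the answer, say so.")},
            {"role": "user", "content": f"Excerpts:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer("How often should the conveyor belt bearings be lubricated?"))
```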
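
Fine-tuning itself is provider- and model-specific, but it always starts from a set of example dialogues in the target domain. The sketch below shows only that preparatory step, writing examples in the chat-style JSONL layout that several hosted fine-tuning services accept; the examples, file name, and exact format are assumptions.

```python
# Minimal sketch of preparing fine-tuning data: each line of the JSONL file
# is one example dialogue in the domain the model should absorb. The examples
# and the exact format expected by your provider are assumptions.
import json

examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a financial-reporting assistant."},
            {"role": "user", "content": "Summarize the EBITDA trend in this quarter's report."},
            {"role": "assistant", "content": "EBITDA grew 4% quarter over quarter, driven by..."},
        ]
    },
    # ...in practice, hundreds or thousands of domain-specific examples...
]

with open("finetune_train.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```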
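
For the more traditional, predictive end of the spectrum, building “from scratch” typically means training a model on the company’s own historical data rather than adapting someone else’s. A minimal sketch with scikit-learn follows; the CSV file, column names, and choice of algorithm are hypothetical.

```python
# Minimal sketch of a predictive model trained from scratch on company data
# with scikit-learn. The dataset, column names, and algorithm are hypothetical;
# the point is the shape of the workflow, not the specific choices.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("customer_history.csv")  # hypothetical internal dataset

X = df[["tenure_months", "monthly_spend", "support_tickets"]]
y = df["churned"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = GradientBoostingClassifier(random_state=42)
model.fit(X_train, y_train)

# Evaluate on held-out data before worrying about deployment and integration.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Hold-out ROC AUC: {auc:.2f}")
```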
