Data and AI: Avoiding the hype and getting value for your business

Hype. Hot air. Hokum. Call it what you will, there’s no getting away from the noise around AI right now—and that can make it hard to work out what’s really going on.

Cue our most recent webinar: “Avoid the Data and AI Hype: How to Unlock True Business Potential”. Exploring everything from skills and expertise through to rules and regulations, our guests shared their thoughts on some of the biggest issues surrounding AI and data today—and how to cut through the hype and deliver genuine impact for your business.

Michael Seipp, our host and 101 Ways’ Chief Service Delivery Officer, was joined by: Dr. Janet Bastiman, Chief Data Scientist at Napier AI; Alberto Rey Villaverde, a consultant and former head of data and AI operations for Virgin Media O2; and 101 Ways’ own Technology Director, Grant Smith.

What did they have to say? Here’s a recap of some of the key questions they answered during their discussion.

‘AI isn’t exactly new, so what’s behind the recent spike in attention?’

A few different forces have converged to drive AI up the agenda, according to our guests. First of all, there's the human component to take into account. When ChatGPT launched in November 2022, it had a huge impact. Here was something genuinely different: a very human-sounding chatbot that could brilliantly and coherently answer (almost) any question you might have. So, Large Language Models (LLMs) have played a major role in propelling AI into the limelight.

At the same time, the media's fixation on AI has helped to create something of a self-perpetuating hype cycle. As coverage of ChatGPT and other forms of Generative AI has grown, so too has the number of startups claiming to be able to use that tech to reduce costs, increase productivity, and a whole lot more besides. Many enterprises have then decided that they absolutely need AI – even if they're not quite sure what problems they want to solve with it.

So, the current buzz around AI is ultimately very similar to what we saw around big data back in the late 2000s. And, in much the same way, there’s now a growing realisation that AI isn’t a magic solution to every problem. 

‘There’s a lot of noise around AI, but it’s obviously not all just hype. So, what is it really?’

There's no single answer to that question because it depends on everything from the space in which your business operates to its appetite for using AI in the first place. Today, the practical applications of AI in most businesses are relatively limited; typically, it's used for simple tasks like writing an email or creating a meeting agenda.

That’s not to say that the opportunities aren’t there. AI could take the form of a predictive maintenance system that flags in advance when a piece of machinery is likely to fail, for instance. AI could even set a company’s entire strategy by analysing data and coming up with what it perceives to be the best course of action. Ultimately, AI is what you make of it and – today, at least – most companies are still a long way from getting real value from it.

‘All of the hype around AI has made it difficult to have a sensible conversation about it with my exec team. How would you tackle that?’

As distracting as the hype around AI can be, it’s also a great way to get execs to engage. The challenge, said our panel, is ensuring that they understand that all of the ‘cool’ examples of AI are really just the icing on the cake.

The biggest priority here is education. If you want to do all of those cool things, then you also need to get your house in order. After all, you only need one part of your overall tech stack to be a mess for it to hamper your ability to get value out of AI. Your execs should know that an LLM won't be able to generate a good customer service script if you don't have your data warehouse or data models structured correctly, for instance.

If all else fails, don't be afraid to point to examples of failure; no-one wants to be in the position Air Canada found itself in at the start of the year. Engineers aren't necessarily trained to be storytellers, but creating a narrative around AI can definitely help to drive better – and more sensible – conversations about it.

‘There’s a lot of talk about putting the right kind of foundations in place for AI, but what exactly are they?’

There are two things to take into consideration here – data and talent. 

As mentioned above, one of the big challenges around data is ensuring that it's structured in a way that enables AI to extract value from it. But there's more to it than that. You also need to understand the legality of how you can use that data. Can you move it around the world, for instance? Can you share it with third-party APIs? These are crucial questions to ask.

When it comes to talent, it’s typically about having the right people in the right places. That’s important not just from a technical perspective, but in terms of helping to identify the right use cases too. If you don’t have people who understand the problems that you’re trying to solve, then it’s very easy to go down the wrong path with AI.

‘What should we be thinking about from a regulatory perspective when it comes to AI?’

Explainability is vital. If you're going to use AI to inform some of the decisions you make, then you also need to be able to understand how and why it arrived at its recommendations. That's true for any application of AI, but particularly so when the results it generates impact customers or users in some way.

As a side note on explainability, it's worth remembering that AI isn't always right. Unless you know an algorithm inside out and genuinely understand why a recommendation is being made, you'd be unwise to follow it without further scrutiny.

‘What’s the difference between an enabler and a direct application when it comes to AI?’

Fundamentally, they’re exactly what they sound like. Enablers are the things that help you get AI up and running, and the applications are the ways in which you might deploy it. So, talent, data, computing power, and so on are enablers. Chatbots, predictive analytics, fraud detection systems—they’re what we refer to as direct applications.

‘What have you learned about how to get the most out of AI when implementing it?’

Domain expertise is critical. Alberto pointed to a model built to mimic the pricing decisions made by seat traders at a major airline: it needed input from an actual trader to reach a sufficient level of accuracy. Domain expertise can give context to an algorithm, helping a machine learning model to understand why a decision has been made – not just what was decided upon.

Maintaining a healthy amount of scepticism around AI is also important. Grant shared an example of an LLM that had been created to predict house prices. What quickly came to light was the fact that – rather than limiting itself to the data it was being fed – the model was also drawing from a wide range of blogs that discussed house prices. Again, testing and explainability are key.

Looking for tips on your data and AI journey? We can help.