AI tools: Why AI services are fast becoming business-critical

As a senior executive charged with keeping a bank operational, I have plenty of experience walking the tightrope of competing demands from all sorts of stakeholders.

One of the key questions I always asked myself and the team whenever we were evaluating an approach to technology was this: is it business-critical?

Now, ask the users of a specific system or process and they'll usually tell you (sometimes in no uncertain terms) that their thing is 'business critical'.

It's a phrase that gets thrown around a lot without due care and consideration. The office canteen ticketing and daily menu display system is highly 'business critical' when you're the chef, or when you're trying to figure out what to eat for lunch. But if it goes offline, restoring it is almost certainly a second-order priority.

Payments infrastructure. Air conditioning in the data centre. Email. These are the things that genuinely matter, where a failure can significantly impact your ability to run anything at all.

Those kinds of systems are therefore invariably considered 'business-critical' and will often have different levels of priority within that specific category.

I write this because quite a number of the colleagues I'm working with across financial services are beginning to apply this label to the AI tools their teams use. For the avoidance of doubt, I mean services such as (but not only) Anthropic's Claude and OpenAI's ChatGPT.

A growing number of use cases now involve a call out to third-party AI services. Although some financial institutions are being tied in knots by their AI governance approaches, others are going full steam ahead.

Today I caught a glimpse of this 'business critical' reality myself. I use these tools a lot for organising, automation and engineering, so it was quite annoying to find that my $20 'premium' Claude account wasn't functional. I got the Claude equivalent of a fail whale when I tried to access its services.

Yes, the outage probably lasted only 15-20 minutes, and I simply used ChatGPT for the task instead (I wanted a CSV file parsed).

As an individual with no data protection, control or other specific concerns about this particular task (it relates to a software engineering test I'm doing), it was easy to swap to another provider. That is absolutely not the case for bigger (and regulated) entities.
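
To make the point concrete, here is a minimal sketch of what that swap looks like at an individual's scale, assuming the official Anthropic and OpenAI Python SDKs and using illustrative placeholder model names: try the preferred provider, and fall back to the other if the call fails. None of the enterprise machinery a regulated institution would need, from vendor due diligence to data residency and model risk sign-off, appears anywhere in it, which is rather the point.

```python
# Minimal provider-failover sketch (illustrative only).
# Assumes the official `anthropic` and `openai` Python SDKs are installed
# and that ANTHROPIC_API_KEY / OPENAI_API_KEY are set in the environment.
# Model names below are placeholders, not recommendations.
from anthropic import Anthropic
from openai import OpenAI

PROMPT = "Parse this CSV header and list the column names: id,name,amount"


def ask_claude(prompt: str) -> str:
    """Send the prompt to Claude and return the text of the reply."""
    client = Anthropic()
    msg = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model id
        max_tokens=512,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text


def ask_chatgpt(prompt: str) -> str:
    """Send the prompt to ChatGPT and return the text of the reply."""
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model id
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


def ask_with_failover(prompt: str) -> str:
    """Try the preferred provider first; fall back to the other on any error."""
    try:
        return ask_claude(prompt)
    except Exception:
        return ask_chatgpt(prompt)


if __name__ == "__main__":
    print(ask_with_failover(PROMPT))
```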

We're already seeing institutions push the 'enterprise' button when it comes to consuming and managing their AI services (check out the NatWest news about their OpenAI agreement), and I think we'll continue to see more movement here across the enterprise space. One of the CIOs I spoke with last week commented that they were thinking of asking their LLM partner to partition off a dedicated part of their data centre purely for the CIO's internal customer use cases, so that if (and when) the wider service is occasionally interrupted, that instance would ideally be protected. Such is the growing inelastic demand for AI services.

Fascinating times!