
Artificial intelligence (AI) is in transition, both as a technology and in how it’s being used. Companies are increasingly bringing AI pilots out of the test labs and deploying them at scale, and some are seeing significant benefits as a result. 

Whatever uncertainty surrounds AI, ignoring its potential carries its own risk: companies that keep doing business the old way may go under.

For many organisations, however, deriving value from AI may be elusive. Their models might not be tuned. Their training data sets might not be big enough. Customers may be leery. There are also concerns about bias, ethics, and transparency. 

Pushing an AI initiative into production before it’s ready, or expanding an AI strategy beyond an initial phase before properly vetting its results, can cost a company money or, worse, send it in a direction detrimental to the business.

So how do you know whether an AI project will transform or sabotage your company? Without hard ROI numbers, companies have to get creative about how they measure success. Here’s a look at how IT leaders and industry insiders gauge the value of AI.

Mature vs. groundbreaking technologies

Measuring the business value of any initiative or technology isn’t always a linear calculation. AI is certainly no exception, especially when degrees of maturity and business potential are taken into consideration. 

Proven and predictive variables — like data mining, cost and training savings, investment and the ability to facilitate new uses — influence decisions when it comes to acceptable ROI, but putting a degree of trust in the technology, no matter how new or established, is essential.

At NASA’s Jet Propulsion Laboratory, for instance, the key factor to measure an AI project’s ROI is technology maturity.

Some AI use cases are at a high level of maturity, says Chris Mattmann, chief technology and innovation officer at NASA JPL. Take for example automating business processes.

“The boring stuff that every company has, we have too,” he says. “So we automate a lot of things like ticket processing, search, data mining, looking at contracts and subcontracts using AI.”

JPL uses commercially available technologies to do this, including DataRobot and Google Cloud. To determine whether a particular technology is worth investing in, the organisation looks at whether it will save costs, time, and resources, Mattmann says. “It’s mature, so you should be able to show this.”

For technologies at a medium level of maturity, JPL looks at whether the technology has the ability to enable new use cases, and at what cost. “For example, we’re going to Mars, and we have a thin pipe for deep space telecom,” he says, and today, there’s enough bandwidth to send about 200 pictures a day from Mars to Earth.

“Those brilliant Mars rovers we send have pea-sized brains in them,” he says. “They’re running iPhone 1 processors. We only put things in space that are radiation-hardened, where we’re confident they can withstand the deep space environment. The chips that we know perform well are those older chips so we don’t do advanced AI or ML on the rovers.”

But the Ingenuity helicopter, which was originally intended simply as a technology demonstration and wasn’t core to the mission, had a Qualcomm Snapdragon processor on board, an AI chip. “That demonstrated to us that it was possible to have newer chips and do more AI,” he says.

Here, the AI will enable new use cases not currently possible. For example, instead of sending back 200 images a day, the rover could analyse the images itself using AI and send a million text captions back to Earth to describe, for example, that there was a dry lakebed in a particular direction. “We could get more visibility with text than we do with images today,” Mattmann says.
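A rough back-of-envelope calculation shows why captions stretch a fixed downlink so much further than images. The sizes below are illustrative assumptions, not NASA figures:

```python
# Back-of-envelope: same daily downlink budget, images vs. text captions.
# Both sizes are assumed for illustration, not actual mission numbers.
IMAGE_SIZE_BYTES = 2_000_000   # assume ~2 MB per compressed image
CAPTION_SIZE_BYTES = 200       # assume ~200 bytes per short text caption

# Bandwidth that today's 200 images per day would consume.
daily_budget = 200 * IMAGE_SIZE_BYTES

# How many captions fit in that same budget.
captions_per_day = daily_budget // CAPTION_SIZE_BYTES

print(captions_per_day)  # 2000000
```

Under these assumptions, the same budget that carries 200 images carries two million captions, which is the kind of multiplier behind the “million text captions” figure.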

Finally, for the most cutting-edge, experimental AI technologies, the measure of success is whether they allow for new science to be done, and new papers to be written and published.

“There’s a cost to training and building models,” he says.

Companies like Google and Microsoft have ready access to giant volumes of training data, but at JPL, the data sets are hard to acquire and require PhD-level experts to analyse and label.

“At NASA, our costs to train a new AI model are 10 to 20 times that of commercial industry,” says Mattmann.

Here, new technologies are coming along that could allow NASA to create AI models with less manual labelling. For example, generative networks could be used to create synthetic training data, he says. Deep fakes, but for the benefit of science.
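The synthetic-data idea can be sketched in a few lines. The example below is a toy stand-in: where a real pipeline would sample from a trained generative network such as a GAN, it simply fits a Gaussian to a handful of hand-labelled points and samples from that. The feature values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny hand-labelled dataset (hypothetical 2-D features for one class).
real_data = np.array([[1.0, 2.1], [1.2, 1.9], [0.9, 2.0], [1.1, 2.2]])

# "Generative model": fit the empirical mean and covariance, then sample.
# A real pipeline would train a generative network on the labelled data;
# the Gaussian here just illustrates the synthetic-augmentation step.
mean = real_data.mean(axis=0)
cov = np.cov(real_data, rowvar=False)

synthetic = rng.multivariate_normal(mean, cov, size=1000)

print(synthetic.shape)  # (1000, 2)
```

Four expensively labelled samples become a thousand synthetic ones that inherit the same label, which is the lever that could cut the PhD-level labelling cost Mattmann describes.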

AI measurement and its spheres of influence

When there’s no direct way to measure the business impact of an AI project, companies will mine data from related key performance indicators (KPIs) instead. These proxy variables typically relate to business goals and can include customer satisfaction, time to market, or employee retention rates.

Case in point is Atlantic Health System, where patients are at the heart of every decision, says Sunil Dadlani, its senior vice president and CIO. So, in many ways, the return on investment in AI is measured by looking at improvements to patient care.

These patient-focused metrics include reduced length of stay, faster time to treatment, faster insurance eligibility verifications, and faster prior authorisations from insurers, he says.

Another project involves using AI to support radiologists in examining scans. A KPI is how often radiologists are alerted to potentially abnormal findings. 

“As of April 2022, 99 per cent of our radiologists have reported using AI to analyse more than 12,000 studies,” Dadlani says, adding that this has triggered nearly 600 alerts. “So physicians can address potentially serious issues as quickly as possible.”