Three Signs to Tell if the AI You Bought Is Snake Oil

I’ve worked in Fintech and Insurtech for some years now, and it always excites me how the financial services industry digitizes and innovates much more rapidly than other industries (Healthcare, I’m looking at you!).

Banks and insurers have always been savvy about the value of data, analytics and statistical modeling, with long-standing applications ranging from data-driven credit scoring and loan underwriting to automated claims assessment and fraud detection. AI and machine learning came naturally as the next big technology to transform the industry.

Along the way I’ve met and worked with more than a dozen AI startups and companies, trialling them out with our customers and forging lasting partnerships with the ones that work.

What I’ve noticed is that for every one good AI company out there, there are many more selling ‘AI snake oil’ to the market.

Here’s how I learned to tell them apart:

1. The AI company asks for your entire 10 years of historical data as a prerequisite to run the trial

Every AI company worth its salt already has its own models and algorithms, built on public or proprietary data sets. The model, while not tailored to your specific environment, should still work well enough to deliver trial results on a minimum pilot data set. They shouldn’t need to rebuild their entire AI model from your historical data.

If they do ask for it, buyer (or trialler) beware! Did you think a ‘FREE 6-Month Trial’ was a good deal? Now they’ll take the model built on your painstakingly collated data and sell it to your competitors!

We heard of one financial institution whose pilot trial failed, yet the AI company went on to raise millions afterwards (presumably with the help of its newly acquired dataset to refine its model!).

2. The working level doesn’t use it or believe in it

You know the drill. Management loves the press exposure of being an AI innovator, and the promised 100x ROI within 6 weeks (who wouldn’t?). The working level voices a weak objection that it didn’t really work during the trial. The company buys it anyway.

Fast forward six months, and one day Management asks about it and the results achieved. Folks scramble to interview users and collate a couple of ‘success case studies’.

The problem? There’s no one to interview, because:

  • No one remembers what it is or how it works
  • The ones who do can’t remember their password to log in

This scenario has played out more often than you may think. One case we heard of was an industry solution that was sold to all 20+ financial institutions in the country! We heard that all the working-level users did was log in every morning, click ‘Dismiss’ on the AI alerts, and then go about their daily lives.

One customer was going to show us the management dashboard, but realized they’d forgotten their password (they hadn’t logged in since the day it went live).

The Gold Standard for an effective AI solution is never whether Management sees the value in it, but whether the working level does. And if they do, you can be sure they’ll use it whether or not you ask them to!

3. When things don’t work, the AI company blames everyone but themselves

As the core technology platform serving over 380 financial institutions, we have experienced this. A lot.

In one pilot trial, the AI company blamed the data set we provided for being ‘incomplete’ (we had provided it exactly per their instructions).

In another, they blamed the users for being uncooperative and ‘resistant to change’ (these same users pioneered an industry-first innovation some months later). Since when is it the users’ fault that a solution isn’t useful?

The point is, by deflecting to external factors to justify its lack of results, the AI company missed a genuine opportunity to reflect internally on what went wrong in its AI model, and on how to turn it around.

And we all know what happens next. The pilot fails, and the AI company goes on to try its luck peddling snake oil to the next unsuspecting victim.

______________________________________________________

While the industry will never be free from snake oil salesmen, we should all endeavor to weed them out, to protect the integrity and value of AI and its transformational potential.

Do you have your own ‘AI snake oil’ story to share? What was the moment when you found out? Hope to hear from you in the comments below.

— — — — — — — — — —

Sebastian Tan is Regional Business Manager at Merimen Technologies, driving collaborative change and digital transformation for the Insurance industry.

Merimen Technologies is Asia’s leading Insurance Software-as-a-Service company, serving more than 140 insurers across 8 countries.

Merimen is a fully owned subsidiary of Silverlake Axis, the trusted technology partner serving over 40% of the top 20 largest banks in Southeast Asia.

Contact the author at sebastian @ merimen.com.
