Let’s start with a hypothetical example.
A large airline wants to predict when plane engines need servicing.
Predictive maintenance could save it a fortune: it would avoid the costly downtime of taking working planes out of service for scheduled maintenance, while still spotting issues before they become a problem.
To do so, this company is keen to take advantage of machine learning -- one of the tools of artificial intelligence -- which it has heard can learn to spot the early stages of a problem based on past engine data.
So it asks someone in the IT team who is keen on machine learning to take on the problem.
They find a relevant-looking model online as a starting point and refit it to the problem at hand.
They train it on data from past failures, then productionize it.
After a while, it starts to flag problems where there are none.
The company decides it can't trust the model, confidence in investing in machine learning projects wanes, and it abandons the effort, returning to its legacy scheduled-maintenance regime.
This sort of thing happens all the time. Both the actual cost and the opportunity cost of AI disillusionment can be huge.
The problem is that AI is being built by people who don’t really understand it. As more money is invested and bigger returns expected, there is a real need to professionalize AI.
Five Amateur AI Pitfalls
Let’s look at where things go wrong without professional standards, and how to avoid them.
1. You Ask The Wrong Questions
"How can we apply AI to this?"
Questions like this presume AI is the answer and may send you in the wrong direction. Instead, businesses need to ask, "How do we reach this goal or solve this problem?" The answer may or may not be AI.
2. You Find Anyone In Your Organization With An Interest In AI
"The members of the IT team are technical; they’ll be interested in a new AI project. Let’s get them on it."
It’s easy to underestimate the skills required.
A programmer who’s completed an online machine learning course is not automatically an expert.
Machine learning in practice is complicated, involved and demands a lot of contextual understanding and hard-won experience, as well as technical expertise.
Unless you’ve actively hired such people, they probably don’t exist in your team. The guy who built a neural network in his bedroom may well learn over time under expert guidance, but he shouldn’t lead the first AI project.
3. You Find A Model To Get Started With
"Has someone built something similar that we can download and use for inspiration?"
The best, most successful models are designed from the ground up with the problem in mind. A model that wasn't built for your specific situation or task can only ever be an approximation of what you need.
Previous work can provide inspiration, but the person making that call needs to understand the needs at hand and have the experience to make informed decisions about incorporating elements from commodity or existing models.
4. You Train Your Model With Whatever Data Is Lying Around
"We’ve got loads of data -- let’s see what it can tell us."
Many enterprises may have access to good models, but they ruin them with bad training regimes, teaching the AI to learn from human biases. A common error is to only use training data from successful outcomes. Humans are notorious for highlighting when they got things right and instinctively want the AI to learn from successes. But AI also needs to learn what failure looks like.
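To make that concrete, here is a minimal sketch in Python of what "teaching the AI what failure looks like" can mean in practice. The data, column names and model choice are all hypothetical assumptions for illustration; the point is simply that the training set must contain both healthy runs and failures, and that the imbalance between them should be handled explicitly rather than ignored.

```python
# A minimal sketch using hypothetical, synthetic engine-health records.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical sensor readings plus an outcome label. In practice these
# would come from maintenance logs, not random numbers.
df = pd.DataFrame({
    "vibration": rng.normal(1.0, 0.2, 1000),
    "exhaust_temp": rng.normal(600, 15, 1000),
    "failed_within_30_days": rng.random(1000) < 0.05,  # failures are rare
})

# Check the class balance before training: a dataset of successes only
# (all False) cannot teach the model what failure looks like.
print(df["failed_within_30_days"].value_counts(normalize=True))

X = df[["vibration", "exhaust_temp"]]
y = df["failed_within_30_days"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0
)

# class_weight="balanced" makes the rare failure class count as much as
# the common healthy class during training.
model = RandomForestClassifier(class_weight="balanced", random_state=0)
model.fit(X_train, y_train)
```

The specific techniques (stratified splitting, class weighting) are just one reasonable way to handle rare failures; the non-negotiable part is that failures are in the data at all.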
5. You Let It Loose In The Real World Without Validation
"That works -- let’s productionize it!"
Machine learning isn’t a software program that does the same thing every time, constrained by the rules a developer coded.
It is continually learning and has the potential to become more accurate as more data is ingested.
If it is taught incorrectly, it will start to develop biases.
Even once the AI is up and running and delivering insights, expert monitoring is still needed to spot new biases before they become a problem.
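As a rough illustration of what that validation and monitoring can look like, here is a small, self-contained Python sketch. The synthetic data, the threshold values and the function name are assumptions made up for this example; the idea is to gate deployment on held-out performance and then keep re-running the same check on newly labelled production data.

```python
# A minimal sketch of pre-deployment validation and ongoing monitoring,
# using hypothetical synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 3))                         # hypothetical sensor features
y = (X[:, 0] + rng.normal(0, 0.5, 2000)) > 1.5         # rare "failure" label

# Hold data back: never validate on the data the model was trained on.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=1
)
model = LogisticRegression(class_weight="balanced").fit(X_train, y_train)

def meets_targets(model, X_new, y_new, min_precision=0.5, min_recall=0.7):
    """Check a batch of labelled data against validated targets
    (the thresholds here are illustrative assumptions)."""
    preds = model.predict(X_new)
    return (precision_score(y_new, preds, zero_division=0) >= min_precision
            and recall_score(y_new, preds, zero_division=0) >= min_recall)

# 1. Gate deployment on held-out performance.
print("ready to productionize:", meets_targets(model, X_test, y_test))

# 2. Re-run the same check on each new batch of labelled production data;
#    a failing check means expert review before trusting the model's flags.
```

A check like this won't catch every problem, but it turns "let's productionize it" from a one-off decision into a repeatable test, which is exactly what keeps the airline in the opening example from losing faith in its model the first time it cries wolf.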