There are three critically flawed assumptions many businesses make about AI and ML models. These businesses are often completely unaware they are making these assumptions, yet act on them daily. Depending on how heavily your company acts on these assumptions, the damage can be irreparable, sometimes extending to the collapse of the business itself.
This article should help you or your company to (again) be aware of:
- What ML is
- The assumptions people often make about machine learning
- The kinds of action you can take
I know many of these concepts are rudimentary in machine learning. You’ve heard them before, and most likely know them inside out. So, if you’re deep into Data Science, you probably don’t need them. That said, many Data Scientists find they need to explain complex topics to those outside the field (like explaining to a CEO or even CTO), so this might come in handy. Our tendency as humans is to overlook the basics, and it’s in the basics where most problems come from.
The whole point of AI models is to gather real-world data and use that data to optimize current practices. The less an ML model reflects reality, the less accurate its output.
Different issues can cause trouble with AI models:
- Technical issues
- Human input
- The changing world
It’s Alive!! (Sorry, Frank, It isn’t)
There are those who look at AI as a kind of monster, like Frankenstein’s monster: power it with electricity and it will work. It’s just a matter of sewing all the different body parts together. Sew different algorithms and components into the model, and it will run effectively.
“Sorry, Frank – AI isn’t alive like Frankenstein.”
You’ll have to wait for the next sci-fi film to see that happen.
Yes, at a fundamental level, AI and machine learning models can improve themselves by continuing to refine the algorithms in place. There’s a huge caveat, though: ML models can only improve in the ways specifically coded into them. That, and nothing more. We, as humans, understand this and then make an assumption that holds in the natural world but not in the technical one. The assumption is basically this: because AI and ML models can improve themselves, they can also improve themselves organically and instinctively.
Based on this assumption, companies expect the ML model to adapt to changes. It just isn’t so. If the changes aren’t coded into the ML model, it can’t adapt to them. Over time, more and more changes happen in reality, changes that aren’t accounted for in the model. The ML model starts to drift away from reflecting reality, and its output becomes erroneous.
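In practice, teams catch this drift by comparing the model’s live error rate against the baseline measured at training time. A minimal sketch of the idea (the class name, window size, and thresholds are all invented for illustration):

```python
from collections import deque

class DriftMonitor:
    """Flags when a model's recent error rate drifts above its training baseline."""

    def __init__(self, baseline_error, window=100, tolerance=0.05):
        self.baseline_error = baseline_error  # error rate measured at training time
        self.tolerance = tolerance            # how much degradation we accept
        self.errors = deque(maxlen=window)    # rolling window of recent outcomes

    def record(self, prediction, actual):
        # Log 1 for a miss, 0 for a hit.
        self.errors.append(0 if prediction == actual else 1)

    def has_drifted(self):
        if len(self.errors) < self.errors.maxlen:
            return False  # not enough live data yet to judge
        recent_error = sum(self.errors) / len(self.errors)
        return recent_error > self.baseline_error + self.tolerance
```

When `has_drifted()` fires, that is the signal for a human, not the model, to investigate what changed in the world and retrain accordingly.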
Following this to its logical conclusion: if a company expects the ML model to feed it correct data, and the data isn’t correct, then the company is making decisions based on an inaccurate model, and that can have serious repercussions.
Plug ‘n’ Play (Then watch it break)
Maybe an even bigger assumption is that ML models are like machines. (The word machine is right there in the name, so it must be like a machine, right?) Businesses treat ML models like a car or a wind-up toy: insert the key, turn it on, and it works. It just needs a tune-up every year (or maybe a little more often).
The thing is, an ML model is learning, adapting, and changing in a very specific and unique way. Your car, your fridge, and even your computer apps aren’t doing this. They aren’t adapting to you or the environment, which is why you can use them and they will work the way they are supposed to. Because ML models are continually adapting and changing, they need Continuous Integration/Continuous Delivery (CI/CD). On one side, the model must track a moving, evolving world. On the other side, the model itself is being improved for efficiency and extended capabilities in an automated way: new features, business regulations, deeper logic, and so on.
All of that is just so they deliver relevant, useful, up-to-date information. On top of that, these models need updating and optimizing simply to keep working. Just like your operating system or the apps you use, ML models need someone to update them for security, bugs, and UI.
Basically, there needs to be a team of people watching and working with the model on a continual and ongoing basis.
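The CI/CD cycle described above can be sketched as a single retrain-validate-promote step. This is a toy illustration, not a real pipeline: `DummyModel`, the score threshold, and the callback names are all hypothetical stand-ins.

```python
class DummyModel:
    """Stand-in for a real model; remembers which dataset it was trained on."""
    def __init__(self, data=None):
        self.data = data

    def fit(self, new_data):
        return DummyModel(new_data)  # real training would happen here

def retraining_cycle(model, new_data, validate, deploy, min_score=0.9):
    """One turn of a continuous-delivery loop for an ML model:
    retrain on fresh data, validate, and only then promote to production."""
    candidate = model.fit(new_data)   # retrain on the latest data
    score = validate(candidate)       # hold-out or shadow evaluation
    if score >= min_score:
        deploy(candidate)             # promote the new version
        return candidate, True
    return model, False               # keep the old model in production
```

The important design choice is the validation gate: a freshly retrained model only replaces the live one if it proves itself, which is exactly the kind of judgment the monitoring team is there to make.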
But I’m only Human (not AI)
There are hundreds of ways AI can be used in any situation. The key point is that ML models constantly attempt to reflect real life, and they never quite catch it. That would only change if groundbreaking new technologies were invented, a technological revolution larger than the current one. Until then, ML models will always be playing catch-up to reality, for one simple reason: reality is constantly changing. ML models can only produce results from what humans programmed into them. They cannot learn from the unexpected. This is what most companies fail to realize. Because reality is constantly changing, there needs to be someone (yes, an actual person: a data scientist) implementing those changes into the AI model.
Take the example of public transport. For the most part, the public transport system knows how many people will enter and exit on given days at given times; it is just the flow of work and life. Then, with rising fuel prices, more people start using public transport. This is an unexpected condition the model doesn’t account for. Now there is more congestion on the vehicles, the time between stops grows, and the model no longer works the way it was supposed to. The operator may have ML models that adapt to weather, holidays, and road construction. But if there hasn’t been a significant rise in fuel prices in over 15 years, it is unlikely that this condition was coded into the ML model.
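The failure mode here is easy to reproduce: a model can only weight the features it was built with, so a new real-world driver of demand is simply invisible to it. A toy illustration with invented features and numbers:

```python
# A toy ridership model that only knows the features it was coded with.
BASELINE_RIDERS = 1000
KNOWN_WEIGHTS = {"is_holiday": -200, "is_raining": 150, "roadworks": 80}

def predict_ridership(conditions):
    """Predict riders from the conditions the model was trained on.
    Anything else (e.g. a fuel-price spike) is silently ignored."""
    riders = BASELINE_RIDERS
    for feature, weight in KNOWN_WEIGHTS.items():
        if conditions.get(feature):
            riders += weight
    return riders

# A fuel-price spike pushes real ridership up, but the model can't see it:
# predict_ridership({"is_raining": True}) -> 1150
# predict_ridership({"is_raining": True, "fuel_price_spike": True}) -> 1150
```

The unknown condition changes nothing in the prediction, even though it changes everything on the street. Only a person can notice the gap and add the missing feature.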
Situations like these, and many far smaller, happen on a daily basis. This is what the Data Scientist needs to watch for and adjust, again and again. That’s one side of the model.
There is also human input at the other end. Let’s say the driver has to manually trigger the announcement of stops: at each stop, he presses a button that plays the stop’s name, which also records how many people enter and exit there. I have been on buses where the driver forgot and skipped many stops. Fed into an ML model, this kind of human error needs to be detected and corrected.
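Detecting this particular error is a small data-quality check: compare the stops the route should contain against the stops the driver actually announced. A minimal sketch (the stop names are made up):

```python
def find_skipped_stops(route, announced):
    """Return the stops on the route that were never announced, so the
    passenger counts logged against them are missing or misattributed."""
    announced_set = set(announced)
    return [stop for stop in route if stop not in announced_set]

route = ["Main St", "Oak Ave", "Elm Rd", "Station"]
announced = ["Main St", "Station"]  # the driver forgot two buttons
# find_skipped_stops(route, announced) -> ["Oak Ave", "Elm Rd"]
```

Once the gaps are known, a data scientist can decide whether to discard those records or impute the missing counts before the data reaches the model.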
Needing more attention than a goldfish
What making an AI/ML model is really like
You’ve heard it all before: get it out there, fast and furious. Ready, fire, aim. Build your minimum viable product, go to market as quickly as possible, and just do it! If you approach deploying ML models with this mindset, you’re going to end up with more headaches, more costs, and far greater losses.
It’s like setting up a camera and taking a beautiful photo. The more time you spend making sure you’re taking a good shot with lighting, angle, lens, aperture, etc. the less time you need to spend editing the photo for it to give you the look you want.
That’s what creating a good ML model is like. You can’t brute-force the model. Let the Data Science team take their time and build quality datasets from quality data. They need to strip out the unnecessary data and train the model on clean data. This is only the first 10% of the work; taking it slowly here speeds up the process down the road.
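That upfront cleaning work is mostly unglamorous filtering. A small sketch of what a first pass might look like, assuming records arrive as dictionaries (the field names and rules are illustrative, not a prescribed pipeline):

```python
def clean_dataset(rows, required_fields):
    """Drop duplicate records and records missing required fields,
    one small piece of the upfront work that makes a model trainable."""
    seen = set()
    cleaned = []
    for row in rows:
        key = tuple(sorted(row.items()))  # canonical form for duplicate checks
        if key in seen:
            continue  # exact duplicate of an earlier record
        if any(row.get(field) in (None, "") for field in required_fields):
            continue  # incomplete record; can't be used as-is
        seen.add(key)
        cleaned.append(row)
    return cleaned
```

A real pipeline would add domain-specific rules (valid ranges, consistent units, outlier handling), but the principle is the same: the model should only ever see data the team has vetted.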
Then they need to do it again.
Deploy, analyze, assess and tweak.
This is the monitoring and maintenance of the ML model.
10% of the time is in making the model. The other 90% of the time is in monitoring and maintaining the model. The key to remember though is that even in the 10% you can’t rush it. It needs to be 10% of high quality.
Get in the groove and make it smooth
To get this right you’re going to need to get a team of Data Scientists who know what they are doing.
Not all Data Scientists are the same. You need the right ones for the specific field you are in; there are roughly 120 different disciplines within Data Science alone. Choosing the right team and the right people will help you make the right decisions.
If you want help knowing how to proceed in Data Science, you can talk to us.