
13 Ways Machine Learning Can Steer You Wrong


Succeeding in today’s fast-paced business economy requires companies to harness data quickly and at scale. As the volume, velocity, and variety of data increase, it’s becoming necessary to use machine learning and artificial intelligence (AI) to sift through all the incoming information, make sense of it, and accurately predict future business direction.

Getting machine learning right isn’t an easy task, however. It takes the right expertise, the right tools, and the right data to achieve the promise of machine learning. Even with all of those factors in place, it’s still easy to get it wrong.

“Machine learning gives us a very powerful set of techniques for making predictions, but it can also lead to disastrous results if you don’t understand what your machine learning algorithm is doing,” said Spencer Greenberg, a mathematician and founder of decision-making website ClearerThinking.org, in an interview. “It is critical to study the algorithm once it has been trained, to understand how it is making its predictions, and whether what it is doing makes sense from a business perspective.”


Machine learning is sometimes viewed as a panacea for all business challenges. Failing to consider its realistic potential — and its serious limitations — makes it easy to misunderstand and misapply the technology.

“[The] stream of publicity around machine learning milestones reached [by] the big players — beating human opponents at board games, breakthroughs in medical screening and so on — gives the impression of continuous, rapid progress, and underplays the frustrations and dead ends,” said Monty Barlow, director of Machine Learning at global product development and technology consulting firm Cambridge Consultants, in an interview. “In practice, return on investment [from machine learning] can be late or never.”

Organizations are using machine learning in tactical and strategic ways, such as making product recommendations or informing strategic decisions. While the risks of making an irrelevant product recommendation are relatively low, making consistently irrelevant recommendations may fuel customer churn.

Making an ill-informed strategic decision as a result of badly applied machine learning can have serious consequences for your company.

Inaccurate Predictions

Machine learning is often used to make predictions. Examples include improving search results, anticipating movie or product selections, forecasting customer purchasing behavior, or spotting new types of hacking techniques. One common reason predictions go wrong is “overfitting,” which occurs when a machine learning algorithm adapts itself too closely to the noise in data rather than uncovering the underlying signal.

“If you try to fit an extremely complex model to a small amount of data, you can always force it to [fit], but it won’t generalize well to future data,” said Spencer Greenberg of ClearerThinking.org, in an interview. “Essentially, your complex model will try too hard to hit every data point exactly, including random fluctuations that should be ignored, rather than modeling the gist of the data. The complexity of the model you are fitting must be selected based on the amount of data you have, and how noisy it is.”
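The trade-off Greenberg describes can be seen in a few lines of code. In this hypothetical sketch (all numbers are synthetic), a model that memorizes every training point — a 1-nearest-neighbor regressor — achieves perfect training error, yet generalizes worse than a deliberately simple model, because the data is mostly noise:

```python
import random

random.seed(0)

def noisy_sample(n):
    # The true signal is the constant 5.0; observations add heavy noise.
    return [(random.random(), 5.0 + random.gauss(0, 2.0)) for _ in range(n)]

train, test = noisy_sample(20), noisy_sample(500)

def knn1(x):
    # "Complex" model: memorize the single closest training point.
    return min(train, key=lambda p: abs(p[0] - x))[1]

mean_y = sum(y for _, y in train) / len(train)   # deliberately simple model

def mse(model, data):
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

print(mse(knn1, train))             # 0.0: hits every training point exactly
print(mse(knn1, test))              # much worse on unseen data...
print(mse(lambda x: mean_y, test))  # ...than the simple model
```

The memorizing model chases every random fluctuation, which is exactly the behavior Greenberg warns against.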

You Don’t Know What You Don’t Know

There’s a shortage of machine learning talent out there. Meanwhile, machine learning is being democratized as its capabilities find their way into more applications and easy-to-use platforms that mask the underlying complexity. The downside of such “black box” machine learning is a lack of visibility into the decision-making process.

“It’s not always necessary to understand how a model makes its predictions, [but] for high-stakes predictions, it becomes increasingly critical to understand what an algorithm is doing,” said Spencer Greenberg of ClearerThinking.org, in an interview. “If your business relies on predictions from machine learning algorithms in order to make decisions, then it’s important to ask how those predictions are being made.”

Understanding how predictions are made may require the help of a data scientist or engineer who can study the algorithm and explain its behavior to management. That way, business leaders can be confident predictions are as accurate as they expect them to be, the results mean what business leaders think they mean, and the predictions don’t rely on unwanted information.
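One concrete way a data scientist can study an opaque model is permutation importance: shuffle one input feature at a time and measure how much the model's error grows. A minimal sketch, using a synthetic stand-in model whose predictions depend only on its first feature:

```python
import random

random.seed(1)

# Synthetic data: the target depends only on feature 0.
X = [[random.random(), random.random()] for _ in range(200)]
y = [3.0 * row[0] for row in X]

def model(row):
    # Stand-in for any trained "black box" predictor.
    return 3.0 * row[0]

def mse(rows, targets):
    return sum((model(r) - t) ** 2 for r, t in zip(rows, targets)) / len(rows)

baseline = mse(X, y)

def importance(feature):
    # Shuffle one feature's column and measure how much the error grows.
    shuffled = [row[:] for row in X]
    column = [row[feature] for row in shuffled]
    random.shuffle(column)
    for row, v in zip(shuffled, column):
        row[feature] = v
    return mse(shuffled, y) - baseline

print(importance(0))   # large: predictions collapse without feature 0
print(importance(1))   # 0.0: the model never used feature 1
```

A report like this gives business leaders a plain-language answer to "what is the algorithm actually relying on?" without exposing the model's internals.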

Algorithms Don’t Align With Reality

Machine learning algorithms need to be trained, and to be trained effectively they require a lot of data. Quite often, machine learning algorithms are trained on a particular dataset and then applied to make predictions on future data, the scope of which cannot necessarily be anticipated.

“What is an accurate model on one dataset may no longer be accurate on another dataset if the underlying characteristics of the data change,” said Spencer Greenberg of ClearerThinking.org, in an interview. “That may be fine, if the system you are making predictions about changes very slowly, but if the system changes rapidly, the machine learning algorithm may make very poor predictions, since what it learned in the past may no longer apply.”
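A toy sketch of the failure mode Greenberg describes: a single-threshold classifier fit to an old distribution keeps its high accuracy only until the data shifts. All distributions here are invented for illustration:

```python
import random

random.seed(2)

def labeled(center):
    # Class 1 clusters around `center`; class 0 clusters 4 units below it.
    data = []
    for _ in range(500):
        label = random.randint(0, 1)
        x = random.gauss(center if label else center - 4.0, 1.0)
        data.append((x, label))
    return data

old, new = labeled(10.0), labeled(16.0)   # the system drifted over time

threshold = 8.0   # midpoint rule fit to the old distribution (between 6 and 10)

def accuracy(data):
    return sum((x > threshold) == (label == 1) for x, label in data) / len(data)

print(accuracy(old))   # high: the rule matches the data it was fit to
print(accuracy(new))   # roughly chance: the old rule no longer applies
```

This is why production systems typically re-evaluate accuracy on fresh data and retrain when it degrades.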

Baked-In Bias

Machine learning algorithms can learn biases that are undesirable to the business. For example, a car insurance company that wants to predict who is at risk of getting into a car accident could strip out any reference to gender, because such discrimination is prohibited by law. Even though gender was not included in the dataset, the machine learning algorithm could nevertheless infer gender from correlated variables, and then use gender as a predictor.

“This example illustrates two important principles. First, that making the most accurate predictions possible is not always what is desirable from a business perspective, which means that you may need to impose extra constraints on the algorithm beyond just accuracy,” said Spencer Greenberg of ClearerThinking.org.

“Second, that the more intelligent an algorithm is, the harder it can be to control. Removing the gender variable is not necessarily enough to prevent an algorithm from making gender-based predictions, nor is removing all variables that you know correlate with gender, as the algorithm could discover a way to predict gender that you don’t even know about yourself.”
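The proxy problem Greenberg describes can be made concrete with a toy simulation. Here the protected attribute is withheld from the model, but a remaining feature correlates with it; the 85% correlation strength is an arbitrary assumption for illustration:

```python
import random

random.seed(3)

rows = []
for _ in range(1000):
    gender = random.randint(0, 1)   # protected attribute, withheld from the model
    # A remaining feature that happens to correlate with gender 85% of the time:
    proxy = gender if random.random() < 0.85 else 1 - gender
    rows.append((proxy, gender))

# "Model": predict the withheld attribute directly from its proxy.
recovered = sum(proxy == gender for proxy, gender in rows) / len(rows)
print(recovered)   # well above the 50% you'd expect if gender were truly hidden
```

Deleting a column is not the same as deleting the information it carried, which is why fairness constraints usually have to be imposed on the algorithm's outputs, not just its inputs.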

Bad Hires

Employment website Monster has been using machine learning to figure out what a top-performing sales representative looks like, but the results aren’t perfect yet.

“We were initially too focused on quota performance and manager rating as a metric of a high performer,” said Matt Doucette, director of Global Talent Acquisition at Monster, in an interview. “In Phase 2 of these tests, we will be looking deeper into the numbers, such as rolling five-quarter quota average, discount ratings, core versus strategic product percentage, net new business versus retention, and overall completion of performance metrics. The dataset will grow significantly, [and] the result will be more niche and can be honed or scaled.”

Inaccurate candidate profiles based on machine learning results may cause an organization to hire the wrong candidate for a position. In the meantime, valuable time is wasted screening candidates who aren’t actually a fit. If the position is revenue-generating, the bottom-line impact of making a bad hire could be significant, Doucette said.

Damage To The Bottom Line

Some machine learning systems run without oversight, while others require a close eye. Either way, it’s a mistake not to monitor your machine learning algorithms and the effects they have on your business.

“There have been such phenomenal advances in the algorithms and automation of machine learning technology that it is very easy and tempting to take a ‘set it and forget it’ mentality. However, this could lead to very damaging results for the customer and the business,” said Patrick Rice, founder and CEO of predictive analytics and data science company Lumidatum, in an interview.

“Companies need to develop better systematic oversight of the machine learning systems they have in production. At all times, everyone — not just the engineers and data scientists — should have visibility [into] how it is running, how it is responding to new customer queries, how it is changing over time, and of course, the ability to deactivate the system in the event any significant anomalies are detected.”

Microsoft’s Tay Twitterbot is a good example of training gone awry. Coached by a mischievous Twitter community, Tay quickly learned to produce racist output, and Microsoft shut it down just 16 hours after introducing it. The debacle is still widely cited.
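The kind of automated oversight Rice describes, including the ability to deactivate a system when anomalies appear, can be sketched in a few lines. This hypothetical monitor tracks a model's recent outputs and trips a kill switch when one strays far from the historical baseline; the window size and threshold are arbitrary choices for illustration:

```python
from collections import deque
from statistics import mean, stdev

class ModelMonitor:
    """Trip a kill switch when a prediction strays far from recent history."""
    def __init__(self, window=50, max_sigma=4.0):
        self.history = deque(maxlen=window)
        self.max_sigma = max_sigma
        self.active = True

    def observe(self, prediction):
        if len(self.history) >= 30:   # wait for a baseline before judging
            mu, sd = mean(self.history), stdev(self.history)
            if sd > 0 and abs(prediction - mu) > self.max_sigma * sd:
                self.active = False   # deactivate on a significant anomaly
        self.history.append(prediction)
        return self.active

monitor = ModelMonitor()
stream = [0.4, 0.6] * 20 + [5.0]      # steady behavior, then a wild output
status = [monitor.observe(p) for p in stream]
print(status[-2], status[-1])         # True False: the model was deactivated
```

Real deployments would monitor many signals at once (input distributions, response rates, business metrics), but the principle is the same: no model runs without a tripwire.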

False Assumptions

Machine learning algorithms running in fully automated systems need the ability to handle missing data points. The most common approach is to use the mean (average) value as a substitute for a missing value. According to Mustafa Bilgic, director of the Machine Learning Lab and associate professor of Computer Science at the Illinois Institute of Technology in Chicago, the approach makes strong assumptions about data, including that the data is “missing at random.”

For example, said Bilgic in an interview, “The fact that the cholesterol level is missing for a patient actually can be very useful information. It could mean that the test was not ordered on purpose, which could actually mean it is suspected to be either irrelevant for this task or it is assumed to be normal. There are approaches that do not assume the features are ‘missing at random,’ though it is unlikely that the fully automated techniques will know which features are missing at random and which aren’t.”
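A minimal sketch contrasting plain mean imputation with the idea Bilgic raises: keeping an explicit "was this value missing?" flag, so a downstream model can learn that missingness itself is informative. The cholesterol readings are synthetic:

```python
from statistics import mean

readings = [210.0, None, 190.0, None, 240.0]   # None = test not ordered

observed = [v for v in readings if v is not None]
fill = mean(observed)

# Plain mean imputation quietly assumes values are missing at random:
imputed = [v if v is not None else fill for v in readings]

# Keeping an explicit indicator preserves the signal that the test was
# skipped, so a model can treat "not ordered" as information:
features = [(v if v is not None else fill, int(v is None)) for v in readings]

print(imputed)
print(features)
```

The first approach erases the distinction between a normal reading and a skipped test; the second keeps it available to the model.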

Machine learning algorithms, whether used in fully automated systems or not, typically assume the data is representative and random, even though a company’s data is not usually random. If the data is implicitly biased, the insights and predictions one gets from the data will also be biased. Therefore, companies should be conscious of implicit and explicit biases that exist in their data collection processes, Bilgic said.

Irrelevant Recommendations

Recommendation engines have become very common. However, some are clearly more accurate than others. Machine learning algorithms reinforce what they learn. So, for example, if a retail customer’s tastes change suddenly, the recommendations may become totally irrelevant.

“This is what we call the ‘exploitation versus exploration’ trade-off,” said Mustafa Bilgic of Chicago’s Illinois Institute of Technology, in an interview. “The algorithm tries to exploit what it learned, without leaving any room for exploration [so] it will keep reinforcing what it already knows and it will not learn new things, eventually becoming useless.”
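The trade-off Bilgic describes is often handled with an epsilon-greedy policy: with some small probability the recommender tries something other than its best-known choice, so it can notice when preferences change. A toy two-item sketch with synthetic payoffs and a recency-weighted update:

```python
import random

random.seed(4)

def recommend(estimates, epsilon):
    if random.random() < epsilon:
        return random.randrange(len(estimates))                   # explore
    return max(range(len(estimates)), key=estimates.__getitem__)  # exploit

def run(epsilon, rounds=2000):
    payoff = [0.8, 0.2]              # the customer initially prefers item 0
    estimates = [0.0, 0.0]
    reward = 0.0
    for t in range(rounds):
        if t == rounds // 2:
            payoff = [0.2, 0.8]      # tastes change suddenly
        item = recommend(estimates, epsilon)
        r = float(random.random() < payoff[item])
        estimates[item] += 0.1 * (r - estimates[item])  # recency-weighted update
        reward += r
    return reward / rounds

greedy = run(epsilon=0.0)    # pure exploitation: never tries the other item
adaptive = run(epsilon=0.1)  # 10% exploration: notices the shift and adapts
print(greedy, adaptive)
```

The purely exploiting recommender keeps serving the stale favorite after tastes change; the exploring one eventually discovers the new preference and earns noticeably more reward.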

Deceptive Simplicity

Machine learning is being built into all types of applications, and there are also platforms and solutions available which attempt to mask its complexity. Because it isn’t always obvious to business users what it’s going to take to get machine learning right or how machine learning will affect the business, it’s easy to oversimplify.

“[Some organizations fail] to recognize the wide range of disciplines involved in machine learning development [and] how to manage them — for example, viewing machine learning as a purely mathematical and algorithmic undertaking, or just another software application, or the belief that recruitment of a data analyst to augment an existing software team is sufficient,” said Monty Barlow of Cambridge Consultants, in an interview.

“Teams should strive to reach consensus on how machine learning performance will be assessed during development, and then continuously measure and track its performance. [You need to] plan milestones carefully [and] be wary of sudden ‘too good to be true’ performance improvements.”

Garbage In, Garbage Out

Not all data is equally valuable or relevant. If an effort isn’t made to understand the data, machine learning outcomes may fall significantly short of expectations.

“You could have great results in initial testing, then find that your product receives disastrous results once released into the wild,” said Roman Sinayev, data scientist at network solutions vendor Juniper Networks, in an interview. “Data scientists should make sure they test their product with a wide range of unexpected variables, such as intelligent attackers, to ensure they are considering every possible outcome of their data.”

Desperately Seeking Disruption

Companies including Amazon, Facebook, Google, Netflix, and eBay have completely disrupted industries, and one of their competitive weapons has been machine learning. Other businesses are attempting to follow suit, although it’s important to assess how machine learning can best benefit the business.

“Fortune 500 companies are smart. They realize applying machine learning-type approaches is a lot about scalability, repeatability, and predictability and not about insight that takes them in a different direction,” said Zach Cross, president of Atlanta-based, technology-enabled consulting firm Revenue Analytics, in an interview. “It’s the confidence to be more aggressive than you otherwise would [be]. Two to three percent incremental improvements across a global enterprise can result in tens to hundreds of millions of dollars of enterprise gains.”

Unpredictable Outcomes

The behavior of complex systems, whether or not they incorporate machine learning, is inherently difficult to predict. Unintended consequences can occur even with the best intentions and aggressive investments, so the best one can do is minimize their effects.

“One of the best ways [to minimize negative consequences] is to start small and gradually increase the scope, access, and impact of the system,” said Kentaro Toyama, W. K. Kellogg associate professor at the University of Michigan School of Information, in an interview. Toyama said it’s best for users to create sandboxes at multiple scales (at least one per order of magnitude), and allow new systems or new changes to run at smaller scales for some time under careful observation before moving up a notch.

Blind Faith

When it comes to machine learning, details matter greatly. Users who blindly trust and implement insight from machine learning, without understanding the reason behind the insight, may expose their employers, customers, or even the public at large to risks.

“We are not yet ready to base all of our business decisions, healthcare decisions, and important life decisions [on] models that are independent of any human understanding behind them,” said Michael Schmidt, founder and CTO of machine intelligence application vendor Nutonian, in an interview. “Since users don’t have an explanation of the model, this could lead to financial crashes, people being denied loans for unknown reasons, or even patients being misdiagnosed for treatments and disease.”

In short, if you can’t explain why your machine learning model made the decision it did, you should not use it for important matters.


