    miniming
    Community Member

    How AI Can Go Terribly Wrong: 5 Biases That Create Failure

    AI is a game-changing technology for companies, but it can go terribly wrong.

    As AI-based systems become more critical to companies, we all need to understand the issue of bias in AI. Bias in AI can result in reputational damage, poor results, and outright errors. This article will help boards and senior executives ask the right questions about five dangerous biases in AI.

    1) Human Bias

    One reason bias exists in an AI-based system is that the data we feed AI systems is biased. That data is often biased because it comes from real-world business decisions made by humans.

    In other words, humans are biased, but we've never looked carefully at the decision-making biases of our own employees. Now, because we examine what comes out of an AI system, we are horrified to see that the AI appears biased, when it was us, humans, all along.

    For example, a bank may discover that its AI-based loan evaluator approves loans for minority applicants at a lower rate than for other applicants. Compare those decisions with the bank's historical loan approvals for minority applicants, and it's highly likely the approval percentages will match: the model learned its bias from the humans who made those decisions.

    This discovery of bias in AI can be a good thing. It surfaces bias that exists within a decision-making process and provides the company with an opportunity to course-correct. Everybody wins.
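
    One straightforward way to surface this kind of bias is to audit the model's decisions against the historical human decisions it was trained on. Here is a minimal sketch in Python; the column names and data are hypothetical, not from any real lending system.

        import pandas as pd

        # Hypothetical columns: "group" (applicant demographic) and
        # "approved" (1 = loan approved, 0 = denied).
        historical = pd.DataFrame({
            "group":    ["A", "A", "B", "B", "B", "A"],
            "approved": [1, 1, 0, 1, 0, 1],
        })
        model_decisions = pd.DataFrame({
            "group":    ["A", "A", "B", "B", "B", "A"],
            "approved": [1, 0, 0, 1, 0, 1],
        })

        def approval_rates(df: pd.DataFrame) -> pd.Series:
            """Approval rate per demographic group."""
            return df.groupby("group")["approved"].mean()

        # If the model merely learned historical behavior, these two
        # tables will look nearly identical: the bias was already there.
        print("Historical:\n", approval_rates(historical))
        print("Model:\n", approval_rates(model_decisions))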

    2) Hidden Bias

    One of the most insidious biases in AI is hidden bias—meaning unintentional bias that may never be seen or discovered.

    Take the example of a highly qualified person who didn't make it through the screening process for a job. This candidate had what looked like a perfect resume for her target company. However, the company's AI-based HR system rejected her, and she never even made it to the first interview.

    At one point, the candidate met representatives of the company at a job fair. When they reviewed her resume, they were excited about her background and invited her to interview with the company.

    The candidate explained that she had been rejected several times before and wondered what was different this time. It took a while, but the company finally discovered that the candidate had a BA in Computer Science while the AI was searching only for people with a BS in Computer Science. As a result, the system determined—incorrectly—that she wasn’t qualified.

    If that candidate hadn't highlighted the problem to the HR team, they never would have known their system was rejecting perfectly qualified candidates.

    The “not knowing” is the scary part. For that company, and for that HR team, if this bias hadn’t been brought to their attention, they would have gone on their merry way, missing out on highly qualified candidates and not knowing why.

    Companies need to periodically put humans in the loop of important decisions to uncover any potential hidden biases.
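
    To make the BA-versus-BS failure concrete, here is a minimal sketch of how an exact-match screening rule silently rejects qualified candidates, and how a normalized match avoids it. The rule and degree strings are assumptions for illustration, not any vendor's actual code.

        # Brittle exact-match rule of the kind described above (hypothetical).
        QUALIFYING_DEGREES = {"BS in Computer Science"}

        def passes_exact(degree: str) -> bool:
            # Rejects "BA in Computer Science" even though the candidate
            # is perfectly qualified: the hidden bias in action.
            return degree in QUALIFYING_DEGREES

        def passes_normalized(degree: str) -> bool:
            # Accept any bachelor's degree in the target field.
            d = degree.lower()
            return "computer science" in d and any(
                prefix in d for prefix in ("bs", "ba", "b.s.", "b.a.", "bachelor")
            )

        print(passes_exact("BA in Computer Science"))       # False: silent rejection
        print(passes_normalized("BA in Computer Science"))  # True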

    3) Data Sampling Bias

    When we train an AI system, it needs good data. Sometimes the data fed into the system has a sampling bias, causing the AI to become biased.

    In one example, an AI system being trained to understand natural language exhibited gender bias. The system was fed news articles that caused the AI to conclude, "Man is to Doctor as Woman is to Nurse." In this case, the data carried that bias, and the AI learned it.
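
    This "Man is to Doctor as Woman is to Nurse" pattern can be reproduced directly from word embeddings trained on news text. A sketch using the gensim library and one of its published pretrained GloVe downloads; exact results vary by embedding:

        import gensim.downloader as api

        # Pretrained GloVe vectors trained largely on news and web text
        # (one of gensim's published downloads).
        vectors = api.load("glove-wiki-gigaword-100")

        # Analogy query: doctor - man + woman = ?
        # Embeddings trained on biased text tend to rank "nurse" highly here.
        print(vectors.most_similar(positive=["doctor", "woman"],
                                   negative=["man"], topn=3))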

    In another example, Amazon stopped using a hiring algorithm after finding that it favored applicants who used words like "executed" or "captured," which appeared more often on men's resumes.

    Once again, the good news is that these biases can be teased out and eliminated once discovered. A human needs to be part of the process and look for biases.

    4) Long-Tail Bias

    Long-tail bias happens when certain categories are missing from the training data. For example, suppose an AI doing facial recognition encounters a person with lots of freckles. It's likely the AI won't know what to do with that image: the person may be categorized as Black, white, or brown, or even as something not human at all.

    When an AI encounters something for the first time, it often gets it very wrong. In one example, an image-recognition system was shown a picture of a stop sign with stickers on it and labeled it a refrigerator.

    This is one of the obstacles to the implementation of autonomous vehicles as well. AI trained the way we train it today doesn't know what to do when it encounters something rare or unique that it hasn't seen before, like a paper bag blowing across the road.
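
    A common mitigation, sketched below with hypothetical labels and thresholds, is to let the model abstain and escalate to a human reviewer whenever its confidence is low, instead of forcing a label onto an input it has never seen:

        import numpy as np

        def classify_or_escalate(probs: np.ndarray, labels: list[str],
                                 threshold: float = 0.85) -> str:
            """Return a label only when the model is confident; otherwise
            escalate to human review (the human in the loop)."""
            best = int(np.argmax(probs))
            if probs[best] < threshold:
                return "ESCALATE: low confidence, likely long-tail input"
            return labels[best]

        labels = ["stop sign", "refrigerator", "pedestrian"]
        # A long-tail input (say, a stickered stop sign) often yields a
        # flat, uncertain distribution like this one.
        print(classify_or_escalate(np.array([0.40, 0.35, 0.25]), labels))
        # A familiar input yields a peaked distribution.
        print(classify_or_escalate(np.array([0.95, 0.03, 0.02]), labels))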

    5) Intentional Bias

    Intentional bias may be the most dangerous of all. Nefarious actors could seek to attack AI systems by intentionally introducing bias into them. Not only that, but those actors will do everything they can to hide the bias they have introduced.

    Think of this as a new dimension of a cyberattack. Imagine you are training an AI system for your company to optimize your supply chain. An attacker who managed to slip subtly biased data into that training pipeline could quietly steer the system toward decisions that serve the attacker's interests.
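
    No single check stops a determined attacker, but one simple defense, sketched here with hypothetical labels and counts, is to compare the label mix of every new training batch against a trusted baseline and quarantine batches that shift suddenly:

        from scipy.stats import chisquare

        # Trusted baseline label counts from vetted historical data (hypothetical).
        baseline_counts = {"on_time": 900, "delayed": 100}

        def batch_looks_consistent(batch_counts: dict[str, int],
                                   alpha: float = 0.01) -> bool:
            """Chi-squared test: does the new batch's label mix match the
            baseline? False means quarantine the batch for human review."""
            total = sum(batch_counts.values())
            base_total = sum(baseline_counts.values())
            expected = [baseline_counts[k] / base_total * total
                        for k in baseline_counts]
            observed = [batch_counts.get(k, 0) for k in baseline_counts]
            _, p_value = chisquare(f_obs=observed, f_exp=expected)
            return p_value >= alpha

        print(batch_looks_consistent({"on_time": 450, "delayed": 50}))   # True
        print(batch_looks_consistent({"on_time": 250, "delayed": 250}))  # False

    An alert like this wouldn't prove an attack, but it gives humans a reason to look at the data before the model retrains on it.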
