
AI Poisoning: What You Need To Know To Protect Your Business

May 20, 2021
Jeff Ahlerich

Machine learning provides tremendous benefits for automating repetitive tasks. But what happens when the machine learns from (or is “poisoned” with) bad data? That’s the reality IT professionals must be wary of when using artificial intelligence (AI) and machine learning in their business. AI poisoning can be devastating to your organization and is poised to be a major cybersecurity threat in 2021.

AI POISONING: THE BASICS

At a basic level, machine learning algorithms “learn” specific tasks and can analyze and filter data based on pre-defined criteria. Unfortunately, cybercriminals have found a way to exploit this technology through what is known as “AI poisoning.” Essentially, attackers corrupt the training data, leading to “algorithmic missteps that are amplified by ongoing data crunching using poor parametric specifications. Data poisoning exploits the weakness by deliberately polluting the training data to mislead the machine learning algorithm and render the output either obfuscatory or harmful.”
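To make the idea concrete, here is a minimal sketch (in Python, using scikit-learn) of the simplest form of data poisoning: flipping a portion of the training labels. The dataset, flip rate, and model here are illustrative assumptions rather than details from any real attack, but the effect is representative: the model trained on tampered labels performs noticeably worse on clean test data.

```python
# Minimal sketch of label-flipping data poisoning.
# The dataset, 30% flip rate, and model choice are illustrative
# assumptions, not drawn from any real incident.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Build a toy binary-classification dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

def train_and_score(labels):
    """Train on the given training labels, score on clean test data."""
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return accuracy_score(y_test, model.predict(X_test))

# Baseline: model trained on clean labels.
print(f"clean accuracy:    {train_and_score(y_train):.3f}")

# "Poison" the training set by flipping 30% of the labels,
# simulating an attacker who tampers with the data you learn from.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
flip = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[flip] = 1 - poisoned[flip]
print(f"poisoned accuracy: {train_and_score(poisoned):.3f}")
```

In practice, real attacks are usually far subtler than a 30% flip rate, which is exactly what makes them hard to spot without deliberate checks.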

Although humans are adept at recognizing patterns and filtering out the important aspects of those patterns, machines rely on specific criteria and are less discerning of small variations. For example, a machine may understand the basic differences between traffic signs or signals, but in the wrong hands these “learned actions” can be altered; researchers have shown that a few well-placed stickers can cause a vision model to misread a stop sign as a speed-limit sign.

WHEN AI GOES WRONG

There have been several public cases of AI poisoning over the years. A notable attack occurred in 2016, corrupting Microsoft’s AI chatbot, Tay. The chatbot was designed to learn to interact with humans through ongoing engagement. Unfortunately, the idea backfired: in less than a day, Twitter users had successfully retrained the chatbot, shifting it from playful conversation to inflammatory and offensive tweets.

Experts caution that although AI can be used for good, it can cause devastating outcomes if left unprotected. “A military drone misidentifies enemy tanks as friendlies. A self-driving car swerves into oncoming traffic. An NLP bot gives an erroneous summary of an intercepted wire. These are examples of how AI systems can be hacked, which is an area of increased focus for government and industry leaders alike.”

Although enterprise organizations may not face such life-and-death outcomes from AI poisoning, the damage to their business and reputation can still be quite dramatic.

DEFENDING AGAINST AI POISONING

The primary entry point for AI poisoning is in the training data used to “teach” your machines. So, it stands to reason that the tighter you control your training data, the safer you’ll be. The following tips will help:

  1. Ensure you vet any third-party partners or vendors who may be involved in training your model or providing samples for training.
  2. Establish a mechanism for inspecting training data for contamination.
  3. Avoid real-time training if possible. This gives you time to vet incoming data and also discourages attackers.
  4. Maintain a clean copy of your data set. This will help you quickly identify any considerable changes that could indicate poisoning (see the sketch after this list).
  5. Be vigilant about who has access to your training data and how it is labeled.
  6. Consider training a secondary tier of AI to help spot mistakes in your primary data analysis.
  7. Always employ human oversight on data analysis to be on the lookout for any anomalies.
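Tips 2 and 4 can be partly automated. Below is a minimal sketch, in Python, of one way to do it: record a cryptographic hash and the label distribution of your known-good data set, then compare the current training file against that baseline before each retraining run. The file names, manifest format, and 5% drift threshold are hypothetical placeholders, not a prescribed standard.

```python
# Sketch of tips 2 and 4: fingerprint the clean data set and flag drift
# before retraining. File names, manifest layout, and the 0.05 threshold
# are hypothetical placeholders.
import hashlib
import json
from collections import Counter
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a dataset file so silent tampering is detectable."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def label_distribution(labels):
    """Fraction of examples per label; large shifts can signal poisoning."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

# 1. Exact-match check against the clean copy's recorded hash.
#    (clean_manifest.json is assumed to hold {"sha256": ..., "labels": {...}})
baseline = json.loads(Path("clean_manifest.json").read_text())
if sha256_of(Path("training_data.csv")) != baseline["sha256"]:
    print("WARNING: training data differs from the clean copy")

# 2. Coarse drift check: compare label proportions to the baseline,
#    assuming a CSV whose last column is the label.
rows = Path("training_data.csv").read_text().splitlines()[1:]
labels = [row.rsplit(",", 1)[-1].strip() for row in rows]
for label, frac in label_distribution(labels).items():
    drift = abs(frac - baseline["labels"].get(label, 0.0))
    if drift > 0.05:  # hypothetical threshold
        print(f"WARNING: label '{label}' shifted by {drift:.1%}")
```

A failed hash check does not prove poisoning on its own, but it tells you the data changed outside your normal pipeline, which is exactly the kind of anomaly the human reviewers from tip 7 should investigate.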

As AI and machine learning advance, so should your oversight of the integrity of your data. Be sure you are taking appropriate steps to vet, verify, and validate your data at regular intervals, and be prepared to act should you discover potential data poisoning. The faster you respond, the less damage there will be to your data and your company.
