Hackers Can Attack Image Recognition AI

In September 2018, Wired magazine called the hacking of AI an “emerging security crisis.” One area of AI where we’ve seen big advances lately is image recognition. Some mobile devices now use it as a security measure. Manufacturers can use it to spot defects in products. Social media platforms can use it to filter obscene images. And autonomous vehicles use it, along with other tools, to interpret their surroundings. Those are just a few examples.

Already, everyday people have found ways to fool image recognition software. Kids figure out how to pass for their parents to unlock their phones. Researchers put stickers on stop signs to fool autonomous vehicles. If these analog shenanigans can mess with AI, what kind of damage could actual hackers inflict? That’s a question on the minds of many tech professionals as AI becomes more widely adopted.


Potential Ways of Hacking Image Recognition

The Wired article cited above explains that “we had assumed machines see in the same way as we do, that they identified an object using similar criteria to us.” That assumption has set the stage for disaster.

Programming image recognition AI starts with training. The team that develops the software feeds it thousands of labeled images. Over time, the machine accumulates enough information to draw its own conclusions about new images.

Hackers could attack during this training phase. For example, they might slip in mislabeled images that teach the AI incorrect associations. Or they could teach it to unlock systems for certain faces that should not actually have access.
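To make the poisoning idea concrete, here is a toy sketch. The nearest-centroid “classifier,” the feature vectors, and the labels are all invented for illustration; real attacks target far larger models, but the mechanic is the same: mislabeled training samples pull the model’s notion of a class toward the attacker’s target.

```python
# Toy data-poisoning sketch (hypothetical features and labels, not a
# real vision pipeline): a nearest-centroid classifier learns one
# centroid per label, and injected mislabeled samples drag a centroid
# toward the image the attacker wants misclassified.

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(samples):
    """samples: list of (features, label). Returns label -> centroid."""
    by_label = {}
    for feats, label in samples:
        by_label.setdefault(label, []).append(feats)
    return {label: centroid(vecs) for label, vecs in by_label.items()}

def predict(model, feats):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist(model[label], feats))

# Clean training set: "stop" signs cluster near [0.9, 0.1],
# "limit" signs near [0.1, 0.9] in this made-up feature space.
clean = [([0.9, 0.1], "stop"), ([0.8, 0.2], "stop"),
         ([0.1, 0.9], "limit"), ([0.2, 0.8], "limit")]

target = [0.95, 0.05]  # an image the attacker wants mislabeled

# The attacker injects mislabeled copies of the target image,
# pulling the "limit" centroid toward it during training.
poisoned = clean + [(target, "limit")] * 20

print(predict(train(clean), target))     # stop
print(predict(train(poisoned), target))  # limit
```

Here twenty poisoned samples are enough to flip the toy model; in practice the required fraction depends on the model and the dataset size.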

Others might attack in the course of the AI’s regular use, as in the examples above. They could present images that are just close enough, or just “off” enough, to confuse the machine and produce a desired outcome.
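A minimal sketch of this evasion idea, using a made-up linear “detector.” The weights, pixel values, and step size are all hypothetical, and real attacks perturb inputs using gradients of deep networks rather than a hand-built linear model, but the principle carries over: each pixel is nudged slightly in the direction that lowers the correct class’s score.

```python
# Evasion-attack sketch on a toy linear classifier: small, targeted
# per-pixel changes push the score across the decision boundary.

def sign(x):
    return (x > 0) - (x < 0)

def score(w, b, x):
    """Linear score: positive means "stop sign detected"."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

# Hypothetical detector weights and a hypothetical 4-"pixel" image.
w = [0.9, -0.4, 0.7, -0.2]
b = -1.0
image = [0.8, 0.3, 0.9, 0.4]

print(score(w, b, image) > 0)          # True: classified as a stop sign

# Nudge each pixel against the weight's sign (the FGSM-style idea).
eps = 0.1                              # small per-pixel change
adversarial = [xi - eps * sign(wi) for xi, wi in zip(image, w)]

print(score(w, b, adversarial) > 0)    # False: tiny tweaks flip the label
```

The perturbed image differs from the original by at most 0.1 per pixel, yet the classifier’s answer changes, which is exactly the “close enough, but off enough” effect described above.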


How to Protect Yourself From Image Recognition Hackers

You probably already have cybersecurity measures in place to protect your data and your network. It only makes sense. Here’s the catch, however: some of those security measures use AI, and that same AI could be compromised. As we know, hackers strive to stay one step ahead at all times.


Cybersecurity experts, in turn, are hard at work to stay one step ahead of the hackers. Researchers at MIT conducted a “white hat” hack on Google’s image recognition AI in order to discover its weaknesses. They altered digital photos, pixel by pixel, to learn how to convince the software that a photo of one thing was something else entirely. Similarly, a research team at the University of North Carolina gathered photos of people from social media. The researchers then used those photos to fool facial recognition software.

Within your company, your employees probably use profile photos for accounts like Slack, Gmail, or internal social networks. Those photos give hackers one more potential way to exploit your employees’ identities. For any accounts your company uses, require strong passwords and two-factor authentication. Prohibit employees from using public Wi-Fi on work devices. As with your company website and CMS, keep any AI systems updated.

Most importantly, don’t fall into a false sense of security. Put backup systems in place that add some level of human intervention to image recognition. For example, if you rely on AI to scan inventory, make sure a human being spot-checks the results.
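One simple way to wire in that human check, sketched with hypothetical names and an arbitrary confidence threshold (the numbers and routing labels are illustrative, not any product’s API):

```python
# Human-in-the-loop backstop sketch: accept the model's answer only
# when it is confident; otherwise flag the item for a person to check.

def route(prediction, confidence, threshold=0.9):
    """Return ("auto", prediction) for confident results,
    ("human_review", prediction) for everything else."""
    if confidence >= threshold:
        return ("auto", prediction)
    return ("human_review", prediction)

print(route("widget-A", 0.97))  # ('auto', 'widget-A')
print(route("widget-B", 0.62))  # ('human_review', 'widget-B')
```

Pairing a rule like this with random spot-checks of the “auto” bucket keeps a human in the loop even when the model claims to be confident.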


The bottom line is, use the same caution that you probably already use in your approach to network security. As technology evolves, threats will inevitably evolve as well. Continue to stay abreast of the latest technology advancements, and always make security your priority.