AI-based recommendation systems are used in many online services we enjoy today, including search engines, online shopping sites, streaming services, and social media. However, their growing influence over what people see and do on the internet has raised concerns about their susceptibility to various types of abuse, such as being actively used to spread disinformation and promote conspiracy theories.
Andy Patel, a researcher with cyber security provider F-Secure’s Artificial Intelligence Center of Excellence, recently completed a series of experiments to learn how simple manipulation techniques can affect AI-based recommendations on a social network.
“Twitter and other networks have become battlefields where different people and groups push different narratives. These include organic conversations and ads, but also messages intended to undermine and erode trust in legitimate information,” said Patel. “Examining how these ‘combatants’ can manipulate AI helps expose the limits of what AI can realistically do, and ideally, how it can be improved.”
A Pew Research Center survey* conducted in late 2020 found that 53% of Americans get news from social media. Respondents aged 18-29 identified social media as their most frequent source of news. At the same time, research has highlighted potential risks in relying on social media as a source: a 2018 investigation** found that Twitter posts containing falsehoods are 70% more likely to be retweeted than truthful posts.
For his research, Patel collected data from Twitter and used it to train collaborative filtering models (a type of machine learning used to encode similarities between users and content based on previous interactions) for use in recommendation systems. Then, he performed experiments that involved retraining these models on data sets poisoned with additional retweets between selected accounts to see how the recommendations changed.
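The report does not publish Patel's code, but the collaborative filtering it describes can be sketched as a low-rank factorization of a user-account interaction matrix. The toy retweet counts, the rank, and the use of a plain SVD below are assumptions for illustration, not the study's actual model:

```python
import numpy as np

# Hypothetical toy data: rows are users, columns are accounts,
# entries are retweet counts (invented for illustration).
R = np.array([
    [6, 3, 0, 0],
    [4, 2, 0, 0],
    [0, 0, 4, 6],
    [0, 0, 2, 3],
])

# Truncated SVD embeds users and accounts in a shared latent space;
# similar interaction histories yield similar embeddings.
U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2                              # latent dimensions (assumed)
user_emb = U[:, :k] * s[:k]        # one embedding per user
acct_emb = Vt[:k].T                # one embedding per account

# Predicted affinity of every user for every account.
scores = user_emb @ acct_emb.T
```

A real system would treat unobserved entries as missing rather than zero and learn the factors with regularized optimization, but the principle is the same: recommendations come from similarities encoded in the learned embeddings, which is exactly what the injected retweets in the experiments target.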
By selecting appropriate accounts to retweet and varying both the number of accounts performing retweets and the number of retweets they published, Patel found that even a very small number of retweets was enough to manipulate the recommendation system into promoting accounts whose content was shared through the injected retweets.
While the experiments were performed using simplified versions of the AI mechanisms that social media platforms and other websites are likely to employ when providing users with recommendations, Patel believes Twitter and many other popular services are already dealing with these attacks in the real world.
“We performed tests against simplified models to learn more about how the real attacks might actually work. I think social media platforms are already facing attacks that are similar to the ones demonstrated in this research, but it’s hard for these organizations to be certain this is what’s happening because they’ll only see the result, not how it works,” said Patel.
According to F-Secure Vice President of Artificial Intelligence Matti Aksela, it's important to acknowledge and address the potential security challenges of AI. "As we rely more and more on AI in the future, we need to understand what we need to do to protect it from potential abuse. Having AI and machine learning power more and more of the services we depend on requires us to understand its security strengths and weaknesses, in addition to the benefits we can obtain, so that we can trust the results. Secure AI is the foundation of trustworthy AI," said Aksela.