Attackers are manipulating fake news detectors. How can users distinguish between real and fake content?
As industries digitize, the web teems with information and misinformation alike, and it is getting harder for users to identify authentic facts. With fraudulent content flooding social media channels, many platforms are increasingly deploying automated fake news detectors.
Social media giants such as Facebook and Twitter already attach warning tags to posts to raise awareness, and users can flag online articles as false or misleading based on a story's headline and content. More recent detection methods also draw on user engagement and network-feature patterns.
These additional signals are meant to complement a story's content and strengthen detection accuracy. However, there is growing concern that fake news detectors can be manipulated through user comments, causing genuine news to be flagged as counterfeit and fake content to pass as real.
Researchers at Penn State's College of Information Sciences and Technology recently demonstrated such an attack, one that lets adversaries influence a detector's assessment of a story even though they are not its original author.
After in-depth analysis, the team assessed the quality of artificially generated comments to see whether humans could distinguish them from comments written by real users. In practice, adversaries can post such malicious comments from random, throwaway social media accounts.
Many industry experts are now working on countermeasures that can read comments and judge their relevance to the article; the attacks, in short, represent a significant effort to fool the detector. Fake news itself is often promoted deliberately to widen political divides, undermine public confidence, or create division in communities.
Attackers can easily exploit this dependence on user engagement to manipulate detection models. Spammers already post malicious comments on articles routinely, underscoring the importance of robust fake news detection models.
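To see why engagement-based detection is fragile, consider a deliberately simplified toy sketch (not the Penn State system or any real platform's model): a detector that averages per-comment credibility scores can be flipped by a handful of injected low-credibility comments, without touching the article at all. The function name, scores, and threshold below are all hypothetical.

```python
# Toy illustration of comment-injection against an engagement-based
# detector. All names, scores, and the 0.5 threshold are hypothetical.

def detect_fake(comment_scores, threshold=0.5):
    """Flag an article as fake when the mean per-comment
    credibility score falls below the threshold."""
    mean = sum(comment_scores) / len(comment_scores)
    return mean < threshold  # True -> article flagged as fake

# Genuine article: organic readers leave mostly credible comments.
organic = [0.8, 0.7, 0.9, 0.6]
print(detect_fake(organic))      # mean 0.75 -> not flagged

# An attacker posts zero-credibility comments from throwaway
# accounts, dragging the average below the threshold.
attacked = organic + [0.0] * 5
print(detect_fake(attacked))     # mean ~0.33 -> flagged as fake
```

Real detectors are far more sophisticated, but the underlying risk is the same: any signal that attackers can append to at will becomes an attack surface.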
Thus, rather than misleading the detector by attacking the content or source of a story, these commenters can attack the detector itself. Adversarial attacks are clearly on the rise in the digital era, which is what motivated an advanced framework to optimize, generate, and inject malicious comments — exposing the weakness so that stronger defenses can be built.