Fake news, untrustworthy tweets, and all manner of rumors have made headlines recently, particularly during the COVID-19 pandemic. Both the World Health Organization and the US Surgeon General have called for a crackdown on COVID-19 misinformation. With COVID numbers remaining high and a potential vaccine mandate for large businesses on the table, the importance of accurate, verified information is more evident than ever. A solution to this problem may lie in innovations in artificial intelligence.
Stevens researchers worked alongside AI expert Dr. K.P. Subbalakshmi to tackle this problem using artificial intelligence. The team trained an algorithm to recognize fake news by performing “stance detection” on thousands of tweets. Previous algorithms assumed that users agreed with the links and sources they shared on social media. With stance detection, algorithms can instead use linguistic cues to determine whether people agree or disagree with the posts they share. According to Dr. Subbalakshmi, “Stance detection basically compares two pieces of texts and says whether these two pieces of texts are in agreement with each other or not. This is particularly useful when you are trying to understand if a tweet that includes a link to a news story is fake or not.”
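To make the idea concrete, here is a minimal sketch of stance detection using an off-the-shelf natural language inference model from the Hugging Face transformers library. This is not the Stevens team's model; the model choice and label mapping are assumptions, and the sketch only illustrates the core idea of comparing two pieces of text for agreement.

```python
# Sketch: stance detection via an NLI model (illustrative, not the study's system).
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

def stance(tweet_text: str, article_claim: str) -> str:
    """Classify whether the tweet agrees or disagrees with the claim."""
    # Encode the two texts as a sentence pair, as NLI models expect.
    inputs = tokenizer(tweet_text, article_claim,
                       return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    # roberta-large-mnli label order: 0 = contradiction, 1 = neutral, 2 = entailment
    return ["disagrees", "neutral", "agrees"][logits.argmax(dim=-1).item()]

print(stance("This so-called miracle cure is complete nonsense.",
             "Drinking bleach cures COVID-19."))  # expected: "disagrees"
```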
Applying this logic, the team took over 24,000 Twitter posts and used stance detection to label them as either supportive or dismissive of the articles and news stories they were attached to. The articles themselves were compared against reputable sources to determine whether they were credible or fake news. The team then used this database to train and test a new AI, one that uses cues learned from the training posts to determine whether a tweet is spreading misinformation.
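One way such labels could be combined, as a hypothetical sketch rather than the study's actual labeling scheme, is to flag a tweet when it endorses an article verified as fake or dismisses one verified as credible:

```python
# Illustrative labeling rule; the study's exact scheme may differ.
def misinformation_label(stance: str, article_is_fake: bool) -> bool:
    """Flag a tweet that supports a fake article or dismisses a credible one."""
    if article_is_fake:
        return stance == "agrees"
    return stance == "disagrees"

examples = [
    ("agrees",    True),   # endorsing a fake story    -> flagged
    ("disagrees", True),   # debunking a fake story    -> not flagged
    ("agrees",    False),  # sharing a credible story  -> not flagged
]
for s, fake in examples:
    print(f"stance={s!r}, fake={fake} -> {misinformation_label(s, fake)}")
```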
The AI does not determine whether articles or tweets are factually correct, as doing so would require constant updates with new research, news, and events, which is particularly hard to accomplish during a pandemic. Instead, it uses the linguistic patterns of the tweets it was trained on to find the “stylistic fingerprints” of fake news: less scientific and more emotional, or bombastic, language, as characterized in the dataset used to build the algorithm. Factors such as the publication, the length of the piece, and its authors can also play a part.
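As a rough illustration of what stylistic features might look like, the sketch below computes a few simple stand-ins (emotional punctuation, capitalization, and length). The features in the actual study are richer; these names and thresholds are illustrative assumptions only.

```python
# Sketch: toy "stylistic fingerprint" features for a tweet or headline.
def style_features(text: str) -> dict:
    """Compute simple style signals that ignore factual content."""
    words = text.split()
    return {
        "word_count":    len(words),
        "exclamations":  text.count("!"),
        # Share of words written in ALL CAPS (len > 1 skips "I", "A").
        "all_caps_rate": sum(w.isupper() and len(w) > 1 for w in words)
                         / max(len(words), 1),
    }

print(style_features("SHOCKING!!! Doctors HATE this one simple trick!"))
```

Features like these would then feed a classifier alongside the stance labels described above.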
The team saw strong results with the new AI: at roughly 88% accuracy, it outperformed previous AI tools built for the same goal.
When it comes to using AI to stop misinformation, the future is bright, particularly where AI and linguistics intersect. As Dr. Subbalakshmi explains, “There are quite a few exciting things already going on with linguistics and AI: AI can summarize large documents into manageable bites of information, AI can generate language that looks very real, it can translate between different languages, detect emotions from written/spoken words, etc. It is poised to do a lot more including detecting hate speech, detecting language-related diseases in people (Alzheimer’s, depression, suicidality) and much, much more! It is a very exciting area of inquiry.”
The team has also noted that extending this technology to video is an area for future research, as the current work is based solely on text. This project demonstrates how AI may be able to improve the health and safety of our world today.