UC lab creates artificial intelligence to identify online conspiracy theorizing

Social media and the internet have made vast amounts of information easy to reach, and harder for individuals to distinguish conspiracy theories from real conspiracies, said Timothy Tangherlini, Cultural Analytics Lab co-director. In response, the lab has created artificial intelligence capable of making this distinction in online conversations. (Photo by Tony Webster under CC BY 2.0.)

The UC’s Cultural Analytics Lab has developed artificial intelligence, or AI, that identifies when online conversations reflect conspiracy theorizing.

Social media and the internet have made vast amounts of information more accessible. According to UC Berkeley Scandinavian professor and Cultural Analytics Lab co-director Timothy Tangherlini, however, it is difficult for individuals to tell whether information online is rooted in conspiracy theories, which are fictional narratives created collectively by many people and capable of causing real-world harm.

In response, the Cultural Analytics Lab developed AI capable of determining whether online conversations are based on conspiracy theories or on real conspiracies, which are factual, “malign” events that are intentionally hidden.

“People are taking harmful, real-world action on rumors that are predicated on conspiracy theories, which thrive online because of how easy and fast it is to reach people,” Tangherlini said. “The system we developed is important in the way that it can identify these theories that impact people’s decision-making and pose a threat to society.”

According to Tangherlini, this AI has been applied to online conversations about the COVID-19 pandemic, anti-vaccination movements and “Pizzagate,” a debunked conspiracy theory that grew out of the 2016 leak of Democratic National Committee emails.

The AI uses an algorithm to identify an online conversation’s narrative framework, or the terms that reflect the people, places and things discussed in the conversation, according to Tangherlini. It then creates a narrative framework graph depicting the relationships between these terms.
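To make the idea concrete, here is a minimal sketch of how such a graph could be assembled using the open-source networkx library; the relationships are invented placeholders, and the lab’s actual extraction pipeline is not described in this article.

```python
import networkx as nx  # third-party graph library, not the lab's own code

# Hypothetical relationships between terms (people, places and things)
# mentioned together in posts; the lab's pipeline derives these
# automatically from the conversation's text.
relationships = [
    ("restaurant", "politician"),
    ("politician", "email server"),
    ("email server", "leaked emails"),
]

# The narrative framework graph: terms are nodes, and an edge links
# two terms that are discussed in relation to each other.
graph = nx.Graph()
graph.add_edges_from(relationships)

print(graph.number_of_nodes(), "terms and",
      graph.number_of_edges(), "relationships")
```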

By comparing the narrative framework graphs of online conversations about the Pizzagate conspiracy theory and “Bridgegate,” a real conspiracy in which aides to former New Jersey Gov. Chris Christie orchestrated politically motivated lane closures on the George Washington Bridge, the researchers identified two “telltale signs” of conspiracy theorizing in online conversations, Tangherlini said.

“Firstly, graphs showed that it took seven years for the online conversation about the real conspiracy to develop, while it took one month for the online conversation about the conspiracy theory to develop,” Tangherlini said. “Secondly, when terms were removed from both graphs, the real conspiracy’s graph still had one strong network of relationships between the remaining terms, while the conspiracy theory’s graph fell apart into many small networks of relationships.”

According to Cultural Analytics Lab co-director Vwani Roychowdhury, this demonstrates that online conversations based on conspiracy theories develop quickly and connect unlikely terms together, making it easy for their narrative framework graphs to break apart.
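One rough way to express that fragility test in code is to delete each term in turn and count how many disconnected pieces the graph splits into. This sketch, again using networkx, illustrates the idea rather than the lab’s published method.

```python
import networkx as nx

def fragility(graph: nx.Graph) -> int:
    """Worst-case number of pieces left after removing any single term."""
    worst = 1
    for term in list(graph.nodes):
        trimmed = graph.copy()
        trimmed.remove_node(term)
        if trimmed.number_of_nodes() == 0:
            continue  # a one-node graph leaves nothing to measure
        worst = max(worst, nx.number_connected_components(trimmed))
    return worst

# A conspiracy theory's graph, held together by a few improbable
# connector terms, yields a high score; a real conspiracy's graph
# stays in roughly one piece, yielding a score near 1.
```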

Roychowdhury added that the AI’s ability to identify conspiracy theorizing could serve as a warning system that alerts authorities to potential actions individuals might take due to online conversations based on conspiracy theories.

“The AI’s ability to summarize online conversations about certain topics into graphs makes it almost like a mirror that reflects society,” Roychowdhury said. “Policymakers and health care professionals can utilize it to understand what people are talking about and how they can possibly intervene to prevent harm or misinformation from spreading.”

Contact Annika Kim Constantino at [email protected] and follow her on Twitter at @AnnikaKimC.