It wasn’t too long ago that Ronald Reagan famously declared Berkeley “a haven for communist sympathizers, protesters and sex deviants.” Weaponizing UC Berkeley in our national culture war is not a new phenomenon. Unfortunately, what we experienced in early 2017 was different.
When Milo Yiannopoulos came to deliver a speech in February, the overwhelming majority of students stayed home. While many vehemently disagreed with his views, most voiced their discontent peacefully, without resorting to violent disruption. But a small minority of protesters, overwhelmingly nonstudents, did disrupt the peaceful protest with violence, and I was there to see it myself. With Berkeley engulfed in flames, the polarized fringes of our national culture war rushed in to craft politically expedient narratives around the incident. Content on social media distorted our reality with incredible speed, and new narratives were amplified by domestic and foreign actors across the country and around the world.
Social media naturally amplifies voices at the fringes: passionate and provocative ideologues who confirm our biases and fill us with righteous indignation. There is little room on social media for balanced and deliberative voices. Social media companies are incentivized to maximize user growth and engagement, and content that is head-turning and bias-confirming is the easiest way to boost engagement and build a user base. With a little ingenuity and a penchant for chaos, internal and external actors can amplify already powerful voices on these platforms and further coarsen our politics with raucous and disagreeable discussion. In the case of Yiannopoulos’ visit to Berkeley, we watched the narrative around our school become warped, distorted and used as a wedge to divide the United States.
Finding ways to secure and defend our national conversation seemed essential to me and my roommate. As the children of immigrants, we know as well as anyone the importance of a civil and fact-based national dialogue in a liberal democracy. Last year, we built an algorithm to detect propaganda bots on Twitter. We have been working on the problem ever since and have become thought leaders in the study of internet propaganda and fake news. We have spoken at Stanford University and the Aspen Ideas Festival and collaborated with the Democratic National Committee. For the last year, we have been building a platform to track propaganda bots and influence campaigns from the top down. It has become increasingly evident to us through this research that what happened at Berkeley is not unique; every major news story, especially those predicated on conflict, is amplified and radicalized by propaganda bots and armies of trolls. No political faction is immune to the basic human tendencies that these influence campaigns exploit.
At the beginning of our journey into the world of misinformation, we understood these propaganda campaigns as revolving around “fake news”; if we could just computationally excise objectively false narratives, we could restore national sanity and save our democracy. While the Yiannopoulos story was distorted by fundamentally flawed facts, our research has led us to the conclusion that objectively false “fake news” is only the easiest bogeyman to identify and is not the most prevalent type of content being spread during influence campaigns. In fact, the memes and content spread by bots and trolls during the Yiannopoulos rally rarely put forward “facts” at all; instead, the content preyed upon our base human emotions.
The content being spread by bots today resembles the content spread during the Yiannopoulos rally; it is not simply inventing facts, but amplifying, encouraging and sharing the raw emotion of our politics’ most radical actors. This provides a larger platform to ideas that further divide an already polarized country. During the confirmation of Justice Brett Kavanaugh, propaganda bots didn’t — for the most part — invent facts about Kavanaugh’s life or the case before the Senate. Instead, they amplified memes and content filled with righteous indignation, as well as content calling for revenge against Democrats. Some memes portrayed Kavanaugh as a martyr, a warrior against the liberal elite. Other bots amplified a real quote from Martin Luther King Jr. and dubiously applied it to the hearing in Kavanaugh’s defense. When support is feigned and virality manufactured, radical ideas can be normalized.
Fighting fake news and reclaiming our national conversation from fringe, distorted narratives on social media will not be easy. It will require an embrace of civic values and an acknowledgment of our shared humanity. But, importantly, it requires us to change the way we interact with and respond to our digital political world. Of course, we must rid social media of bots, “fake news” and misinformation. But more importantly, we must build space in our political landscape for leaders to respond to powerful emotional narratives as they appear, and for users to engage with political content without being beholden to the perverse incentives of social media companies to incite and inflame.
Ash Bhat majored in interdisciplinary studies at UC Berkeley and is CEO at RoBhat Labs.