Seeing is not believing

git reset

Scroll. Double-tap. Scroll. Double-tap. Scro- … wait. Scroll back up. What did I just double-tap?

I see an old white man.

Wait, I follow an old white man?

Hovering over the image is a name: Tom Steyer. And directly underneath is the subtext “Sponsored,” with the sub-subtext “Paid for by TOM STEYER 2020.”

Ah, a political ad.

My first thought is, “Wow, here’s a candidate I know absolutely nothing about; perhaps the next few minutes of inspired research would atone for the post’s jarring disruption of my daily aesthetic feed.” A couple of searches later, I learn that the post I accidentally liked is from a “billionaire activist” who spent $7 million on ads in his first month of campaigning for president. And every day after that unintended engagement, after that harmless click, I see yet another ad for his campaign. When I consult my friends, however, none of them has encountered anything about Tom Steyer.

This terrifies me.

My hyperfiltered social media feeds are spurring my actions and coloring my spectrum of views. The ubiquity of these ads and their language is manufacturing the wording of my subsequent Google searches, the ordered results of which determine the information I ingest. Hearing that my peers weren’t similarly “microtargeted,” or preyed on by algorithmic determination, makes me distrust the integrity of discourse enabled by political “free speech” online.

A week ago, Twitter announced its impending political ad ban, which encompasses ads that “refer to an election or a candidate” or “advocate for or against legislative issues of national importance (such as: climate change, healthcare, immigration, national security, taxes).” Though prohibiting political advertisements seems like the unquestionably right decision for a platform with an otherwise underwhelming interest in regulation, individual accounts can still share propagandized content with their millions of followers. Oops!

Disinformation isn’t new. What is new is the unprecedented scale of distribution, which artificially tampers with our belief systems. While companies like Twitter may claim to “ban political ads,” toxic and violent speech continues to proliferate on the accounts of politicians, who circumvent congressional ethics rules by shifting narratives across two or more Twitter accounts. It’s presumably why Donald Trump’s feigned tweets are posted from @realDonaldTrump and then retweeted by @POTUS, because the latter is answerable to the government.

Our generation’s diminishing attention span, coupled with the microcosm of curated content we experience, raises the question of whether social media platforms should be more than just information marketplaces and champion content verification. Under the premise of enabling free speech, social media platforms are optimizing for speed of uploading content rather than moderating it. When a user begins crafting an Instagram post, the content is already being transmitted to servers so it can be published the instant the user taps “Done.” This content is pre-uploaded, but rarely preverified and seldom taken down if deemed disinformation.
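
To make that pipeline concrete, here is a minimal sketch in Python of what an eager-upload flow can look like. The DraftPost and FakeServer names are hypothetical stand-ins, not Instagram’s actual API: the media is pushed to the server the moment it is attached, and tapping “Done” merely flips a visibility flag, with no verification step anywhere on the path.

```python
import threading
import uuid


class FakeServer:
    """Stand-in for the platform backend; the name and API are hypothetical."""

    def __init__(self):
        self.blobs = {}   # media stored before the user finishes the post
        self.public = {}  # posts visible to followers

    def store(self, post_id, blob):
        self.blobs[post_id] = blob

    def set_visible(self, post_id, caption):
        # Note what is missing here: no authenticity or fact check.
        self.public[post_id] = caption


server = FakeServer()


class DraftPost:
    """Created the moment the user attaches media to a post."""

    def __init__(self, media: bytes):
        self.post_id = str(uuid.uuid4())
        self.uploaded = threading.Event()
        # The upload starts in the background while the user is still
        # typing the caption -- this is the "pre-upload."
        threading.Thread(
            target=self._preupload, args=(media,), daemon=True
        ).start()

    def _preupload(self, media: bytes):
        server.store(self.post_id, media)  # a network call in reality
        self.uploaded.set()

    def publish(self, caption: str):
        # Tapping "Done" waits (usually not at all) for the upload to
        # finish, then marks the already-stored content as public.
        self.uploaded.wait()
        server.set_visible(self.post_id, caption)


draft = DraftPost(b"raw image bytes")  # upload begins immediately
draft.publish("my caption")           # instantly public, never verified
```

The asymmetry is the point: the engineering effort goes into making publish() feel instantaneous, while nothing on the path asks whether the content is true.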

In May, a digitally tampered video, or deepfake, of House Speaker Nancy Pelosi appearing to drunkenly stammer through a speech went viral on Facebook, YouTube and Twitter after first appearing on the individual account of a pro-Trump sports blogger. Though it was subsequently debunked, the original post garnered millions of views and shares. Even then, the deepfake was merely reduced in distribution instead of removed altogether.

Twitter, Facebook and Instagram weren’t conceived to be used as political platforms. They started as social sharing networks and seemingly morphed into willful enablers of targeted product advertising, in hopes that a glimpse of Allbirds during your mindless scroll will make you pause, click and buy. What they didn’t expect is that driving consumer behavior yields completely different ramifications when those same ad spaces are defining voter behavior — and the governance of our country — with selective, and sometimes fake, information.

While I implore social media platforms to fact-check every piece of content before publication, I recognize it is an unreasonable request, one that trivializes the arms race between the generation and detection of disinformation. Some companies, such as Amber, are working on detection, attempting to clean up the internet with “truth layer” software embedded in phone cameras that imprints a watermark on content at the point of creation to prevent deepfakes. But the rate at which detection tools are being developed pales in comparison to the rate at which fake content is becoming exponentially easier to produce and weaponize. And when it comes to speech, natural language processing is still largely unable to classify the nuances of sarcasm and malice.
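
To see what point-of-creation watermarking buys, here is a minimal sketch in Python of the underlying idea, not Amber’s actual product: the camera signs each frame’s bytes as it is captured, so any later edit breaks the signature. The DEVICE_KEY constant and helper names are hypothetical; a real system would bind keys to hardware and use more robust fingerprinting.

```python
import hashlib
import hmac

# Hypothetical: in a real device this key would be provisioned into
# the camera hardware, never hard-coded in source.
DEVICE_KEY = b"secret-key-provisioned-into-the-camera"


def sign_at_capture(frame: bytes) -> bytes:
    """Runs inside the camera pipeline before the file ever leaves the
    device; the signature travels with the content as its watermark."""
    return hmac.new(DEVICE_KEY, frame, hashlib.sha256).digest()


def verify(frame: bytes, signature: bytes) -> bool:
    """Anyone able to check against the device key (or a registry of
    published fingerprints) can tell whether the bytes were altered."""
    expected = hmac.new(DEVICE_KEY, frame, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)


original = b"raw sensor bytes of one video frame"
tag = sign_at_capture(original)

print(verify(original, tag))                      # True: untouched
print(verify(original + b" deepfake edit", tag))  # False: tampered
```

The appeal of this design is that it sidesteps the detection arms race: instead of trying to spot fakes after the fact, it lets content verified at capture prove itself, so unverifiable content becomes the suspect class.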

While more stringent verification would be ideal, even eliminating political microtargeting would increase transparency by ending electioneering communication aimed at selective audiences. Policies on political ads shouldn’t be reduced to a Twitter vs. Facebook, “ban all ads” vs. “protect all free speech,” black-or-white dichotomy. If these platforms continue to foster political speech, then it is their responsibility to facilitate voter registration (i.e., not deter African Americans from voting, as happened in the 2016 election), inform all users of all candidate platforms and ensure factual content — because in this world, seeing is no longer believing. It’s finally time we recognize that personalization-for-profit has evolved into personalization-for-politics.

And if I see another vapid Tom Steyer ad, I’m disabling Instagram.

Divya Nekkanti writes the Friday column on tech, design and entrepreneurship. Contact her at [email protected].