In 2019, fraudsters used artificial intelligence, or AI, to mimic a CEO’s voice and swindle a company out of $243,000 in a matter of minutes. In 2020, a man named Robert Williams was wrongfully arrested because of a false AI-powered facial recognition match. And last November, Meta’s Galactica, an AI built to “organize science,” ended up spewing alarming and convincing misinformation. These are just a few of the AI headlines that now dominate the media.
The raw potential of artificial intelligence inspires both excitement and fear in many.
AI is on the cutting edge of academia, and businesses like Meta and Google have utilized it to gain an advantage over their competitors. The promise of a competitive edge in a capitalist market can rush an AI model’s release, and that haste can lead to many unintended consequences. These include, but are not limited to, discrimination, racism and unethical decisions made on the basis of a model’s biased output. The consequences of these AI-influenced decisions can even go so far as to alter healthcare plans, legislation and policing.
For instance, the Detroit Police Department employs DataWorks Plus for a facial recognition algorithm that is meant to help identify potential suspects in violent crimes. The system works by searching through a database consisting of mugshots, public CCTV footage and even photos posted on social media sites.
While this technology sounds promising, the vast majority of these photos are of very poor quality; not all that surprisingly, Police Chief James Craig noted that if the department relied solely on this software, the system would likely misidentify suspects 96% of the time.
One of these misidentifications led to the wrongful arrest of Robert Williams on larceny charges, where he was handcuffed and thrown in the back of a police cruiser in front of his wife and young children.
These unchecked discriminatory AI practices eventually prompted the Federal Trade Commission to launch an investigation. Its report, published in June 2022, found that several facial recognition algorithms were up to 100 times more likely to misidentify the face of a Black or Asian person than that of a white person.
Bias and discrimination are not the only concerns about the irresponsible use of AI. Recently, Meta’s new Galactica AI, touted as a large language model built to “organize science,” instead did the opposite. Soon after its launch, users voiced their concerns on Twitter and academic forums, with one user sharing a screenshot of Galactica insinuating that vaccines cause autism. The risk of spreading misinformation is particularly high when it comes from a seemingly credible source like Galactica, which Meta claims was trained on 48 million scientific papers. Not all of those papers, however, were peer reviewed. Meta’s mission of building a tool to “organize science” is a promising and noble one; however, its early release of the model was irresponsible, misleading and, some could even say, dangerous.
In Naomi Klein’s book “This Changes Everything,” the renowned critic of corporate globalization argues that capitalism’s pressure to chase quick profits prevents us from making socially responsible decisions. Applying her argument to artificial intelligence, one can argue that the pursuit of commercial profit will push companies to release AI models irresponsibly early.
In response to our interventions in the natural world, Klein offers this metaphor and commentary about Dr. Frankenstein:
“Humanity has failed to learn the lessons of the prototypical cautionary story about playing god: Mary Shelley’s Frankenstein. According to [French sociologist] Latour, Shelley’s real lesson is not, as is commonly understood, ‘don’t mess with mother nature.’ Rather it is, don’t run away from your technological mess-ups, as young Dr. Frankenstein did when he abandoned the monster to which he had given life.”
In “playing god,” AI researchers take on a responsibility to ensure that their creations do not harm society. A thorough review process and clearly defined ethical principles, like Google’s AI Principles, are needed to prevent AI from becoming the Frankenstein’s monster of our generation.
We cannot ignore our “technological mess-ups,” and we cannot walk away from the monsters to which we have given life. However, the future of AI is not entirely compromised. Recently, the University of California adopted several principles to prevent the misuse of AI-enabled tools within the UC system. These internal tools are used in areas likely to affect individual rights, including health, human resources, policing and student experience.
Implementing this governance process was a step in the right direction, but the UC system has the power to do more to improve the entire industry. The UC governance board and Berkeley AI Research should use their clout to call out malpractice when it occurs. They should also encourage other universities to adopt similar practices and call for legislation requiring firms to implement independent oversight bodies that reduce potential harms to society.
With nearly 5,000 computer science and data science students here at Berkeley, we must ensure that we stay informed about the impact and implications of our work amid the rise of AI.