“There are things that should not be done … I don’t see the difference between things that people shouldn’t do and things that computers shouldn’t do. What should not be done is to allow computers to get out of control.”
In May 1977, Lee Dembart wrote a New York Times article titled “Experts Argue Whether Computers Could Reason, and if They Should,” in which Joshua Lederberg, professor of genetics at Stanford, was quoted saying the above. Dembart’s article is an eerie read. It’s a cautionary tale from the past: an investigation into the potential future of artificial intelligence, or AI, at a time when humanity stood on the precipice of enormous machine power. Nearly 50 years later, we are seemingly past that turning point. AI conveniently performs “human” tasks and takes over many of the decisions in our everyday lives. And yet the risks people feared back then have begun to manifest today.
When Dembart’s article was published, many AI scientists were making promises of computer chess champions and machine translation. Dembart, however, gave voice to the public’s prevailing disbelief. “So far, neither has been accomplished successfully,” he wrote, “and neither is likely to be any time soon.”
Both computer chess champions and machine translation have since become reality, but Dembart was right about how slowly such technologies would develop. It took decades to achieve these goals. Developmental trends have shifted in more recent years, though, and new AI technologies are popping up every day. And yet, with all this new technology, all these promises finally coming true, our fears remain the same as they were in 1977. We are still asking the same questions about whether we should continue developing AI and whether we should restrict its abilities. We are still grappling with the ethics and dangers of what we’re creating without looking back.
It’s easy to question how we let ourselves get to this point, and how we developed such complex machinery over multiple decades without heading off its risks. The history of artificial intelligence is more than the historical development of a technology. It’s also a history of humanity’s self-image, and of our fear of replicating ourselves in artificial counterparts.
The origins of AI can largely be traced back to the British mathematician Alan Turing. In 1950, he published a paper titled “Computing Machinery and Intelligence,” which laid the groundwork for investigating machine intelligence and speculated on how we could mimic human thought in computers. For Turing, the question was never whether machines could think for themselves, but whether they could effectively play an imitation game: whether, when conversing with a person, a computer could convince them that they were speaking to another human. This is the basis of the Turing test, by which the “intelligence” of most AI systems is still measured. If the machine can pass as human, then it has passed the test.
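To make the setup concrete, here is a minimal sketch of the imitation game’s structure in Python. Everything in it (the judge object, its ask and guess_machine methods, the reply functions) is a hypothetical stand-in invented for illustration, not a description of any real system.

```python
import random

def imitation_game(judge, human_reply, machine_reply, rounds=5):
    """A minimal sketch of Turing's imitation game.

    A judge converses with two hidden participants, one human and one
    machine, and must decide which is which. If the judge guesses
    wrong, the machine has "passed" this round of the test.
    """
    # Seat the two participants anonymously as "A" and "B".
    seats = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:
        seats = {"A": machine_reply, "B": human_reply}

    transcript = []
    for _ in range(rounds):
        question = judge.ask(transcript)          # hypothetical method
        transcript.append({
            "question": question,
            "A": seats["A"](question),
            "B": seats["B"](question),
        })

    guess = judge.guess_machine(transcript)       # hypothetical method
    machine_seat = "A" if seats["A"] is machine_reply else "B"
    return guess != machine_seat                  # True: the machine fooled the judge
```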
Following Turing’s paper, scientists and AI pioneers grew optimistic in imagining the possibilities for computers. This optimism was due in part to the successful creation of powerful electronic digital computers after World War II. As the concept of AI began to circulate, the first “boom” arrived, a rise in production and in talk of AI’s future. Scientists and engineers set out to build machines that could make decisions: solving mathematical equations, processing strings of words and performing intellectual tasks such as playing chess and other logical puzzles.
In 1956, Dartmouth hosted a conference that would become integral to the development of artificial intelligence. Computer scientists John McCarthy and Marvin Minsky invited the field’s leading researchers to the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI), hoping to spark conversations about AI. The term itself, artificial intelligence, was coined there by McCarthy. However, the conference fell short of expectations: The invitees generally agreed on the possibility of artificial intelligence, but no one could agree on standardized methods for the field. Despite this setback, the dawn of AI had arrived. And the U.S. and foreign governments would soon bask in its light.
The USSR launched Sputnik one year after the DSRPAI. In response, the U.S. government formed the Advanced Research Projects Agency (ARPA, later DARPA) to fund research and development of military technologies. Artificial intelligence fell under that umbrella. ARPA would become the lead funder of AI development for decades to come, recasting AI as something of immense value: a means to gain power.
With greater funding and federal support, optimism flourished in the years that followed. AI technology advanced rapidly, pushing the limits of what seemed possible further and further beyond the public’s imagination. From 1957 to 1974, not only were the realities of AI developing, but so too were the expectations, the latter far faster than the former. In sum, the first wave of AI development was characterized by immense pride in our abilities and a competitive drive to extend our power. But such pride was vulnerable to disappointment.
The promises computer scientists had made were falling short. The public and critics of AI technology were starting to distrust the future AI pioneers had painted. In response to slowing progress, scientists overcompensated with still grander promises, more ways to excite the public. As AI scientist Hans Moravec put it, “Many researchers were caught up in a web of increasing exaggeration. Their initial promises to DARPA had been much too optimistic. Of course, what they delivered stopped considerably short of that. But they felt they couldn’t in their next proposal promise less than in the first one, so they promised more.”
Enter Dembart’s 1977 New York Times article: a relic from the first bust in AI evolution. Defeated in the face of our own limitations, humanity turned away from our imagined futures. Promised so much and granted so little, the public grew distrustful of scientific and military claims. Many computer scientists at the time claimed that the technology produced in the first wave of artificial intelligence was “still millions of times too weak to exhibit intelligence.” As such, the very foundation on which Turing conceptualized AI was seen as near impossible by this time. We decisively could not replicate ourselves.
The reemergence of AI years later is attributable to scientist Edward Feigenbaum, who devised a new plan to bring AI back to life. Rather than generalizing intelligence like his predecessors, Feigenbaum sought to create machines with domain-specific knowledge. By focusing on just one realm of human intelligence, such as medicine or chemistry, these machines would be less “whole” replications. They could not learn new rules or expand their decision-making abilities, but they were far more feasible than the machines promised earlier. As clever as this approach was, Feigenbaum was still promoting a product seen as risky rather than promising, so he needed a new way to secure funding. That way boiled down to fear.
Feigenbaum turned to Japan’s flourishing technology scene. From 1982 to 1990, Japan invested $400 million in a program based loosely on Feigenbaum’s expert systems approach: the Fifth Generation Computer Project (FGCP). Feigenbaum contributed greatly, hoping the U.S. government would take notice of Japan’s growing AI technology. He wished to spark fear of a foreign threat, simultaneously praising the project’s prospects in Japan and raising alarms in the United States about Japan’s growing tech.
Mirroring the disappointment in the United States in the previous decade, Japan’s project ultimately dwindled to an “impossibility.” And yet, Feigenbaum indirectly came out victorious. The United States immediately started funding new organizations and programs such as the Microelectronics and Computer Technology Corporation (MCC) and the Strategic Computing Initiative (SCI). The second wave of AI development was in motion.
The critiques continued for a long while. In the late 1980s, even as artificial intelligence became reality, many scientists carried with them the memory of AI’s initial broken promises. What artificial intelligence had in store was nowhere near the fantasies that had been conjured for decades. And in 1996, the promise made in the ’50s of an unbeatable artificial chess player met with embarrassment and ridicule. Deep Blue, an advanced expert system built specifically for chess, was matched against chess grandmaster Garry Kasparov. Kasparov beat the machine four games to two. Only in a 1997 rematch did Deep Blue finally defeat Kasparov, fulfilling long, long overdue potential.
Yet again, we were promising ourselves some perfect replication, an even better version of ourselves. But through disbelief and hard truths, we were proving that impossible. For literal decades, we could not even approach our own limits. Artificial intelligence was reduced to the stuff of science fiction, as many had predicted: something that was once a future, then a threat, then a boring reality.
AI’s slow encroachment into the twenty-first century is incomparable to the massive surge of the past couple of years. Just a decade ago, no machine could truly recognize language or images at a human level. Now machines can produce entirely plausible academic papers and wholly fake but near-perfect images. So how did this jump occur? And why have AI’s ethical problems become so pressing of late?
Rockwell Anyoha, writing in Harvard’s Science in the News, suggests that we humans have not gotten any smarter about coding since before this recent resurgence. Rather, as Moore’s Law estimates, the memory and speed of computers doubles roughly every year. With faster and more efficient computers, “the fundamental limit of computer storage that was holding us back 30 years ago was no longer a problem.”
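The arithmetic behind that claim is simple compounding. Here is a short illustrative calculation, assuming a fixed doubling period (Anyoha’s framing uses one year; Moore’s original observation was closer to every two years):

```python
# Back-of-the-envelope compounding: how much capacity repeated doubling buys.
def growth_factor(years, doubling_period_years):
    """Total growth after `years`, given one doubling per period."""
    return 2 ** (years / doubling_period_years)

# Doubling every year for 30 years: roughly a billionfold increase.
print(f"{growth_factor(30, 1):,.0f}")   # 1,073,741,824
# Doubling every two years for 30 years: about 32,768-fold.
print(f"{growth_factor(30, 2):,.0f}")   # 32,768
```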
AI holds allure across the finance, health care, transportation and national security sectors. These incentives drove research and production forward, but a shift soon came. AI started to fall into the hands of the public: in smartphones, in cars, in web browsers. In 2011, Siri was introduced as a smartphone personal assistant. Then, in 2014, a chatbot named Eugene Goostman was said to have passed the Turing test in a University of Reading competition. And in 2022, a slew of AI image and text generators opened to the public. Among them, the infamous ChatGPT launched, built on decades of development. Now dominating discussion of AI ethics, ChatGPT is an internet phenomenon that gives practically anyone with internet access an overwhelming glimpse of the powers of AI. Writing academic papers, generating code, answering practically any question, ChatGPT teeters on the edge of performing tasks we ask of it yet don’t fully understand ourselves.
It’s ironic, however. ChatGPT is a language model built upon the foundation of mountains and mountains of text. It searches for patterns through what’s called a neural network: a type of software inspired by the way neurons in animal and human brains signal each other. This internet sensation — this controversial, contentious software — was built upon us. We turn to it, and yet we despise it.
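For a sense of what “neurons signaling each other” means in software, here is a toy example of a single artificial neuron, the building block such networks stack by the millions. It is a teaching sketch only; the numbers and weights are made up, and ChatGPT’s actual network is a vastly larger transformer model, nothing like this.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weigh each input, add a bias,
    and squash the sum into a signal between 0 and 1."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))   # sigmoid "activation"

# A neural network is many of these neurons arranged in layers,
# with the weights adjusted ("trained") until the outputs match
# patterns found in the training data.
print(neuron([0.5, 0.8], weights=[0.9, -0.4], bias=0.1))   # ~0.56
```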
ChatGPT and other modern AI technology are practically all that we were promised in the 1950s and beyond. Now that it’s in our hands, we fear the consequences. Countries across the globe are forming governmental and intergovernmental plans and goals to guide the development of artificial intelligence. Looking back on the first AI conference in 1956, where we failed to establish these ideals, it’s easy to see how we ended up here. We are scrambling for policies and standardization to slow and control the power of AI before it far surpasses our own. Not necessarily in the science fiction sense — nowhere near total global domination — but in more subtle, more real ways. Now that AI is reality, we see what the consequences truly look like. Image forgery, plagiarism, replications of human work without any regard for the effort and soul put into the originals.
There’s the risk of losing our humanity in making mechanical extensions of ourselves. On the path we are going down, we are endowing computers with human intelligence and, in doing so, relinquishing our human morals and passions. With policies and limitations placed on the development of AI, we can moderate the risks that come with it.
But who would have thought that once we finally achieved actually “intelligent” machines, after decades and decades of trial and error, we would be questioning why we had done this all along? We have artificial intelligence in the palm of our hands in 2023, and yet we are still attempting to perfect the questions posed in 1977.
“How do we know what we know?” wrote Dembart. “What does it mean for a human to know something? What does it mean for a computer? What is creativity? How do we think? What are the limits of science? What are the limits of digital computers?”
We keep proving that we don’t yet have the answers.