A multi-campus research collaboration featuring UC Berkeley professors officially launched this week to explore the intersection of artificial intelligence and human life.
The Center for Human-Compatible Artificial Intelligence was founded by UC Berkeley faculty from a variety of technical and social science fields, along with researchers at Cornell University and the University of Michigan. Rather than a physical building, it will consist of “people, whiteboards and computers…and maybe a few robots too,” according to Stuart Russell, a campus electrical engineering and computer sciences professor and leader of the project.
“We should redefine the goal of artificial intelligence from creating pure, undirected intelligence to creating beneficial intelligence,” said Max Tegmark, a physics professor at MIT and president of the Future of Life Institute, which has awarded grants to the center. “We feel that the center has identified some of the most important questions of our time to study and they’re bringing in a fantastic team of people to study them.”
The center was funded primarily by the Open Philanthropy Project through a $5.5 million grant, with access to additional funds from the Future of Life Institute and defense organizations, among others.
Artificial intelligence would, in the near term, allow people to overcome language barriers, acquire household robots for help and own self-driving cars that would make transportation easier, noted Bart Selman, a collaborating professor from Cornell University.
One of the initial goals of the center, Russell said in an email, was to develop a framework for “value alignment” through which robots could learn aspects of human value systems.
This attempt at value alignment, Russell pointed out, is complicated by the fact that humans are not perfectly rational. There is also, he noted, the problem that many humans have “less than ideal values,” raising the question: “how do we ‘filter out’ such influences without presuming to ‘know what’s best’?”
In addition to drawing on professors in fields such as robotics, the collaboration will make use of game theory and other facets of economic and psychological research. Tom Griffiths, a campus cognitive science professor and center collaborator, said social scientists understand that extensive research is required to ensure that AI contributions to society are positive.
Russell said that depictions of AI in film and media misconstrue the true nature of AI’s potential.
“Most Hollywood AI plots rely on spontaneous malevolent consciousness, which is nonsense,” Russell said. A more perplexing problem, he said in an email, was the potential for “massive disruption of employment if machines start doing most tasks, and possibly the gradual enfeeblement of human society as in Wall-E.”
The center is currently in the final stages of the first application of its research, according to Russell, examining AI systems that have no incentive to prevent themselves from being switched off.