Multi-campus collaboration to research intersection of human life, artificial intelligence

Noah Berger/Courtesy

A multi-campus research collaboration featuring UC Berkeley professors officially launched this week to explore the intersection of artificial intelligence and human life.

The Center for Human-Compatible Artificial Intelligence was founded by UC Berkeley faculty across a variety of technical and social sciences, along with researchers at Cornell University and the University of Michigan. Rather than a physical building, it will consist of “people, whiteboards and computers…and maybe a few robots too,” according to Stuart Russell, a campus electrical engineering and computer sciences professor and the project’s leader.

“We should redefine the goal of artificial intelligence from creating pure, undirected intelligence to creating beneficial intelligence,” said Max Tegmark, a physics professor at MIT and president of the Future of Life Institute, which has provided grants to the center. “We feel that the center has identified some of the most important questions of our time to study and they’re bringing in a fantastic team of people to study them.”

The center was funded primarily by the Open Philanthropy Project through a $5.5 million grant, with access to additional funds from the Future of Life Institute and defense organizations, among others.

Artificial intelligence would, in the near term, allow people to overcome language barriers, acquire household robots for help and own self-driving cars that would make transportation easier, noted Bart Selman, a collaborating professor from Cornell University.

One of the initial goals of the center, Russell said in an email, was to develop a framework for “value alignment” through which robots could learn aspects of human value systems.

This attempt at value alignment, Russell pointed out, is complicated by the fact that humans are not perfectly rational. There is also, he noted, the problem that many humans have “less than ideal values,” raising the question: “how do we ‘filter out’ such influences without presuming to ‘know what’s best’?”
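As a rough illustration of the idea (a minimal Python sketch, not code from the center; the routes, features and “Boltzmann rationality” model below are illustrative assumptions, not the center’s actual framework), a value-learning system can treat imperfect human choices as noisy evidence about underlying values:

import math

# Hypothetical candidate value systems: how much the human weights
# "speed" versus "safety". These names are invented for this example.
candidate_values = {"values_speed": (1.0, 0.0), "values_safety": (0.0, 1.0)}

# Options the human chose between, scored as (speed, safety) features.
options = {"fast_route": (0.9, 0.2), "safe_route": (0.3, 0.9)}

def choice_probability(chosen, values, beta=3.0):
    """P(human picks `chosen`), assuming noisy (Boltzmann) rationality:
    better options are chosen more often, but not always."""
    def utility(option):
        speed, safety = options[option]
        return values[0] * speed + values[1] * safety
    numerator = math.exp(beta * utility(chosen))
    denominator = sum(math.exp(beta * utility(o)) for o in options)
    return numerator / denominator

# Observed behavior: the human picked the safe route twice, the fast route once.
observations = ["safe_route", "safe_route", "fast_route"]

# Bayesian update from a uniform prior over the candidate value systems.
posterior = {name: 1.0 for name in candidate_values}
for obs in observations:
    for name, values in candidate_values.items():
        posterior[name] *= choice_probability(obs, values)
total = sum(posterior.values())
for name in posterior:
    posterior[name] /= total

print(posterior)  # belief shifts toward "values_safety", despite the one lapse

Because the model expects occasional irrational choices, a single “bad” decision does not convince it that the human holds bad values, which suggests one modest handle on the filtering problem Russell describes.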

In addition to professors in fields such as robotics, the collaboration will draw on game theory and other facets of economic and psychological research. Tom Griffiths, a campus cognitive science professor and center collaborator, said social scientists understand that extensive research is required to ensure that AI’s contributions to society are positive.

Russell said that depictions of AI in film and media misconstrue the true nature of AI’s potential.

“Most Hollywood AI plots rely on spontaneous malevolent consciousness, which is nonsense,” Russell said. A more perplexing problem, he said in an email, was the potential for “massive disruption of employment if machines start doing most tasks, and possibly the gradual enfeeblement of human society as in WALL-E.”

The center is currently in the final stages of the first application of its studies, according to Russell: examining AI systems that have no incentive to prevent themselves from being switched off.
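To make that incentive concrete (again a toy sketch of my own construction, loosely in the spirit of Russell’s off-switch research rather than the center’s actual work), an agent that is uncertain whether its planned action helps or harms does better, by its own estimate, when a human retains the power to shut it down:

import random

random.seed(0)

def sample_true_utility():
    # The agent does not know whether its planned action helps or harms;
    # it only has a belief (here: true utility uniform in [-1, 1]).
    return random.uniform(-1.0, 1.0)

def act_regardless(u):
    return u  # disables the off switch and acts no matter what

def defer_to_human(u):
    # The human, assumed to know the true utility, lets the action proceed
    # only when it is beneficial; otherwise presses the off switch (payoff 0).
    return u if u > 0 else 0.0

trials = [sample_true_utility() for _ in range(100_000)]
avg_act = sum(act_regardless(u) for u in trials) / len(trials)
avg_defer = sum(defer_to_human(u) for u in trials) / len(trials)

print(f"act regardless: {avg_act:+.3f}")   # averages near 0.0
print(f"defer to human: {avg_defer:+.3f}") # averages near +0.25

Under these assumptions the agent’s own expected payoff is higher when the off switch stays usable, so it gains nothing by disabling it.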

Shradha Ganapathy covers research and ideas. Contact her at [email protected] and follow her on Twitter at @sganapathy_dc.
