UC Berkeley recently launched the Artificial Intelligence, Platforms, and Society Project, a collaboration between the Berkeley Center for Law & Technology, or BCLT, and the Center for Information Technology Research in the Interest of Society and the Banatao Institute, or CITRIS, Policy Lab.
According to a Berkeley Law press release, the new project will be co-led by CITRIS Policy Lab Director and Goldman School of Public Policy Associate Research Professor Brandie Nonnecke and Berkeley Law Professor Tejas Narechania.
The project is intended to study and suggest technical and legal approaches to the increasing presence of artificial intelligence, or AI, in society, according to the project’s website. It aims to bring together attorneys with academics and practitioners to explore AI’s problems and complexities.
Narechania noted that the project will rest on three main pillars: research on artificial intelligence and its governance; engagement with academia, government and industry; and education for current students and through continuing programs.
“I want to separate two different parts of it,” Narechania said. “One is, how is AI being used in law, and one is, how does law govern a wide range of AI research.”
In the first category, Narechania gave the example of a hypothetical AI application used to determine where and when a gunshot was fired. In the second, he questioned whether employment discrimination laws would apply to potentially biased AI used in hiring, which might prompt legal modifications or greater care in how AI is applied.
Olaf Groth, a professional faculty member in the Business and Public Policy Group at Berkeley’s Haas School of Business, teaches about the relationship between ethics and AI. While Groth is not involved in the project itself, he regards the study and understanding of AI as extremely useful.
“If we want a positive future, we should have a hand in shaping that future,” Groth said. “If we always just leave things to the people with the greatest money and the greatest skills without having a say, then we shouldn’t be surprised when powerful political interests or powerful big money decides what to do with us.”
Narechania noted that the worries some may have about the future of AI are both real and valid. He said there is much about the algorithms behind AI that isn’t fully understood.
According to Narechania, it’s worth acknowledging that AI has been with us for some time and will likely remain so. The project aims to “grapple with this new reality,” finding ways to address concerns and legitimate fears.
“I see tremendous potential, but I also see very significant risk,” Groth said. “And it’s like everything else that is new… If you don’t understand its correct use and its desirable use it is bound to be misused and misunderstood.”