The University of California has released recommendations charting a path toward the responsible use of artificial intelligence in future UC endeavors.
The UC’s growing use of AI has increased its overall productivity as an institution, according to the UC Office of the President, or UCOP. However, the implementation of AI also carries the potential for problems to arise.
To address these risks, former UC President Janet Napolitano and current UC President Michael Drake created the Presidential Working Group on Artificial Intelligence, or the Working Group, in August 2020.
The Working Group’s final report noted that the group consists of 32 faculty and staff from all 10 UC campuses, as well as representatives from UC Legal and the Office of Ethics, Compliance and Audit Services, among other groups. The group’s main goal is to develop recommendations for the UC’s current and future use of AI, according to UCOP.
The UC’s rising use of AI has increased overall efficiency for school programs but has also raised ethical concerns regarding the anonymity and safety of students, the report noted.
“Inappropriate, inaccurate, or inconsistent data and ill-considered assumptions in model design can lead to problematic outcomes, such as biased or discriminatory decisions,” stated the Working Group’s final report. “UC has put in place robust policies and guidelines for technology procurement and use, especially related to data privacy and security.”
In light of these concerns, the Working Group surveyed campus chief information officers and chief technology officers.
The officers’ main concerns centered on “AI-enabled tools” at high risk of bias, according to the press release. Because most of these tools were not created by the UC, they could carry human bias from third-party sources.
The report added that these concerns led the Working Group to create four subcommittees focused on how to eliminate bias in high-risk areas that pose a threat to individual rights: health, human resources, policing and student experience.
“In response to growing concerns over the use and consequences of AI, at least 170 sets of AI principles and guidelines have been developed,” the final report stated. “While the sets of principles vary in style and scope, a consensus is growing around key themes, including the need for accountability, privacy and security, transparency and explainability, fairness and non-discrimination, professional responsibility, human control, and the promotion of human values like civil and human rights.”
In addition, facial recognition technology was considered for use at two UC locations, according to the report.
However, ethical concerns were raised about the accuracy of facial recognition in general, though not necessarily the specific technologies considered for use by the UC: a 2019 study found a higher false-positive rate for Black and Asian faces than for white faces.
This was one of many cases that led to the formation of the Working Group, which seeks to ensure that student safety is prioritized over the potential benefits of “AI-enabled tools.”
“Keeping our students, faculty, and staff safe is and should be one of our highest priorities,” the final report stated.
Contact Nathan Saldana at [email protected].
A previous version of this article incorrectly stated that the UC system implemented facial recognition technology for policing on its campuses. In fact, the campuses had considered the use of facial recognition technology.