
UC Berkeley, Stanford researchers find algorithms can improve resource allocation


PIXABAY | CREATIVE COMMONS


Julie Madsen | Assistant News Editor

FEBRUARY 24, 2020

UC Berkeley and Stanford University researchers found that algorithms predict recidivism better than humans do, which can allow for better allocation of resources to prevent reincarceration. 

The study, titled “The limits of human predictions of recidivism,” was published in response to an earlier study by researchers Julia Dressel and Hany Farid, which challenged a long-standing consensus that algorithms outperform humans when assessing an individual’s level of risk. In the new study, UC Berkeley social welfare and public policy professor Jennifer Skeem and Stanford researchers replicated and extended Dressel and Farid’s study under conditions closer to real-world decision-making, showing that algorithms make better recidivism predictions.

“A closer look at the (Dressel and Farid) study, though, indicated that people’s predictions were elicited in a way that doesn’t represent predictions made by judges in the real world,” Skeem said in an email.

Participants in the new study made predictions under two different conditions: a constrained condition and an enriched condition, according to Skeem. In the former, participants were told a defendant’s risk factors, including their sex, age, offense and criminal background. In the latter, researchers provided additional risk factors, such as the defendant’s substance abuse history and employment background, the kind of enriched information presented to judges in real life.

Participants used these factors to predict whether each defendant would reoffend, and their predictions were compared with the algorithm’s to see which were more accurate.

While researchers expected the additional factors in the enriched condition to confuse human participants and decrease their accuracy, participants’ prediction accuracy remained the same in both the constrained and enriched conditions, according to Skeem and study co-author Zhiyuan “Jerry” Lin, a Stanford University Ph.D. candidate.

“But the accuracy of our algorithms did change,” Skeem said in the email. “When more information was provided and was useful for prediction, algorithms were more accurate than people.”
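The comparison the researchers describe can be made concrete with a short sketch. The Python snippet below is a hypothetical illustration, not the study’s actual code, features or data: it trains the same simple classifier on a “constrained” feature set and an “enriched” one, then compares predictive accuracy via AUC, mirroring the design Skeem and Lin describe. All variable names and the synthetic data are assumptions made for illustration.

```python
# Hypothetical sketch of the constrained-vs.-enriched comparison described
# above; not the study's actual code, features or data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Synthetic stand-ins for the two feature sets described in the article.
constrained = rng.normal(size=(n, 4))     # e.g., sex, age, offense, record
extra = rng.normal(size=(n, 2))           # e.g., substance use, employment
enriched = np.hstack([constrained, extra])

# Synthetic outcome that genuinely depends on the extra factors, so the
# enriched model has additional useful signal to exploit.
logits = constrained @ np.array([0.5, -0.3, 0.4, 0.2]) + extra @ np.array([0.6, -0.5])
y = rng.random(n) < 1 / (1 + np.exp(-logits))

for name, X in [("constrained", constrained), ("enriched", enriched)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = LogisticRegression().fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```

On data like this, the enriched model scores higher because, as Skeem notes, the added information is actually useful for prediction; human raters in the study did not show the same gain.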

Goldman School of Public Policy professor Jack Glaser said in an email that the study’s results were not surprising, but he expressed doubt about algorithms’ ability to avoid bias when predicting recidivism.

“Humans are smart, but our information processing has limits and is subject to lots of errors and biases,” Glaser said in the email. “If an algorithm is only predicting who will be arrested (as opposed to who actually offends), then it is vulnerable to replicating the biases (racial, age, gender) that contribute to who gets arrested in the first place.”

Skeem acknowledged in her email that the biggest concerns about using algorithms in the criminal justice system stem from fear that risk assessment will “bake in bias.” While the researchers did consider these concerns, according to Skeem, this particular study does not focus on potential biases in algorithms.

The study’s findings could have far-reaching effects on the allocation of resources to those who need them. According to Lin, risk assessments can estimate, for example, how likely a defendant is to fail to appear in court. If algorithms predict that risk accurately, resources that encourage court appearance can be directed to the individuals determined to need them.

“If algorithms do a better job of preventing false positives (giving a harsher sentence to someone who is not going to recidivate), that will help mitigate mass incarceration overall,” Glaser said in the email.
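Glaser’s point about false positives can be illustrated with a small hypothetical calculation; the numbers below are invented for illustration and do not come from the study.

```python
# Hypothetical illustration of a false positive rate: among people who do
# NOT recidivate, what fraction were nonetheless flagged as high-risk?
# The data here is invented, not from the study.
did_recidivate = [False, False, True, False, True, False]
flagged_high_risk = [True, False, True, False, True, False]

# Keep only the flags for people who did not actually recidivate.
non_recidivist_flags = [flag for flag, actual in zip(flagged_high_risk, did_recidivate) if not actual]
fpr = sum(non_recidivist_flags) / len(non_recidivist_flags)
print(f"False positive rate: {fpr:.2f}")  # 1 of 4 non-recidivists flagged -> 0.25
```

In Glaser’s framing, driving this rate down means fewer people receive harsher sentences despite posing little risk of reoffending.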

Contact Julie Madsen at [email protected] and follow her on Twitter at @Julie_Madsen_.
LAST UPDATED

FEBRUARY 24, 2020


Related Articles

A new machine-learning algorithm is being developed to predict traffic flow and traffic patterns on California highways.

Campus researchers contributed to a study that found a widely used algorithm in the health care industry contributes to racial, socioeconomic and gender prejudices despite being built to overcome bias.

UC Berkeley’s International Human Rights Law Clinic released a study Thursday examining the impact unsolved homicides in Oakland and barriers to support services have on the families of victims from underrepresented communities.