News. Risk scores, generated by algorithms, are becoming a common factor in sentencing in the US. Computers crunch data such as arrest records, the type of crime committed, and demographic information, and a risk rating is generated. The idea is to create a guide that's less subject to unconscious biases, the mood of a judge, or other human shortcomings. Similar tools are used to decide which blocks police officers should patrol, where to place inmates in prison, and whom to release on parole.
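The scoring described above can be sketched in miniature. This is a toy illustration only: the feature names and weights are hypothetical, chosen for demonstration. Real systems of this kind are statistical models fit to large sets of historical case data, not hand-tuned formulas.

```python
# Toy sketch of an algorithmic risk score: a weighted sum of case
# features mapped onto a 0-100 rating. Every feature and weight below
# is a made-up assumption for illustration, not the actual software.

def risk_score(prior_arrests, age_at_first_offense, violent_offense):
    """Return a 0-100 risk rating from a few case features."""
    score = 0.0
    score += 8.0 * min(prior_arrests, 10)             # more priors -> higher risk
    score += 1.5 * max(0, 25 - age_at_first_offense)  # earlier onset -> higher risk
    score += 15.0 if violent_offense else 0.0         # severity of current charge
    return min(100.0, score)                          # cap the rating at 100

# Example: five prior arrests, first offense at 17, violent current charge
print(risk_score(5, 17, True))  # 67.0
```

Even this trivial version shows where the controversy lives: the choice of inputs and weights encodes judgments about which factors should count against a person, which is exactly what critics of these systems question.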
The story in Bloomberg is about Richard Berk, a professor of statistics at the University of Pennsylvania. He began working with Philadelphia's Adult Probation and Parole Department in 2006. At the time, the city had a big murder problem and a small budget. There were a lot of people in the city's probation and parole programs, and City Hall wanted to know which of them it truly needed to watch. Berk and a small team of researchers built a model to identify which people were most likely to commit murder or attempted murder while on probation or parole.
Since then, Berk has created similar programs in Maryland’s and Pennsylvania’s statewide parole systems. In Pennsylvania, an internal analysis showed that between 2011 and 2014 about 15 percent of people who came up for parole received different decisions because of their risk scores. Those who were released during that period were significantly less likely to be re-arrested than those who had been released in years past. The conclusion: The software was helping the state make smarter decisions.
These days, Berk works with the Norwegian government, which gathers an immense amount of information about the country's citizens and connects each of them to a single identification file. He wants to predict at the moment of birth whether a person will commit a crime by their 18th birthday, based on factors such as the child's environment and the history of the child's parents.