The First Step Act of 2018 (S. 756) is now federal law.
The criminal justice reform law has been widely hailed as long overdue, and rightly so. But as we turn to its implementation, we urge policymakers to take care with the central provision of the bill that calls for the development of a risk assessment system for inmates.
Ostensibly, the law is intended to ease mandatory minimum sentences for some drug-related crimes. But the centerpiece of First Step gives the Attorney General just seven months to develop and publish a risk and needs assessment system to determine how likely each inmate is to commit another crime or engage in “violent or serious misconduct.” That system will then be used to classify each inmate as minimum, low, medium, or high risk of recidivating.
The tool may be no more advanced than a simple survey akin to an online quiz. Or it might use machine-learning algorithms that change and adapt as they are recalibrated against new data.
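To make the contrast concrete, here is a minimal sketch of the “online quiz” end of the spectrum: a fixed, points-based checklist. Every item, weight, and cutoff below is invented for illustration; nothing about the actual system’s inputs or scoring has been published. A machine-learning version would differ mainly in that the weights would be fit to historical data and periodically re-fit, rather than set by hand.

```python
# Hypothetical, points-based risk survey. All items, weights, and
# cutoffs are invented for illustration; they do not reflect any
# actual DOJ instrument.

def survey_risk_category(age: int, prior_convictions: int,
                         completed_programs: int) -> str:
    """Map a few survey answers to a risk category via fixed points."""
    points = 0
    points += 2 if age < 25 else 0         # younger -> more points
    points += min(prior_convictions, 5)    # capped criminal-history points
    points -= min(completed_programs, 3)   # credit for completed programming

    if points <= 0:
        return "minimum"
    if points <= 2:
        return "low"
    if points <= 4:
        return "medium"
    return "high"

print(survey_risk_category(age=22, prior_convictions=1, completed_programs=0))
# -> "medium" (2 + 1 - 0 = 3 points)
```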
Regardless of how complex or simple the risk assessment system is, there are huge obstacles to overcome before such a tool can be used without building bias into the whole system. For one thing, every tool will be the result of subjective design decisions, and thus vulnerable to the same flaws as existing systems that rely on human judgment—and potentially vulnerable to new, unexpected errors. For another, it’s not at all clear how to properly calculate risk levels, much less how to protect against unintentional, unfair, biased, or discriminatory outcomes.
Policymakers commonly look to technology to solve difficult social problems, but it is worrisome that the law itself does nothing to address these challenges.
Unfortunately, like other well-intentioned risk assessment laws, such as California’s new bail reform law, S.B. 10, which replaces cash bail with pre-trial risk assessment tools, First Step looks to technology to help improve the criminal justice system without building in adequate safeguards to ensure that the underlying technology doesn’t replicate existing problems.
There is still time to remedy this fundamental problem, and it starts with recognizing that the problem exists. As the Department of Justice implements this bill, we hope it will heed a few of our concerns about using risk assessments in the criminal justice system.
Risk Assessment Tools Aren’t Magical Fairness Machines
Too often, policymakers and even judges assume that risk assessment tools can solve thorny problems by introducing concrete rubrics. In reality, risk assessments come with many inherent problems.
Risk assessment tools are often built using incomplete or inaccurate data because the representative dataset needed to correctly predict recidivism simply doesn’t exist. There is no reason to believe that the crime data we do have is sufficiently accurate to make reliable predictions. For example, the rate of re-arrest is often used as a proxy to predict future criminality. But it’s an inherently biased proxy because, among other things, young black and brown men are at a higher risk of arrest. It is well documented that communities of color are over-represented in the criminal justice system compared to their percentage of the population. And of course, being arrested doesn’t mean you actually committed a crime.
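A toy simulation shows how this proxy problem propagates. In the sketch below, with all rates invented for illustration, two groups reoffend at exactly the same true rate, but one is policed more heavily. The re-arrest “label” that a tool would be trained on comes out roughly twice as high for the more heavily policed group, so the tool learns a disparity that reflects policing, not behavior.

```python
# Toy simulation: re-arrest as a biased proxy for reoffending.
# All rates below are invented for illustration only.
import random

random.seed(0)
TRUE_REOFFENSE_RATE = 0.30                       # identical for both groups
P_ARREST_IF_REOFFEND = {"group_a": 0.70, "group_b": 0.35}  # unequal policing

def observed_rearrest_rate(group: str, n: int = 100_000) -> float:
    """Re-arrest rate: true reoffending filtered through arrest odds."""
    rearrests = 0
    for _ in range(n):
        reoffends = random.random() < TRUE_REOFFENSE_RATE
        if reoffends and random.random() < P_ARREST_IF_REOFFEND[group]:
            rearrests += 1
    return rearrests / n

for group in ("group_a", "group_b"):
    print(group, round(observed_rearrest_rate(group), 3))
# Expected output near 0.21 for group_a and 0.105 for group_b, even
# though both groups reoffend at the same true rate of 0.30.
```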
In addition, risk assessment tools typically assign predictive scores to individuals based on comparisons with the past behavior of other people. Judging a person’s future behavior by comparing their profile to the past actions of others like them runs counter to our basic notions of individual merit and fairness.
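The mechanics are easy to see in miniature. In the sketch below, with records and fields invented for illustration, a person’s “risk” is simply the historical re-arrest rate of other people who share their profile on paper; nothing about the individual’s own conduct enters the calculation beyond the profile fields themselves.

```python
# Toy actuarial scorer: a person's score is the historical outcome
# rate of others with the same profile. Records are invented.

HISTORY = [
    # (age_bracket, prior_convictions, was_rearrested)
    ("under_25", 2, True), ("under_25", 2, True), ("under_25", 2, False),
    ("over_25", 0, False), ("over_25", 0, False), ("over_25", 0, True),
]

def actuarial_score(age_bracket: str, prior_convictions: int) -> float:
    """Score = re-arrest rate among matching historical records."""
    outcomes = [rearrested for bracket, priors, rearrested in HISTORY
                if bracket == age_bracket and priors == prior_convictions]
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

# This individual is scored by what other people like them did.
print(round(actuarial_score("under_25", 2), 2))  # -> 0.67
```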
Mitigating the Shortcomings of a Risk Assessment System
We have safeguards in our criminal justice system to prevent courts and juries from judging people based on the actions of others. We must do the same here:
- The DOJ must develop procedures to ensure that the system is transparent. The tools cannot be hidden from the public as proprietary trade secrets or confidential law enforcement investigative techniques. They must be fully available and reviewable by the public and by independent data scientists, and subject to multiple layers of peer testing and accountability. The public, especially the people subjected to the risk assessment, must have access to information about the system’s development, along with a meaningful opportunity to understand and challenge the system’s conclusions and the underlying data inputs and processes that produce them.
- Risk assessment tools must be evaluated by independent scientific researchers—not the DOJ itself or a private vendor. To the extent Congress intends the law to reduce disparate impacts on protected classes, independent research must verify that the system can accomplish that and not make the problem worse. Those evaluations should be made public.
- Data on the system’s performance should be collected and published in transparency reports so that the public can measure how well the system meets its proffered public policy goals (a minimal sketch of such a report follows this list). People subjected to the risk assessment tools must be afforded due process to challenge the tool’s efficacy and to seek redress if it fails.
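To be concrete about the kind of transparency report we have in mind: at a minimum, it could publish error rates broken out by demographic group, as in the sketch below (all records invented for illustration). A persistent gap between groups in, say, the false positive rate is exactly the disparity that regular public reporting would surface.

```python
# Minimal sketch of a per-group transparency metric: the false positive
# rate, i.e., people labeled high risk who were not re-arrested.
# All records are invented for illustration.

RECORDS = [
    # (group, predicted_high_risk, actually_rearrested)
    ("group_a", True, False), ("group_a", True, True),
    ("group_a", False, False), ("group_a", True, False),
    ("group_b", False, False), ("group_b", True, True),
    ("group_b", False, False), ("group_b", False, True),
]

def false_positive_rate(group: str) -> float:
    """Among people who were NOT re-arrested, how many were flagged?"""
    flags = [predicted for g, predicted, rearrested in RECORDS
             if g == group and not rearrested]
    return sum(flags) / len(flags) if flags else 0.0

for group in ("group_a", "group_b"):
    print(f"{group}: false positive rate = {false_positive_rate(group):.2f}")
# -> group_a: 0.67, group_b: 0.00 on this invented data.
```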
The First Step Act does not require any of these measures, but they can and should be built into its implementation, whether through a rulemaking process or otherwise.
The DOJ must also answer several threshold questions before it can create the system. Chief among these is how to define minimum, low, medium, or high risk of recidivism, which is a “matter of policy, not math.” The Attorney General and the independent reviewers must clarify the scope of these risk categories and what they purport to predict. They must also decide what trade-offs they are willing to make to ensure justice and lower the massive social costs of incarceration. If the goal is actually to reduce mass incarceration, the thresholds should place more people in the minimum and low-risk categories than in the medium and high-risk ones.
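The point that category definitions are policy, not math, is easy to demonstrate. In the sketch below, with scores and cutoffs invented for illustration, the same ten people with the same scores land in very different categories depending solely on where the cutoff lines are drawn.

```python
# Same hypothetical scores, two different cutoff policies. Where the
# category lines fall is a policy choice, not a statistical inevitability.

SCORES = [0.05, 0.10, 0.15, 0.22, 0.30, 0.38, 0.45, 0.60, 0.72, 0.90]

def categorize(score: float, cutoffs: tuple) -> str:
    """cutoffs = (upper bound of minimum, of low, of medium)."""
    low_cut, med_cut, high_cut = cutoffs
    if score < low_cut:
        return "minimum"
    if score < med_cut:
        return "low"
    if score < high_cut:
        return "medium"
    return "high"

def census(cutoffs: tuple) -> dict:
    counts = {"minimum": 0, "low": 0, "medium": 0, "high": 0}
    for score in SCORES:
        counts[categorize(score, cutoffs)] += 1
    return counts

print("lenient cutoffs:", census((0.40, 0.60, 0.80)))
# -> {'minimum': 6, 'low': 1, 'medium': 2, 'high': 1}
print("strict cutoffs: ", census((0.10, 0.20, 0.40)))
# -> {'minimum': 1, 'low': 2, 'medium': 3, 'high': 4}
```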
The DOJ must ensure that the public can comment on the development of the risk assessment system and that defendants, defense lawyers, and the public will have the ability to analyze the system’s fairness.
These are just a few of the many concerns that must be addressed before we decide whether to jump on the algorithmic risk assessment bandwagon. We unpack these and other concerns in greater detail in our comments on the California Judicial Council’s proposed rules for the proper use and implementation of pre-trial risk assessment tools. We hope that the public agencies implementing First Step will heed our concerns and thoughtfully seek public input and feedback.