Algorithmic Decision-Making

As much as we may want them to, algorithms don’t think. The algorithm that compiles Facebook’s ‘Year In Review’ photo albums wasn’t thinking when a father’s “great year” included photographs of his recently deceased daughter. Algorithms don’t think when they approve credit cards, count votes, or determine financial aid. We rarely realize how much blind trust we place in algorithmic decision-making. Algorithms that most of us know very little about have huge impacts on human lives.

Algorithmic transparency is the idea that algorithms that impact individuals should be made public. But as with transparency of processes in general, we must learn to balance transparency with protecting commercial advantages and civil liberties. How do we hold algorithms accountable if they are incorrect or unfair?

As long as the government employs algorithms to shape behavior or determine human outcomes, those algorithms should be transparent and explained to the public. As Kate Crawford, a principal researcher at Microsoft Research, put it in an interview with The New York Times, “if you are given a score [by an algorithm] that jeopardizes your ability to get a job, housing or education, you should have the right to see that data, know how it was generated, and be able to correct errors and contest the decision.” However, access to the relevant code is not always the problem. We must ensure not only that the algorithms themselves are fair and equitable, but also that they are applied in a fair and equitable manner.

Potential Pitfalls with Transparency

Although transparency is one way to create accountability for algorithms, Alexandra Chouldechova, an assistant professor at Carnegie Mellon, argued that it is not always necessary or sufficient. In an interview with the HPR, she pointed out that companies might want to keep their algorithms secret to maintain their competitive advantages. Similarly, algorithms may involve sensitive data or may be used in security or government applications where one would want them to be protected.

However, Marc Rotenberg, President and Executive Director of the Electronic Privacy Information Center (EPIC), offered another viewpoint. In an email to the HPR, he described how “transparency can be achieved in a variety of ways, including third party auditing. But the preference should always be for public transparency.” When it is not safe or wise for the public to have access to the code, Rotenberg argued, the auditing should fall to a third party. Regardless, he contended, “there must always be a way to determine how a decision is made.” If that cannot be established, the algorithm should not be used.

Many would argue that human decisions cannot be easily understood or interpreted either: why should we hold algorithms to a higher standard than the human brain? But although transparency is not always easy or even plausible, it is certainly worth fighting for. And while the use of algorithms to make major decisions can seem scary, in the long run it saves money and time and has the potential to produce more equitable outcomes.

What’s at Stake?

Criminal sentencing is one area in which algorithms are theoretically useful, because they can help eliminate human racial bias in the criminal justice system. It is well known that the United States has a criminal justice problem. The United States is home to 5 percent of the world’s population but 25 percent of the world’s prisoners. Americans are incarcerated at the world’s highest rate: 1 in 110 adults is behind bars. Black males constitute a grossly disproportionate share of the prison population: 37.8 percent of prisoners are black, compared to only 12 percent of American adults. Arrest rates for black people in the United States are up to ten times higher than for other races. Prisoners have little access to social mobility, creating a cycle of disadvantage for their families and communities that disproportionately affects black people.

Some of the inequities in the U.S. prison population stem from the fact that black people are regularly given much harsher sentences than members of other races for equivalent crimes. In Florida, defendants in criminal prosecutions are assigned a judge-calculated score based on the crime committed and prior offenses. Matching scores should logically lead to the same sentence, but the Herald-Tribune found that black defendants receive much harsher punishments and that there is little oversight of judges. Meanwhile, The Washington Post found that judges in Louisiana handed down harsher punishments following unexpected losses by the Louisiana State University football team, and that these punishments fell disproportionately on black defendants. Clearly, the human bias inherent in sentencing is a problem that we must address.

Algorithms Can Replace Human Implicit Bias

Algorithms do not have favorite sports teams, nor do they get grumpy or have a bad day. They also haven’t been born into a system of implicit racial bias. For this reason, various nonprofits, governments, and private companies have been attempting to create algorithms to make the system of setting bail and determining sentence length more impartial to race. Using machine learning, in which historical data is used to inform future decisions and predictions, these algorithms seek to reduce human bias by assigning criminal sentencing decisions to computers.

States have already begun implementing algorithms in risk assessments to help battle implicit human racial bias. In January, New Jersey implemented an algorithm called the Public Safety Assessment. According to the Laura and John Arnold Foundation, a nonprofit which funds innovative solutions to criminal justice reform, the PSA predicts “the likelihood that an individual will commit a new crime if released before trial, and… the likelihood that [they] will fail to return for a future court hearing. In addition, it flags those defendants who present an elevated risk of committing a violent crime.”

The algorithm works by comparing “risks and outcomes in a database of 1.5 million cases from 300 jurisdictions nationwide, producing a score of one to six for the defendant based on the information.” It also provides a recommendation for bail hearings. If someone meets the right criteria, they could be released without paying any bail at all.
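To make the idea concrete, here is a minimal sketch, in Python, of how a pretrial risk score of this kind could work. The factor names, weights, thresholds, and recommendations below are invented for illustration and are not the PSA’s actual formula:

```python
# Hypothetical sketch of a pretrial risk score. The factors, weights, and
# cutoffs are illustrative inventions, not the PSA's real scoring rules.

def pretrial_risk_score(age_at_arrest, prior_failures_to_appear,
                        prior_violent_convictions, pending_charge):
    """Return a risk score from 1 (lowest) to 6 (highest)."""
    points = 0
    if age_at_arrest < 23:
        points += 2
    points += min(prior_failures_to_appear, 2) * 2
    points += min(prior_violent_convictions, 3)
    if pending_charge:
        points += 2
    # Collapse raw points onto a published-style 1-6 scale.
    return min(1 + points // 2, 6)

def bail_recommendation(score):
    """Map the score to a (hypothetical) recommendation for the judge."""
    if score <= 2:
        return "release on own recognizance"
    if score <= 4:
        return "release with monitoring"
    return "detain pending hearing / set bail"

score = pretrial_risk_score(age_at_arrest=21, prior_failures_to_appear=0,
                            prior_violent_convictions=0, pending_charge=False)
print(score, bail_recommendation(score))  # 2 release on own recognizance
```

The point is not the particular weights but the shape of the system: a fixed set of inputs, a deterministic score, and a recommendation that a judge can accept or override.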

New Jersey seems to be succeeding with the PSA. The state now sets bail for far fewer people than it once did because the algorithm predicts who is likely to flee and who is safe to release. While some believe that the system allows criminals to roam free, others believe that it is more equitable because cash bail often allows the wealthy to buy their freedom. The PSA score is only a recommendation, and judges are not required to follow it.

But what if a person doesn’t have access to the calculations behind the score they received? Furthermore, supervised learning algorithms depend on the data they are given. Learning systems are hard to control; Microsoft’s Twitter bot “Tay,” for example, which learned from the people it communicated with, was aimed at Millennials and became a racist Holocaust denier within hours. What if the historical data given to risk assessment algorithms is racially biased? These and other questions have been at the forefront of the debate over the ethics of risk assessment algorithms.
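A small sketch makes the worry concrete. The numbers below are invented and the model is a toy, but it shows how a supervised model trained on historical rearrest records can inherit bias in how those records were produced rather than in how people actually behaved (this assumes numpy and scikit-learn are available):

```python
# A minimal sketch (with made-up numbers) of how biased historical data
# propagates into a supervised model's predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
over_policed = rng.integers(0, 2, n)     # 1 = heavily policed neighborhood
offended = rng.random(n) < 0.30          # true offending rate: identical for both groups
# Historical "rearrest" labels reflect enforcement, not just behavior:
# offenses in heavily policed neighborhoods are far more likely to be recorded.
caught_prob = np.where(over_policed == 1, 0.9, 0.3)
rearrested = offended & (rng.random(n) < caught_prob)

model = LogisticRegression().fit(over_policed.reshape(-1, 1), rearrested)
risk = model.predict_proba([[1], [0]])[:, 1]
print(f"predicted risk, heavily policed group: {risk[0]:.2f}")
print(f"predicted risk, other group:           {risk[1]:.2f}")
# The model assigns roughly three times the risk to the heavily policed
# group even though the underlying offending rate was identical by design.
```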

COMPAS: A Case Study

The Correctional Offender Management Profiling for Alternative Sanctions, or COMPAS, is a risk assessment algorithm developed by Northpointe Inc., a private company. Since the algorithm is the key to its business, Northpointe does not reveal the details of its code. Nonetheless, multiple states use the algorithm in risk assessments to determine bail amounts and sentence lengths. Many worry that this algorithm, and others like it, have unfair effects on different groups, especially with regard to race.

COMPAS assigns defendants a score from 1 to 10 that indicates how likely they are to reoffend, based on more than 100 variables including age, sex, and criminal history. Race, ostensibly, does not factor into the calculus. However, ProPublica, an independent newsroom that produces investigative journalism in the public interest, found that the software assessed risk based on information like ZIP codes, educational attainment, and family history of incarceration, all of which can serve as proxies for race.
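A toy calculation shows why dropping the race column is not enough. Assume, purely for illustration, a heavily segregated city in which one group lives in a particular ZIP code 85 percent of the time and the other only 10 percent of the time; the ZIP code alone then reveals race most of the time:

```python
# Entirely made-up numbers illustrating why removing the race column does
# not remove race from a model's inputs when a proxy like ZIP code remains.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
race = rng.integers(0, 2, n)  # two groups, 0 and 1, for simplicity
# Hypothetical segregation: group 1 lives in ZIP "A" 85% of the time, group 0 only 10%.
zip_a = np.where(race == 1, rng.random(n) < 0.85, rng.random(n) < 0.10)

# Best guess of race from ZIP code alone: predict group 1 for ZIP "A".
guessed_race = zip_a.astype(int)
accuracy = (guessed_race == race).mean()
print(f"race recovered from ZIP code alone: {accuracy:.0%} of the time")
# With these assumptions the ZIP code "knows" race for roughly seven in
# eight people, so any model that uses it can lean on race indirectly.
```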

There are a few ways to define COMPAS’ success. One is its false positive rate: how many people the algorithm incorrectly labels as being at high risk of recidivism. Another is its false negative rate: how many genuinely risky people the algorithm misses. Which metric matters most depends on context. The justice system must decide whether it would rather punish more innocent people (tolerating a high false positive rate) or let more risky people go free (tolerating a high false negative rate).
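Both error rates are easy to state in code. The functions below, applied to a made-up set of predictions and outcomes, compute exactly the two quantities described above; computing them separately for each racial group is how disparities like the ones discussed next come to light:

```python
# The two error rates described above, computed from predicted labels and
# actual outcomes. The example arrays are made up purely for illustration.

def false_positive_rate(predicted_high_risk, reoffended):
    """Share of people who did NOT reoffend but were labeled high risk."""
    did_not_reoffend = [p for p, r in zip(predicted_high_risk, reoffended) if not r]
    return sum(did_not_reoffend) / len(did_not_reoffend)

def false_negative_rate(predicted_high_risk, reoffended):
    """Share of people who DID reoffend but were labeled low risk."""
    did_reoffend = [p for p, r in zip(predicted_high_risk, reoffended) if r]
    return sum(1 - p for p in did_reoffend) / len(did_reoffend)

# Toy data: 1 = labeled high risk / did reoffend, 0 otherwise.
predicted = [1, 1, 0, 0, 1, 0, 1, 0]
actual    = [1, 0, 0, 1, 1, 0, 0, 0]
print(false_positive_rate(predicted, actual))  # 0.4   (2 of 5 non-reoffenders flagged)
print(false_negative_rate(predicted, actual))  # 0.33  (1 of 3 reoffenders missed)
```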

Sixty percent of white defendants who scored a 7 on COMPAS reoffended, as did 61 percent of black defendants who scored a 7. On the surface, these numbers seem fairly equal in terms of true positives. But the false positives tell a different story. Among defendants who did not reoffend, 42 percent of black defendants were classified as medium or high risk, compared to only 22 percent of white defendants. In other words, black people who did not reoffend were nearly twice as likely as whites to be classified as medium or high risk.

These differences highlight the tension between imposing longer sentences to reduce recidivism and imposing shorter sentences at the risk of false negatives. This is a problem we must resolve before we can regulate any algorithm, and it depends on how we define fairness. According to Chouldechova, who has studied the COMPAS algorithm, “Fairness itself… is a social and ethical concept, not a statistical one.” There is very little legal precedent for, or regulatory power over, the use of private companies’ algorithms. But if governments are going to use these private algorithms in the public interest, they must first figure out how to ensure that the algorithms are fair and legitimate.
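Chouldechova’s point can be illustrated with a short, hypothetical calculation. If a score is equally calibrated for two groups (same precision among those flagged, same share of reoffenders caught) but the groups are rearrested at different rates, their false positive rates are mathematically forced apart; all numbers below are made up:

```python
# Why "fair" cannot be defined once and for all: equal calibration plus
# different base rates forces unequal false positive rates.

def forced_fpr(prevalence, precision, true_positive_rate):
    """False positive rate implied by calibration, detection, and base rate."""
    flagged_correctly = prevalence * true_positive_rate
    flagged_wrongly = flagged_correctly * (1 - precision) / precision
    return flagged_wrongly / (1 - prevalence)

precision, tpr = 0.6, 0.7  # identical for both groups
for group, prevalence in [("group A", 0.50), ("group B", 0.40)]:
    print(group, f"false positive rate: {forced_fpr(prevalence, precision, tpr):.0%}")
# With these assumptions: group A ~47%, group B ~31% -- unequal, even though
# the score treats individuals in both groups "the same".
```

Under these assumptions the two false positive rates come out near 47 percent and 31 percent: a gap of the same flavor as the one ProPublica reported, without any explicit unfairness built into the score itself.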

How Do We Hold Algorithms Accountable?

The rise of AI and algorithmic autonomy prompts the question: who, or what, should be held accountable when an algorithm is unfair or incorrect? Accountability means requiring a tech company to report and justify the decisions its algorithm makes, and to help mitigate any negative impacts.

To hold algorithms accountable, we must test them and have mechanisms to account for possible mistakes. Hemant Taneja of TechCrunch argued that tech companies “must proactively build algorithmic accountability into their systems, faithfully and transparently act as their own watchdogs or risk eventual onerous regulation.” But such optimism is unwarranted: corporations are unlikely to go out of their way to create accountability when there is no incentive to do so. Instead, regulators must push for procedural regularity, or the assurance that each person will have the same algorithm applied to them and that the procedure does not disadvantage any individual specifically. This baseline draws on the Fourteenth Amendment principle of due process, which helps explain why many argue that algorithms should be explainable to the parties they affect. Guidelines for holding algorithms accountable should be built around auditability, fairness, accuracy, and explainability.
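What such guidelines could look like in practice is sketched below: a hypothetical decision function wrapped so that every decision is logged with its inputs and the exact version of the scoring logic, and so that identical inputs can be shown to produce identical outputs. None of this reflects any real system; it is one possible shape for auditability and procedural regularity.

```python
# A minimal sketch of procedural regularity and auditability: decisions are
# logged with inputs and the version of the scoring logic, and identical
# inputs can be re-run to confirm identical outputs. The scoring function
# and version tag are stand-ins, not any real system.
import hashlib
import json

SCORING_VERSION = "2017-04-rev3"      # hypothetical version tag

def score(applicant):                 # stand-in for the real decision logic
    return min(1 + applicant["prior_offenses"], 6)

audit_log = []

def audited_decision(applicant):
    result = score(applicant)
    record = {
        "inputs": applicant,
        "version": SCORING_VERSION,
        "result": result,
        "fingerprint": hashlib.sha256(
            json.dumps(applicant, sort_keys=True).encode()).hexdigest(),
    }
    audit_log.append(record)          # retained so the decision can be contested later
    return result

# Procedural-regularity check: the same inputs must always yield the same score.
applicant = {"prior_offenses": 1, "age": 30}
assert audited_decision(applicant) == audited_decision(applicant)
```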

Many of the problems we currently see with algorithmic fairness are problems with how we define fairness and what kind of equitable outcomes we are looking for. While it is difficult to audit algorithms, it is not impossible. The harder question is who should define what is fair. Defining a fair process cannot and should not be left up to computer scientists alone. To properly regulate technology that impacts lives, we first need to regulate the choices that technologists build into their algorithms. This is no easy task. But no matter what, keeping algorithms accountable through human intervention, auditing, and strong government regulation is necessary to ensure that automated decision-making remains equitable and worthy of our trust going forward.