The Ethical Algorithm


AI’s algorithms are often touted as the unbiased, exact answer, but recent cases of people failed by automated systems call that notion into question. How can we keep AI ethical?

Artificial intelligence (AI) is often touted as the enlightened path to decision-making free of human bias. Whereas we humans are subject to favoritism, nostalgia, racism, and other problems that cloud our judgment, AI algorithms are trusted to deliver the cold, hard truth. For this reason, the state of Arkansas entrusted the allotment of Medicaid home services to an algorithm in 2016, citing that the human-run system fell victim to arbitrary decisions.

However, the new algorithm resulted in hundreds of people having their home services reduced. Some were hospitalized as a result.

Even worse, there was no way to challenge the system, as the state simply pointed to the algorithm as a single source of truth. After repeatedly being denied an explanation for her reduced services, a woman named Ethel Jacobs sued the state. Mid-trial, the state’s lawyers were forced to sheepishly report that there had been a calculation error and her services would be restored.

This is the problem with proclaiming AI to be universally “unbiased.” Generally, it’s humans who create the datasets and write the algorithms. Thus, it’s entirely possible for human bias or error to be encoded in a machine learning (ML) algorithm, even when that algorithm is presented as a neutral method of providing an answer. This can have far-reaching consequences.

Is Data Always Fair?

“When we’re talking about using AI to make predictions that involve individuals’ liberty, this is a natural place for questions of ethics and fairness to arise,” explains Kristian Lum, PhD. Lum is the lead statistician at the Human Rights Data Analysis Group, which applies science to human rights violations around the world. She joined the group while doing research on the uses of machine learning and predictive modeling in the criminal justice system in the United States.

Lum feels that until recently, many people assumed that better models and more technology would decrease the amount of bias in AI. However, she notes that ML-based models can reproduce the exact human biases they are meant to overcome. For example, she says, “In my work on predictive policing, we found that applying an ML model to inform police deployment decisions could exacerbate existing unfair race-based disparities in the enforcement of drug laws.”

In Lum’s example of predictive policing models, it would be difficult to obtain a truly representative, complete dataset from police reports alone. Certain crimes are over- or underreported, certain neighborhoods are policed more heavily than others, and police are more likely to investigate certain types of suspects. The dataset therefore represents not crime rates but how police respond to and record criminal incidents. And because predictions built on that data feed back into decisions about where to police, the original skew is reinforced rather than corrected, leaving algorithms a herculean task to overcome.
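
A toy simulation, with invented numbers, makes the mechanism concrete: if two districts have identical underlying crime rates but one starts out with more recorded incidents, the data will keep “confirming” that the heavily patrolled district has more crime, and an allocation rule trained on those records never corrects the imbalance.

```python
# A toy illustration (not Lum's actual model) of how skewed records perpetuate
# themselves. Both districts have the same true crime rate; district A simply
# starts out with more recorded incidents, so it receives more patrols.
import numpy as np

true_rate = np.array([0.5, 0.5])   # identical underlying crime rates
counts = np.array([100.0, 50.0])   # historical recorded incidents (already skewed)

for year in range(1, 6):
    patrols = counts / counts.sum()        # send patrols where the data says crime is
    recorded = 200 * true_rate * patrols   # but you only record what you are there to see
    counts += recorded
    print(f"year {year}: share of patrols in district A = {patrols[0]:.2f}")

# The share stays at 0.67 every year: nothing in the data ever reveals
# that the true rates in the two districts are equal.
```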

Lum adds, “It’s important that designers of AI-based systems, whether inadvertently or intentionally, don’t cause more harm than good. The devil’s in the details, of course, and quantifying both ‘harm’ and ‘good’ is not straightforward. It involves careful thought about who in particular might be harmed and who is benefitting from the technology.”

Ethics 101 for AI

To that end of teaching algorithms to distinguish harm from good, Nell Watson, a member of the artificial intelligence faculty at Silicon Valley think tank Singularity University, co-founded EthicsNet, which seeks to create a dataset for machine-ethics algorithms. Taking a page from ImageNet, a massive repository of labeled images, Watson hopes to collect datasets from which machines can learn to behave in a more sociable, friendly, and kind manner.

Watson takes the interesting stance that humans are “a biological AI system that happens to be implemented in a primate.” She argues that what makes us human is our cultural data set, and as we delegate more of our decisions to algorithms, giving machines this “software” will help them fit into society.

Watson explains that the aim of EthicsNet is to replicate “a well-raised six-year-old child” in terms of moral reasoning capabilities. “We’re not trying to solve the great philosophical issues,” she says. “We’re just trying to generate a system that recognizes that if you see some trash lying around in the street, it would be a nice thing to pick that up and put that in the bin.”

“If we are delegating a lot more of our decisions to our machines—economic decisions but also social decisions—they reflect on us,” she continues. “Machines need to be taught how to be socialized, how to act in a way that is likely to fit nicely in with society, and not cause alarm or consternation among other people.”

The Black Box Problem

Another concern with AI is balancing the need for model explainability against the protection of trade secrets. In other words, how do you audit a model without completely giving away how it works or the private data that goes into it? “You’re inviting people to give away their proprietary secret sauce,” Watson explains. “And I think a lot of companies would be very, very loathe to do that for quite understandable reasons.”

Lum adds that there has recently been interesting work surrounding the auditing of black-box models. Rather than revealing all the details of the system, a company could provide an API (application programming interface) so that auditors could test inputs and see the corresponding outputs of the model.
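
As a sketch of what such an audit might look like in practice, the snippet below treats the model purely as an input-to-output interface. The scoring function is a hypothetical stand-in for a vendor-provided API call, and the field names are invented; the point is the auditor’s paired queries, not the model itself.

```python
# A minimal sketch of black-box auditing: the auditor never sees the model's
# internals, only an input -> output interface exposed by the vendor.
def vendor_api_score(applicant: dict) -> float:
    """Hypothetical stand-in for a remote call such as requests.post(VENDOR_URL, json=applicant)."""
    # Toy scoring logic the auditor cannot see.
    return 0.3 * applicant["prior_incidents"] + (0.2 if applicant["zip_code"] == "72201" else 0.0)

def paired_test(applicant: dict, field: str, alt_value) -> float:
    """Query the interface twice with inputs identical except for one field,
    and return the change in score attributable to that field."""
    counterfactual = {**applicant, field: alt_value}
    return vendor_api_score(applicant) - vendor_api_score(counterfactual)

# Example audit: does changing only the ZIP code shift the model's output?
applicant = {"prior_incidents": 1, "zip_code": "72201"}
print(paired_test(applicant, "zip_code", "72764"))  # a nonzero result means the field matters
```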

Watson also sees cryptography as a way to secure private data: “I believe that the confluence of blockchain and AI is going to be essential. I think that the 2020s are going to be concerned with this junction between machine intelligence, machine ethics (loading values into machines and giving them a bit of moral wisdom), and machine economics (blockchain and similar kinds of crypto-technologies).”

A More Ethical Future

This is not to say that AI is always biased or unethical. In July, the Harvard Business Review published an article entitled “Want Less-Biased Decisions? Use Algorithms,” which cites five examples of algorithms producing less biased conclusions than human counterparts despite training on data with historical prejudices (for example, an algorithm that determines if children should be placed in protective services).

However, Rachel Thomas, PhD, of fast.ai, points out that the Harvard article ignores several major considerations. For example, algorithms are often implemented without an appeals process, as seen in the Arkansas healthcare case, and with little oversight. Algorithms are also often deployed at a much larger scale than human decision-makers because doing so is more cost-effective, so an algorithmic mistake causes far more widespread damage than a few humans making biased decisions.

Moreover, she argues, “Instead of just focusing on the least-terrible existing option, it is more valuable to ask how we can create better, less biased decision-making tools by leveraging the strengths of humans and machines working together.” In fact, when Google released its guidelines for responsible AI practices this year, the first recommendation was to put humans at the center of the design process.

“What we are trying to solve is not solely a technical problem,” Lum elaborates. “There’s increasing recognition that developers need to engage with social scientists, policymakers, and ethicists when designing systems that will inevitably impact lots of people.”
