How to crop a gorilla in a police car

On racism and trust in AI, the economics of evil, and whether it is even worth addressing. Let’s take an honest look at bias in algorithms and swear off deploying AI in situations with the potential to negatively impact human lives. At least for now.

The author is the founder of Bohemian AI.

It started with a scandal: this September, it came to light that Twitter was cropping people’s pictures by skin colour. If you were lucky enough to be white, it left you in the picture; if you were unlucky enough to be black, Twitter uncompromisingly cropped you out. Not always, but often enough to infuriate a lot of people. When the public outcry finally came, Twitter corrected its cropping AI and apologised.
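The failure was easy to demonstrate from the outside: users posted tall composite images containing one white face and one black face and watched which one the preview kept. A minimal sketch of that kind of audit is below; `crop` is a hypothetical stand-in for the model under test, and the deliberately biased dummy at the end only shows what a positive signal would look like.

```python
# Sketch of a pairwise cropping audit. `crop` is a hypothetical stand-in
# for a saliency-based cropper: given two faces, it returns the one kept.
from itertools import permutations
from typing import Callable

def audit(crop: Callable[[str, str], str],
          group_a: list[str], group_b: list[str]) -> float:
    """Return the share of pairings in which a group-A face survives the crop."""
    kept_a = trials = 0
    for a in group_a:
        for b in group_b:
            # Swap positions so a placement preference can't masquerade as bias.
            for top, bottom in permutations((a, b)):
                kept_a += crop(top, bottom) == a
                trials += 1
    return kept_a / trials  # ~0.5 means no preference; near 0 or 1 is a red flag

def always_group_a(top: str, bottom: str) -> str:
    # Deliberately biased dummy model, used here only to show the signal.
    return min(top, bottom)

print(audit(always_group_a, ["a1", "a2"], ["b1", "b2"]))  # -> 1.0
```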

prg.ai published a rather defensive commentary on the subject on its Facebook page which, in short, states that AI itself can’t be racist because it doesn’t have the sense to be, and adds a technical explanation of how such mistakes can happen. The text you’re reading now was written as a polemic with that commentary: with what it said, and especially with what I think it should have said and didn’t.

Stubborn intelligence

First of all: responding to criticism of an apparently discriminatory algorithm by explaining how AI technically works makes about as much sense as comforting the relatives of car crash victims by explaining the mechanics of brakes. It’s certainly helpful to know, but it’s a bit of a distraction from the obvious “elephant in the room.”

Of course, in the case of the embarrassment over Twitter’s pictures, lives are not at stake, but trust is. This affair is just the latest in a tiresome string of scandals: think of Microsoft’s neo-Nazi bot Tay, for example, or Amazon’s infamous recruitment algorithm that gave CVs lower scores if they contained the word ‘women’s’. What this series of ‘accidents’ consistently shows is that AI engineers are not in control of their work. They can neither predict nor detect discrimination in their systems, and often can’t fix it, either.

The first step to solving this problem is admitting that we have one: AI is hurting humans. Not always, not even most of the time, but often. Wherever machine learning directly touches people, whether recognising faces, approving loans, or recommending doctors, there is not only a possibility but an absolute certainty that it will disadvantage or offend someone. And most likely do both.

Of course, AI does not do it on purpose. The makers of facial recognition software certainly don’t want to hurt anyone, but tell that to Mr Williams, whom the police wrongfully arrested in front of his children because no one had bothered to test the facial recognition software properly. Ask the people whom a stupid algorithm judged ineligible for better healthcare coverage because they had spent too little money at the doctor’s office how much they care about the motivations of its programmers. That is, if you can catch your respondents alive.

Overblown sensationalism? Maybe. Crisis of confidence? Certainly.

In 2018, Amazon dropped its AI-based recruitment software after it was found to give lower scores to the CVs of female candidates. © Bryan Angelo / Unsplash

It’s expensive around Jupiter

Some of Jupiter’s moons appear to be suitable for life, yet, to date, no astronauts have landed on them. Not because we couldn’t figure out how to get there, but because nobody is eager to fund what looks like a pastime project. Fair AI is in the same situation: nobody is willing to put in the money. So if we want to get to the bottom of discriminatory machines, let’s stick to the old journalistic adage: follow the money.

My first, and unfortunately not last, experience with this ‘economics of evil’ was this: after a few weeks of working on an innocent-looking government project outside the Czech Republic, we discovered that our AI, while otherwise working very nicely, consistently favoured the wealthiest 1% of citizens. Little data was available on them, and our algorithm was incorrectly learning that rich equals good. There was a simple solution: collect as much data on the richest as on everyone else. The price tag for that simple solution was $50,000, more than the entire project had cost up to that point. The answer from top management was predictable and clear: “We don’t care about the one per cent.”
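What happened is easy to reproduce in miniature. The sketch below is a deliberately contrived illustration, not our actual project: every number and feature in it is invented. The point is only that when one group is rare in the training data, a model can latch onto a spurious shortcut for it, and that a per-group evaluation exposes what the aggregate score hides.

```python
# Contrived illustration: a rare, skewed group teaches the model "rich = good".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, income_level):
    # "merit" alone drives the true label; "income" is visible but irrelevant.
    merit = rng.normal(size=n)
    income = np.full(n, income_level) + rng.normal(scale=0.1, size=n)
    return np.column_stack([merit, income]), (merit > 0).astype(int)

# 9,900 ordinary citizens versus only 100 from the wealthiest group, and by
# accident of sampling the few rich training examples all happen to look "good".
X_poor, y_poor = make_group(9_900, income_level=0.0)
X_rich, y_rich = make_group(100, income_level=5.0)
y_rich[:] = 1

model = LogisticRegression().fit(np.vstack([X_poor, X_rich]),
                                 np.concatenate([y_poor, y_rich]))

# Per-group audit: the model now tends to predict "good" for the rich
# regardless of merit, so its accuracy on that group collapses.
X_pt, y_pt = make_group(1_000, income_level=0.0)
X_rt, y_rt = make_group(1_000, income_level=5.0)
print("accuracy, ordinary:", model.score(X_pt, y_pt))
print("accuracy, wealthy: ", model.score(X_rt, y_rt))
print("share of wealthy predicted 'good':", model.predict(X_rt).mean())
```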

In AI, the inexorable law of diminishing returns applies: each additional improvement costs more and more while earning less and less. That’s why Google, after it came to light that its image recognition software was labelling Black people as gorillas, removed the gorilla category from the system altogether rather than trying to fix the mistake. This is also why tech companies, including Twitter, are rethinking their facial recognition projects. The results are too poor, and it’s too expensive to improve them. At least for now.

A challenge for the end

We solve the easy problems simply by remembering to think about them at all. For example, we don’t assume that a patient who didn’t spend a penny on medicine last year must be healthy; maybe they are dying and just don’t have the money. We can solve moderately difficult problems with money, though even that has a relatively low ceiling, as we saw in the example of the one per cent that nobody cared about.
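That first, easy trap is worth making concrete. Below is a toy illustration with entirely invented data: score ‘health’ by last year’s spending, and you quietly mislabel exactly the patient described above.

```python
# Toy illustration of the proxy trap: spending is not health.
patients = [
    # (name, truly_sick, spent_last_year_usd)
    ("A", False, 150),   # healthy, one routine check-up
    ("B", True, 9500),   # sick, and able to pay for treatment
    ("C", True,    0),   # sick, but never able to afford a visit
]

THRESHOLD = 500  # naive rule: spending above this means "has health problems"
for name, truly_sick, spent in patients:
    proxy_says_sick = spent > THRESHOLD
    if proxy_says_sick != truly_sick:
        print(f"patient {name}: the proxy label is wrong")  # prints only C
```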

The most challenging problems, though, we may never solve. But dismissing them and thinking that it doesn’t matter anyway is the worst thing we can do for the future of AI. Countless AI ethics committees, institutes, and associations are springing up around the world today, and it will take decades to work our way toward a workable society-wide consensus.

“We may never solve the hardest problems. But dismissing them and thinking that it doesn’t matter anyway is the worst thing we can do for the future of AI.”

Until then, we can handle problems from the ‘too expensive’ category with a simple philosophical razor: wherever we exert force on humans, whether through law, monopoly, or existential necessity, let’s not use AI, please. At least for now. Because it’s almost certain to be unfair, and most of the time we won’t even be able to find out why.

Wherever a person has the choice to use AI or not, let’s be completely honest with them. Let’s be upfront about which issues are too expensive to fix. Let everyone decide for themselves how much they mind Twitter cropping their black friends out of their photos. Let’s admit that we don’t know this or that, that we’re not sure, or that we haven’t tested something properly. Let’s stick a nutrition label on AI. Not an annoying one, but one that is easy to find. And let’s introduce that honesty ourselves, before some stupid regulation does it for us. Because as history shows, regulation may be written by people who can’t even use Microsoft Excel properly.
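What might such a label look like? Here is a minimal sketch, loosely inspired by the ‘model card’ idea; every field name and value is an invented example of what honesty could contain, not an established standard.

```python
# Sketch of a machine-readable "nutrition label" for a model. All fields and
# values below are invented examples, not a standard.
from dataclasses import dataclass, field

@dataclass
class NutritionLabel:
    model_name: str
    intended_use: str
    known_failure_modes: list[str] = field(default_factory=list)
    untested_on: list[str] = field(default_factory=list)  # the honest "we don't know"
    too_expensive_to_fix: list[str] = field(default_factory=list)

label = NutritionLabel(
    model_name="image-crop-v2",
    intended_use="Choosing a preview crop for user-uploaded photos.",
    known_failure_modes=[
        "May favour lighter faces when several people are in frame.",
    ],
    untested_on=["Group photos", "Low-light images"],
    too_expensive_to_fix=["Balanced retraining across all skin tones"],
)
print(label)
```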