The author is the founder of Bohemian AI.
It started with a scandal: this September, it came to light that Twitter was cropping people’s pictures by skin colour. If you were lucky enough to be white, it left you in the picture; if you were unlucky enough to be black, Twitter uncompromisingly cropped you out. Not always, but often enough to infuriate a lot of people. When the public outcry finally came, Twitter corrected its cropping AI and apologized.
prg.ai published a rather defensive commentary on the subject on its Facebook page, which — in short — states that AI itself can’t be racist because it doesn’t have the sense to be, and adds an explanation of how such mistakes may happen from a technical perspective. The text you’re reading now was written as a polemic about what was said in the article, and especially about what I think should have been said and wasn’t.
First of all: responding to criticism of an apparently discriminatory algorithm by explaining how AI technically works makes about as much sense as comforting the relatives of car crash victims by explaining the mechanics of brakes. It’s certainly helpful to know, but it’s a bit of a distraction from the obvious “elephant in the room.”
Of course, in the case of the embarrassment regarding Twitter pictures, lives are not at stake, but trust is. This affair is just the latest in a tiresome string of scandals — think of Microsoft’s neo-Nazi bot Tay, for example, or Amazon’s infamous recruitment algorithm that gave CVs lower scores if they contained the word ‘women’s.’ What this series of ‘accidents’ consistently shows is that AI engineers are not in control of their work. They can neither predict nor detect discrimination in their systems, and often they can’t fix it, either.
The first step to solving this problem is admitting we have a problem: AI is hurting humans. Not always, not even most of the time, but often. Wherever machine learning directly touches people — recognising faces, approving loans, recommending doctors — there is not only the possibility but absolute certainty that it will disadvantage or offend someone. And most likely do both.
Of course, AI does not do it on purpose. The makers of facial recognition software certainly don’t want to hurt anyone, but tell that to Mr Williams. The police accidentally arrested him in front of his children because no one bothered to test the police software properly. Ask the people whom a stupid algorithm judged ineligible for better healthcare coverage because they spent too little money at the doctor’s office how much they care about the motivations of its programmers. That is, if you can catch your respondents alive.
Overblown sensationalism? Maybe. Crisis of confidence? Certainly.
It’s expensive around Jupiter
Some of Jupiter’s moons appear to be suitable for life, yet, to date, no astronauts have landed on them. Not because we can’t figure out how to get them there, but because nobody is eager to fund what may seem a pastime project. Fair AI is in the same situation: nobody is willing to put in the money. Therefore, if we want to get to the bottom of discriminating machines, let’s stick to the old journalistic adage: follow the money.
My first, and unfortunately not last, experience with this ‘economy of evil’ was this: after a few weeks of working on an innocent-looking government project outside the Czech Republic, we discovered that our AI, while working very nicely otherwise, consistently favoured the wealthiest 1% of citizens. Little data was available on them, and our algorithm was incorrectly learning that rich equals good. There was a simple solution to our problem: collect as much data on the richest as on the poor. The price tag for this simple fix was $50,000, more than the entire project had cost up to that point. The answer from top management was predictable and clear: “We don’t care about the one per cent.”
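The failure mode described above, where a group that is rare in the training data ends up systematically favoured or disfavoured, only surfaces if someone explicitly measures outcomes per group. Below is a minimal sketch of such a per-group audit; the data is invented for illustration and the group names are hypothetical, not from the project in question.

```python
# Toy audit: compare the rate of favourable model decisions across groups.
# A large gap between groups is the kind of disparity that stays invisible
# unless someone deliberately looks for it.
from collections import defaultdict

# (group, decision) pairs — invented numbers for illustration only
decisions = (
    [("wealthy", 1)] * 48 + [("wealthy", 0)] * 2
    + [("other", 1)] * 30 + [("other", 0)] * 20
)

tally = defaultdict(lambda: [0, 0])  # group -> [favourable, total]
for group, favourable in decisions:
    tally[group][0] += favourable
    tally[group][1] += 1

rates = {g: fav / total for g, (fav, total) in tally.items()}
for group, rate in rates.items():
    print(f"{group}: favourable rate {rate:.0%}")
```

With these toy numbers the audit reports a 96% favourable rate for one group against 60% for the other — exactly the kind of gap that costs real money to diagnose and even more to fix.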
In AI, the inexorable law of diminishing returns applies, meaning that each additional improvement costs more and more while earning less and less. That’s why Google, after it came to light that its image recognition software was labelling Black people as gorillas, removed the gorilla category from the system altogether rather than trying to fix the mistake. This is also why tech companies — including Twitter — are rethinking their facial recognition projects. The results are too poor, and it’s too expensive to improve them. At least for now.
A challenge for the end
Easy problems we often solve simply by remembering to think about them in the first place. For example, we don’t assume that because a patient didn’t spend a penny on medicine last year, they are healthy — maybe they are dying and just don’t have the money. Moderately difficult problems we can solve with money; still, even that has a relatively low ceiling, as we have already seen in the example of the one per cent that nobody cared about.
Then, we may never solve the most challenging problems. But dismissing them and thinking that it doesn’t matter anyway is the worst thing we can do for the future of AI. There are countless AI ethics committees, institutes, and associations springing up around the world today, and it will take decades to work our way toward a workable society-wide consensus.
Until then, we can handle problems from the ‘too expensive’ category with a simple philosophical razor: wherever we exert force on humans — via law, monopoly, or existential necessity — let’s not use AI, please. At least for now. Because it’s almost certain to be unfair, and most of the time, we won’t even be able to find out why.
Wherever a person has the choice to use or not use AI, let’s be completely honest with them. Let’s be upfront about what issues are too expensive. Let everyone decide how much they mind Twitter cropping their black friends out of their photos. Let’s say upfront that we don’t know this and that, that we’re not sure, or that we haven’t tested it properly. Let’s stick a nutrition label on AI. Not an annoying one, but one that is easy to find. And let’s introduce that honesty before some stupid regulation does. Because as history shows, it may be written by people who can’t even use Microsoft Excel properly.
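What could such an easy-to-find “nutrition label” look like in practice? The sketch below is purely illustrative — the field names are invented for this example and do not follow any existing standard, though the idea echoes published proposals like model cards and datasheets for datasets.

```python
# A hypothetical AI "nutrition label": a plain, machine-readable summary
# of what a system does, what it hasn't been tested on, and what its
# makers honestly don't know. All field names here are invented.
import json

label = {
    "system": "photo-cropping model (example)",
    "intended_use": "suggesting crops for preview thumbnails",
    "known_limitations": [
        "not evaluated across the full range of skin tones",
        "behaviour on low-contrast backgrounds untested",
    ],
    "last_fairness_audit": None,  # honest answer: none has been done
}

print(json.dumps(label, indent=2))
```

The point is not the format but the candour: a `None` where an audit should be tells the user more than a page of marketing copy.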