Computers Are Not Your Friends: the Iowa Caucus, the Shadow App, and the End of Faith

When Alexandria Ocasio-Cortez said AI could be racist, it almost burned down the internet. Smug dudes in baseball caps were hooting and hollering and falling over themselves to laugh at the ridiculous idea that a computer system could hold human values. AOC was right. It’s one of the predominant issues in AI right now—an artificial intelligence is built from human datasets, and the selection of those datasets is done by a human, and that means there’s a chance to program human biases into an AI. 

In 2018, researchers at MIT created a “psychopath AI” called Norman, named after Norman Bates. They exclusively fed Norman horrifying data: car crashes, dead bodies, mutilation and destruction. Norman came out fucked up. Not everything is as dramatic as turning an AI into a serial killer, but we’re seeing similar issues everywhere: facial recognition cameras—predominantly trained on datasets of white men—routinely fail to recognise black women. There’s something we need to acknowledge if we’re going to have healthy democracies: technology is not impartial. It is made by people and used by people, and it is as capable of bias as those same people. 
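The mechanism is mundane, and it helps to see how little it takes. Here’s a deliberately toy sketch (hypothetical numbers, a made-up “detector”, nothing to do with any real facial recognition system) of how a skewed training set becomes a skewed tool, with no malice anywhere in the code:

```python
# A toy "detector" that learns an acceptance range from its training data.
# If the training set only contains one kind of example, the learned range
# excludes everything else -- the bias lives in the data, not the algorithm.

def train(samples):
    # Learn the min/max feature value seen in training (a crude stand-in
    # for, say, skin-tone brightness in a face dataset).
    return min(samples), max(samples)

def detect(model, value):
    lo, hi = model
    return lo <= value <= hi

# Hypothetical training set: only high-brightness faces ever get seen.
biased_training_set = [0.72, 0.80, 0.85, 0.78, 0.90]
model = train(biased_training_set)

print(detect(model, 0.82))  # True  -- resembles the training data
print(detect(model, 0.35))  # False -- a real face the model never saw
```

Nobody typed “reject dark faces” anywhere in that code. The selection of the training samples did it for them.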

We’re currently seeing this with the disastrous Iowa Caucus: the Shadow App that delivered miscounts was made by a secretive company that took funding from Hillary Clinton, Joe Biden, and Pete Buttigieg, and a number of its staff are former Hillary staffers. In one particular caucus, Shadow took Bernie Sanders’ 116 votes, compared them to Buttigieg’s 73 votes, and came out with the same number of delegates. There’s not enough there to say it was intentional, but there’s more than enough to spur conspiracy theories, to destabilise trust in our institutions—to make millions of people around the world shrug and say “eh, fuck it, what’s the point?” and never show up to vote. 
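To be fair to the maths, that result is less mysterious than it sounds: caucus delegate allocation is roughly proportional-with-rounding, and rounding can flatten a big vote gap. A minimal sketch, assuming two delegates at stake and only these two groups in the room (illustrative totals, not the actual precinct’s reported numbers):

```python
# Caucus-style allocation: delegates = round(votes * at_stake / attendees).
# Rounding means two very different vote counts can yield the same award.

def award_delegates(votes, total_attendees, delegates_at_stake):
    # Proportional share, rounded to the nearest whole delegate.
    return round(votes * delegates_at_stake / total_attendees)

total = 116 + 73  # assume only these two viable groups in the room
sanders = award_delegates(116, total, 2)    # 1.23 rounds down to 1
buttigieg = award_delegates(73, total, 2)   # 0.77 rounds up to 1

print(sanders, buttigieg)  # 1 1 -- same delegates despite a 43-vote gap
```

The formula isn’t the scandal; it works the same for everyone. The scandal is that when the app reporting the numbers is opaque and its makers have skin in the game, nobody can tell rounding from ratfucking.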

I don’t think the Shadow team called a Ratfucking Meeting and drew out plans to Ratfuck Bernie; I think the Shadow team let unconscious biases level out the playing field, because their guy is a frontrunner but not the frontrunner, and they wanted to see their guy win. Because they’re people, and people have biases, and the machines they make often carry those biases, even when they don’t know they’re doing it. 

We tilt towards people we like. Hell, I’m doing it right now: I like Bernie, and I’ve sat down and tried to be professional and make sure everything in this article is as objective as possible, but extricating the self is hard. I think I’ve succeeded, but if the internet has got a surplus of anything, it’s folks who are ready to loudly disagree. The least I can do is say: I’m a leftist, and that probably changes the data I give you, whether I know I’m doing it or not.  

Maybe somebody on the Shadow team did fuck up, honestly, without bias. That’s where the tech world seems to have landed on this whole circus. Shadow tried to do a very complex job with limited funding and an extremely short timeframe (two months and $60,000 is nothing in Silicon Valley terms, especially to find a solution to electronic-fucking-voting), and they may well have just dropped the ball. If you want a fun experiment, bring up electronic voting with a group of policymakers, then with a group of engineers. The general consensus from techies is that we’re just not there yet and we can’t guarantee safe or reliable systems, but politicians all over the world are rushing to implement it anyway. The issues with Shadow’s app seem pretty clear-cut, but it was built and tested against a relatively small dataset, and they might simply not have considered what would happen under a full caucus night’s load. They just didn’t scale their tool correctly, and God knows it wouldn’t be the first time a startup failed to scale. We wind up with the same problem: we trust our tech too much. We trust it like it’s a fortress and not a matchstick palisade. 

In the UK, a group with strong ties to the LibDems launched a ‘tactical voting site’ that leant heavily LibDem, recommending them as the tactical vote even in strong Labour constituencies. GetVoting claimed impartiality, claimed to be just the data, but it ended up making wildly misleading recommendations during one of the most crucial elections in recent history. In the end, the LibDems split the vote across the UK. Did GetVoting do it on purpose? I think there’s a stronger case there than with Shadow: the results are further from reality, and the funding links are tighter. It doesn’t matter: in the end, nobody won.

We often talk about datasets and AIs and applications as though they spring into existence fully-formed from cracks in the earth; we live in an age of perfect miracles, and we trust them with our lives. 

That trust is killing us. 

Sometimes it’s malice, sometimes it’s incompetence, sometimes it’s something more gentle and strange and human that’s hard to put a name to. The end result is the same. Technology can be liberating and empowering, but that same power is dangerous if mishandled, and right now we’re a bunch of drivers who refuse to admit that we’ve blown a tyre; refuse to admit that it’s possible for tyres to blow; drivers who are careening down State Highway 1 with our dicks in our hands screaming that our car can drive all the way to heaven.  
