
How Facebook got addicted to spreading misinformation


Quoting from Karen Hao's article in MIT Technology Review:

When I described the Responsible AI team’s work to other experts on AI ethics and human rights, they noted the incongruity between the problems it was tackling and those, like misinformation, for which Facebook is most notorious. “This seems to be so oddly removed from Facebook as a product—the things Facebook builds and the questions about impact on the world that Facebook faces,” said Rumman Chowdhury, whose startup, Parity, advises firms on the responsible use of AI, and was acquired by Twitter after our interview. I had shown Chowdhury the Quiñonero team’s documentation detailing its work. “I find it surprising that we’re going to talk about inclusivity, fairness, equity, and not talk about the very real issues happening today,” she said.

“It seems like the ‘responsible AI’ framing is completely subjective to what a company decides it wants to care about. It’s like, ‘We’ll make up the terms and then we’ll follow them,’” says Ellery Roberts Biddle, the editorial director of Ranking Digital Rights, a nonprofit that studies the impact of tech companies on human rights. “I don’t even understand what they mean when they talk about fairness. Do they think it’s fair to recommend that people join extremist groups, like the ones that stormed the Capitol? If everyone gets the recommendation, does that mean it was fair?”

I get a sense of incompetence from the company as a whole, and from some individuals, when it comes to even trying to fix the fundamental flaws in the way this growth-maximisation technology and frame of mind are messing with the world.

So many smart people, so many research outcomes - even internal ones - pointing to the destructive ways the Facebook ecosystem manipulates the information people get, and thus the way they think, and yet so much unwillingness to actually tackle those problems.

Also, the only goal Mark Zuckerberg has is growth. No matter what.

Thanks, Maya, for the find.
