
Zuckerberg sees Facebook’s mission as “giving people a voice,” and, along with that, giving them the benefit of the doubt. He’s maintained this stance despite understanding that if you’re the platform where people say things, a lot of terrible things will be said on your platform. Maybe Facebook could draw up a list of awful things that are unsayable on the platform, then train moderators and AI to proactively find and destroy those posts. Maybe building that list, with all its dark variations, along with a system for enforcing it, is a tractable problem.

But there are a lot of things to be terrible about in this world. Millions of Americans “authentically” hold racist views. If someone posits that black people were better off under slavery, what should be done? Or if someone denies the Reconstruction-era campaign of white racial terrorism in the American South against black people, what should be done? Should Facebook toss every racist assertion off the platform?

Or take very different people and situations that might nonetheless pose difficult challenges in moderating speech on Facebook. Assata Shakur “authentically” believed herself to be fighting for black liberation, and many people in America agree that she was. If someone praises her role in the death of a policeman, what should be done with that post? What about approving of the armed resistance Nelson (and Winnie) Mandela employed to help defeat apartheid in South Africa?

Policing what people say on Facebook is not a problem with easy solutions. This private company has deeply enmeshed itself in society’s information flows, which makes it one of the most important arbiters of what people know about the world. Is it ideal for a private company to define its own standards for speech and propagate them across the world? No. But here we are.

The stance that the company is evolving toward seems to be a kind of sliding scale of distribution. “What we will do is we’ll say, ‘Okay, you have your page and if you’re not trying to organize harm against someone, or attacking someone, then you can put up that content on your page, even if people might disagree with it or find it offensive,’” Zuckerberg said. “But that doesn’t mean that we have a responsibility to make it widely distributed in News Feed.”

Facebook can quietly keep posts from being seen without actually taking them down. Call it “sort of censorship.” We don’t know precisely how the downgrading system works, but it’s reasonable to assume that it is quite sophisticated, and not a simple toggle. Think about how that applies to the old rule that you can’t yell fire in a crowded theater. Facebook can decide to let you yell fire with as many exclamation points as you like, but only let a small fraction of its users hear you.

You don’t need to be a free-speech absolutist to imagine how this unprecedented, opaque, and increasingly sophisticated system could have unintended consequences or be used to (intentionally or not) squelch minority viewpoints. Everyone, Facebook included, wants to find a way out of the mess generated by every voice having a publishing platform. But what if there is no way out of it?

