Zuckerberg Is Wrong: Don’t Regulate Our Content

Last Sunday, Mark Zuckerberg made another Facebook strategy post. (This is his second major policy post in as many months. I responded to his March 6 missive as well.) Unsurprisingly, it was a disaster.

I want to shake him by his lapels and say, “Mark! Mark! Wrong way! Stop going that way! We don’t want more snooping and regulation by giant, superpowerful organizations like yours and the U.S. government! We want less!”

He says he has spent two years focused on “issues like harmful content, elections integrity and privacy.” Since these have been the focuses of someone who is now moving to regulate the Internet, it’s a good idea to stop and think a bit about each one. They are a mixed bag, at best.

1. Zuckerberg’s concerns

Concern #1: “Harmful content”

Zuckerberg’s glib gloss on “harmful content” is “terrorist propaganda, hate speech and more.” Applying the modifier “harmful” to “content” is something done mainly by media regulators, giant corporations like Facebook, and the social justice left. Those of us who still care about free speech—and I think that’s most of us—find the phrase not a little chilling.

Let’s be reasonable, though. Sure, on the one hand, we can agree that groups using social media to organize dangerously violent terrorism, to share child pornography, or to engage in other literally harmful and illegal activity should be shut down. And few people would have an issue with Facebook removing “hate speech” in the sense of the KKK, Stormfront, and other openly and viciously racist outfits. That sort of thing was routinely ousted from more polite areas of the Internet long ago, and relegated to the backwaters. That’s OK with me. Reasonable and intellectually tolerant moderation is nothing new.

On the other hand, while all of that can perhaps be called “harmful content,” the problem is how vague the phrase is. How far beyond such relatively uncontroversial categories of “harmful” content might it extend? It does a tiny bit of harm if someone tells a small lie; is that “harmful content”? Who knows? What if someone shares a conservative meme? That’s sure to seem harmful to a large minority of the population. Is that a target? Why not progressive memes, then? Tech thought leaders like Kara Swisher would ban Ben Shapiro from YouTube, if she could; no doubt she finds Shapiro deeply harmful. Is he fair game? How about “hateful” atheist criticisms of Christianity—surely that’s OK? But how about similarly “hateful” atheist criticisms of Islam? Is the one, but not the other, “harmful content”?

This isn’t just a throwaway rhetorical point. It’s deeply important to think this through and get it right if we’re going to use a loaded phrase like “harmful content” seriously and unironically, especially if there is policymaking involved.

The problem is that the sorts of people who use phrases like “harmful content” constantly dodge these important questions. We can’t trust them. We don’t know how far they would go, if given a chance. Indeed, anyone with much experience debating can recognize instantly that the reason someone would use this sort of squishy phraseology is precisely because it is vague. Its vagueness enables the motte-and-bailey strategy: there is an easily defended “motte” (tower keep) of literally harmful, illegal speech on the one hand, but the partisans using this strategy really want to do their fighting in the “bailey” (courtyard), which is riskier to hold but offers potential gains. Calling both “harmful content” enables them to dishonestly advance repressive policies under a false cover.

“Hate speech” functions in a similar way. Here the motte is appallingly, strongly, openly bigoted speech, which virtually everyone would agree is awful. But we’ve heard more and more about hate speech in recent years because of the speech in the bailey that is under attack: traditional conservative and libertarian positions and speakers that infuriate progressives. Radicals call them “racists” and their speech “hate speech,” but without any substantiation.

It immediately raises a red flag when one of the most powerful men in the world blithely uses such phraseology without so much as a nod to its vagueness. Indeed, it is unacceptably vague.

Concern #2: Elections integrity

The reason we are supposed to be concerned about “elections integrity,” as one has heard ad nauseam from mainstream media sources in the last couple of years, is that Russia caused Trump to be elected by manipulating social media. This always struck me as a bizarre claim. It is a widely accepted fact that some Russians thought it was a good use of a few million dollars to inject even more noise (not all of it in Trump’s favor) into the 2016 election by starting political groups and spreading political memes. I never found this particularly alarming, because I know how the Internet works: everybody is trying to persuade everybody, and a few million dollars from cash-strapped Russians is quite obviously no more than shouting into the wind. What is the serious, fair-minded case that it even could have had any effect on the election? Are they so diabolically effective at election propaganda that, on a small budget, they can actually throw an election one way or the other? And if so, don’t you think that people with similarly effective know-how would be on the payroll of the two most powerful political parties in the world?

Concern #3: Privacy

As to privacy—one of my hobby horses of late—Zuckerberg’s concern is mainly one of self-preservation. After all, this is the guy who admitted that he called you and me “dumb f–ks” for trusting him with so much of our personal information. This is a guy who has built his business by selling your privacy to the highest bidder, without proposing any new business model. (Maybe they can make enough through kickbacks from the NSA, which must appreciate how Facebook acts as an unencrypted mass surveillance arm.)

Mark Zuckerberg has absolutely no credibility on this issue, even when describing his company’s own plans.

He came out last month with what he doubtless wanted to appear to be a “come-to-Jesus moment” about privacy, saying that Facebook will develop the ultimate privacy app: secret, secured private chatting! Oh, joy! Just what I was missing (um?) and always wanted! But even that little bit (which is a very little bit) was too much to hope for: he said that maybe Facebook wouldn’t allow total, strong, end-to-end encryption, because that would mean they couldn’t “work with law enforcement.”
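To be clear about what is at stake in that hedge: with genuine end-to-end encryption, only the two people conversing hold the keys, so the platform relaying their messages cannot read them, whether for law enforcement, for advertisers, or for anyone else. Here is a minimal sketch of the idea using the PyNaCl library; it is purely illustrative and of course not Facebook’s actual implementation:

```python
# A sketch of end-to-end encryption using PyNaCl (pip install pynacl).
# Illustrative only; this is not Facebook's implementation. The point:
# a server relaying messages never holds a private key, so it cannot
# decrypt what it relays, whether for law enforcement or anyone else.
from nacl.public import PrivateKey, Box

# Each user generates a keypair on their own device.
# Private keys never leave the device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts for Bob with her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"Meet at noon.")

# The platform sees and stores only ciphertext, which is noise to it.
# Bob decrypts on his device with his private key and Alice's public key.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"Meet at noon."
```

If the server never holds a private key, there is nothing for it to hand over. That is precisely the property Zuckerberg is hedging on when he worries about being able to “work with law enforcement.”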

The fact that, as we’ll see, he wants the government to set privacy rules means that he still doesn’t care about your privacy, for all his protestations.

Zuckerberg’s declared motives are dodgy-to-laughable. But given his recommendation—that the government start systematically regulating the Internet—you shouldn’t have expected anything different.

2. Mark Zuckerberg wants the government to censor you, so he doesn’t have to.

Zuckerberg wants to regulate the Internet

In his previous missive, Zuckerberg gave some lame, half-hearted ideas about what Facebook itself would do to shore up Facebook’s poor reputation for information privacy and security. Not so this time. This time, he wants government to take action: “I believe we need a more active role for governments and regulators.” But remember, American law strives for fairness, so these wouldn’t be special regulations just for Facebook. They would be regulations for the entire Internet.

“From what I’ve learned,” Zuckerberg declares, “I believe we need new regulation in four areas: harmful content, election integrity, privacy and data portability.”

When Zuckerberg calls for regulation of the Internet, he doesn’t discuss hardware—servers and routers and fiber-optic cables, etc. He means content on the Internet. When it comes to “harmful content and election integrity,” he clearly means some harmful and spurious content that has appeared on, e.g., Facebook. When he talks about “privacy and data portability,” he means the privacy and portability of your content.

So let’s not mince words: to regulate the Internet in these four areas is tantamount to regulating content, i.e., expression of ideas. That suggests, of course, that we should be on our guard against First Amendment violations. It is one thing for Facebook to remove (just for example) videos from conservative commentators like black female Trump supporters Diamond and Silk, which Facebook moderators called “unsafe.” It’s quite another thing for the federal government to do such a thing.

Zuckerberg wants actual government censorship

Now, before you accuse me of misrepresenting Zuckerberg, look at what his article says. It says, “I believe we need a more active role for governments and regulators,” and in “four areas” in particular. The first-listed area is “harmful content.” So Zuckerberg isn’t saying, here, that it is Facebook that needs to shore up its defenses against harmful content. Rather, he is saying, here, that governments and regulators need to take action on harmful content. “That means deciding what counts as terrorist propaganda, hate speech and more.” And more.

He even brags that Facebook is “working with governments, including French officials, on ensuring the effectiveness of content review systems.” Oh, no doubt government officials will be only too happy to “ensure” that “content review systems” are “effective.”

Now, in the United States, terrorist propaganda is already arguably against the law, although some regret that free speech concerns are keeping us from going far enough. Even there, we are right to move slowly and carefully, because a too-broad definition of “terrorist propaganda” might well put principled, honest, and nonviolent left- and right-wing opinionizing in the crosshairs of politically-motivated prosecutors.

But “deciding what counts as…hate speech” is a matter for U.S. law? Perhaps Zuckerberg should have finished his degree at Harvard, because he seems not to have learned that hate speech is unregulated under U.S. law, thanks to a little thing called the First Amendment to the U.S. Constitution. As recently as 2017, the Supreme Court unanimously struck down a “disparagement clause” in trademark law which had said that trademarks may not “disparage…or bring…into contemp[t] or disrepute” any “persons, living or dead.” This is widely regarded as demonstrating that there is no hate speech exception to the First Amendment. As the opinion says,

Speech that demeans on the basis of race, ethnicity, gender, religion, age, disability, or any other similar ground is hateful; but the proudest boast of our free speech jurisprudence is that we protect the freedom to express “the thought that we hate.” 

The trouble with the phrase “hate speech” lies in both the ambiguity and the vagueness of the word “hate” itself. “Hate speech” in its core sense (this is the motte) is speech that is motivated by the speaker’s own bigoted hatred, but in an ancillary sense (this is the bailey), it means speech that we hate, because in our possibly incorrect opinion we think it is motivated by bigotry (but maybe it isn’t). The phrase “hate speech” is also vague and useless because hate comes in degrees, with shifting objects. If I am irritated by Albanians and very mildly diss them, am I guilty of hate speech? Maybe. Jews? Almost certainly. What about white male southerners? Well, what’s the answer there? And what if I really strongly hate a group that it is popular to hate, e.g., rapists?

There’s much more to be said about this phrase, but here’s the point. If government and regulators took Zuckerberg’s call for hate speech legislation to heart, what rules would they use? Wouldn’t they, quite naturally, shift according to political and religious sentiments? Wouldn’t such regulations become a dangerous political football? Would there be any way to ensure it applies fairly across groups—bearing in mind that there is also a Fourteenth Amendment that legally requires such fairness? Surely we don’t want the U.S. legal system subject to the same sort of spectacle that besets Canada and the U.K., in which people are prosecuted for criticizing some groups, while very similar criticism of other, unprotected groups goes unpunished?

But that, presumably, is precisely what Zuckerberg wants to happen. He doesn’t want to be responsible for shutting down the likes of Diamond and Silk, or Ben Shapiro. That, he has discovered, is an extremely unpopular move; but he’s deeply concerned about hate speech; so he would much rather the government do it.

If you want to say I’m not being fair to Zuckerberg or to those who want hate speech laws in the U.S., that of course you wouldn’t dream of shutting down mainstream conservatives like this, I point you back to the motte and bailey. We, staunch defenders of free speech, can’t trust you. We know about motte and bailey tactics. We know that, if not you, then plenty of your left-wing allies in government and media—who knows, maybe Kara Swisher—would advocate for government shutting down Ben Shapiro. That would be a win. The strategy is clear: find the edgiest thing he has said, label it “hate speech,” and use it to argue that he poses a danger to others on the platform, so he should be deplatformed. Or just make an example of a few others like him. That might be enough for the much-desired chilling effect.

Even if you were to come out with an admirably clear and limited definition of “hate speech,” which does not include mainstream conservatives and which would include some “hateful,” extreme left-wing speech, that wouldn’t help much. If the government adopted such “reasonable” regulations, it would be cold comfort. Once the cow has left the barn, once any hate speech law is passed, it’s all too easy for someone to make subtle redefinitions of key terms to allow for viewpoint censorship. Then it’s only a matter of time.

It’s sad that it has come to this—that one of the most powerful Americans in the world suggests that we use the awesome power of law and government to regulate speech, to shut down “hate speech,” a fundamentally obscure weasel word that can, ultimately, be used to shut down any speech we dislike—which after all is why the word is used. It’s sad not only that this is what he has suggested, but that I have to point it out, and that it seems transgressive to, well, defend free speech. But very well then, I’ll be transgressive; I’d say that those who agree with me now have an obligation to be transgressive in just this way.

We can only hope that, with Facebook executives heading for the exits and Facebook widely criticized, Zuckerberg’s entirely wrongheaded call for (more) censorship will be ignored by federal and state governments. Don’t count on it, though.

But maybe censorship should be privatized

Facebook is also, Zuckerberg says, “creating an independent body so people can appeal our decisions.” This is probably a legal ploy to avoid taking responsibility for censorship decisions, since taking such responsibility might make it possible to regulate Facebook as a publisher, not just a platform. Of course, if Section 230 of the Communications Decency Act were replaced by some new regulatory framework, then Facebook might not have to give up control, because under the new framework, viewpoint censorship might not make them into publishers.

Of course, whether in the hands of a super-powerful central committee such as Zuckerberg is building, a giant corporation, or the government, we can expect censorship decisions to be highly politicized, to create an elite of censors and rank-and-file thought police to keep us plebs in line. Just imagine if all of the many conservative pages and individuals temporarily blocked or permanently banned by Facebook had to satisfy some third-party tribunal.

Zuckerberg writes:

“One idea is for third-party bodies [i.e., not just one for Facebook] to set standards governing the distribution of harmful content and measure companies against those standards. Regulation could set baselines for what’s prohibited and require companies to build systems for keeping harmful content to a bare minimum.

“Facebook already publishes transparency reports on how effectively we’re removing harmful content. I believe every major Internet service should do this quarterly, because it’s just as important as financial reporting. Once we understand the prevalence of harmful content, we can see which companies are improving and where we should set the baselines.”

There’s a word for such “third-party bodies”: censors.

The wording is stunning. He’s concerned about “the distribution” of content and wants companies “measure[d]” against some “standards.” He wants content he disapproves of not just blocked, but kept to a “bare minimum.” He wants to be “effective” in “removing harmful content.” He really wants to “understand the prevalence of harmful content.”

This is not the language that someone who genuinely cares about “the freedom for people to express themselves” would use.

3. The rest of the document

I’m going to cover the rest of the document much more briefly, because it’s less important.

Zuckerberg favors regulations to create “common standards for verifying political actors,” i.e., if you want to engage in political activity, you’ll have to register with Facebook. This is all very vague, though. What behavior, exactly, is going to be caught in the net being woven here? Zuckerberg worries that “divisive political issues” are the target of “attempted interference.” Well, yes—well spotted there, political issues sure can be divisive! But it isn’t their divisiveness that Facebook or other platforms should try to regulate; it is the “interference” by foreign government actors. What that means precisely, I really wonder.

Zuckerberg’s third point is that we need a “globally harmonized framework” for “effective privacy and data protection.” Well, that’s music to my ears. But it’s certainly rich, the very notion that the world’s biggest violator of privacy, indeed the guy whose violations are perhaps the single biggest cause of widespread concern about privacy, wants privacy rights protected.

He wants privacy rights protected the way he wants free speech protected. I wouldn’t believe him.

Zuckerberg’s final point is another that you might think would make me happy: “regulation should guarantee the principle of data portability.”

Well. No. Code should guarantee data portability. Regulation shouldn’t guarantee any such thing. I don’t trust governments, in the pockets of “experts” in the pay of giant corporations, to settle the rules according to which data is “portable.” They might, just for instance, write the rules in such a way that gives governments a back door into what should be entirely private data.

Beware social media giants bearing gifts.

And portability, while nice, is not the point. Of course Zuckerberg is OK with the portability of data, i.e., allowing people to more easily move it from one vendor to another. But that’s a technical detail of convenience. What matters, rather, is whether I own my data and serve it myself to my subscribers, according to rules that I and they mutually agree on.
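To make the contrast concrete, here is a minimal sketch of what ownership, as opposed to mere portability, could look like: your posts live on a machine you control, served as a plain feed that subscribers fetch directly from you, on terms you set. All the specifics below (the post data, the endpoint name, the port) are hypothetical, chosen only to illustrate the model:

```python
# Hypothetical sketch of data ownership (not mere portability): your posts
# live on a machine you control and are served as a plain JSON feed that
# subscribers poll directly. No platform sits between you and your readers.
# All names, data, and the port below are made up for illustration.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Your data, kept locally under your own control.
MY_POSTS = [
    {"id": 1, "text": "I own this post.", "published": "2019-04-08"},
]

class FeedHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the feed on terms you choose; you could add auth here.
        if self.path == "/feed.json":
            body = json.dumps({"author": "me", "posts": MY_POSTS}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # Subscribers fetch http://your-host:8000/feed.json directly from you.
    HTTPServer(("", 8000), FeedHandler).serve_forever()
```

Served this way, portability comes for free: the feed is just a file on a machine you already control. That is why ownership, not portability, is the real issue.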

But that is something that Zuckerberg specifically can’t agree to, because he’s already told you that he wants “hate speech and more” to be regulated. By the government or by third-party censors.

You can’t have it both ways, Zuckerberg. Which is it going to be: data ownership that protects unfettered free speech, or censorship that ultimately forbids data ownership?



Please do dive in (politely). I want your reactions!
