Zuckerberg Is Wrong: Don't Regulate Our Content

Last Sunday, Mark Zuckerberg made another Facebook strategy post. (This is his second major policy post in as many months. I responded to his March 6 missive as well.) Unsurprisingly, it was a disaster.

I want to shake him by his lapels and say, "Mark! Mark! Wrong way! Stop going that way! We don't want more snooping and regulation by giant, superpowerful organizations like yours and the U.S. government! We want less!"

He says he has spent two years focused on "issues like harmful content, elections integrity and privacy." If these have been the focuses of someone who is making motions to regulate the Internet, it's a good idea to stop and think a bit about each one. They are a mixed bag, at best.

1. Zuckerberg's concerns

Concern #1: "Harmful content"

Zuckerberg's glib gloss on "harmful content" is "terrorist propaganda, hate speech and more." Applying the modifier "harmful" to "content" is something done mainly by media regulators, giant corporations like Facebook, and the social justice left. Those of us who still care about free speech—and I think that's most of us—find the phrase not a little chilling.

Let's be reasonable, though. Sure, on the one hand, we can agree that groups using social media to organize dangerously violent terrorism, or child pornography, or other literally harmful and illegal activity, for example, should be shut down. And few people would have an issue with Facebook removing "hate speech" in the sense of the KKK, Stormfront, and other openly and viciously racist outfits. That sort of thing was routinely ousted from more polite areas of the Internet long ago, and relegated to the backwaters. That's OK with me. Reasonable and intellectually tolerant moderation is nothing new.

On the other hand, while all of that can perhaps be called "harmful content," the problem is how vague the phrase is. How far beyond such categories of more uncontroversially "harmful" content might it extend? It does a tiny bit of harm if someone tells a small lie; is that "harmful content"? Who knows? What if someone shares a conservative meme? That's sure to seem harmful to a large minority of the population. Is that a target? Why not progressive memes, then? Tech thought leaders like Kara Swisher would ban Ben Shapiro from YouTube, if she could; no doubt she finds Shapiro deeply harmful. Is he fair game? How about "hateful" atheist criticisms of Christianity—surely that's OK? But how about similarly "hateful" atheist criticisms of Islam? Is the one, but not the other, "harmful content"?

This isn't just a throwaway rhetorical point. It's deeply important to think about and get right, if we're going to use such loaded phrases as "harmful content" seriously, unironically, and especially if there is policymaking involved.

The problem is that the sorts of people who use phrases like "harmful content" constantly dodge these important questions. We can't trust them. We don't know how far they would go, if given a chance. Indeed, anyone with much experience debating can recognize instantly that the reason someone would use this sort of squishy phraseology is precisely because it is vague. Its vagueness enables the motte-and-bailey strategy: there's an easily-defended "motte" (tower keep) of literally harmful, illegal speech, on the one hand, but the partisans using this strategy really want to do their fighting in the "bailey" (courtyard) which is riskier but offers potential gains. Calling them both "harmful content" enables them to dishonestly advance repressive policies under a false cover.

"Hate speech" functions in a similar way. Here the motte is appallingly, strongly, openly bigoted speech, which virtually everyone would agree is awful. But we've heard more and more about hate speech in recent years because of the speech in the bailey that is under attack: traditional conservative and libertarian positions and speakers that enfuriate progressives. Radicals call them "racists" and their speech "hate speech," but without any substantiation.

It immediately raises a red flag when one of the most powerful men in the world blithely uses such phraseology without so much as a nod to its vagueness. Indeed, it is unacceptably vague.

Concern #2: Elections integrity

The reason we are supposed to be concerned about "elections integrity," as one has heard ad nauseam from mainstream media sources over the last couple of years, is that Russia caused Trump to be elected by manipulating social media. This always struck me as a bizarre claim. It is a widely-accepted fact that some Russians thought it was a good use of a few million dollars to inject even more noise (not all of it in Trump's favor) into the 2016 election by starting political groups and spreading political memes. I never found this particularly alarming, because I know how the Internet works: everybody is trying to persuade everybody, and a few million dollars from cash-strapped Russians is really obviously no more than shouting in the wind. What is the serious, fair-minded case that it even could have had any effect on the election? Are they so diabolically effective at election propaganda that, with a small budget, they can actually throw an election one way or another? And if so, don't you think that people with similar magically effective knowhow would be on the payroll of the two most powerful political parties in the world?

Concern #3: Privacy

As to privacy—one of my hobby horses of late—Zuckerberg's concern is mainly one of self-preservation. After all, this is the guy who admitted that he called you and me, who trusted him with so much of our personal information, "dumb f--ks" for doing so. This is a guy who has built his business by selling your privacy to the highest bidder, without proposing any new business model. (Maybe they can make enough through kickbacks from the NSA, which must appreciate how Facebook acts as an unencrypted mass surveillance arm.)

Mark Zuckerberg has absolutely no credibility on this issue, even when describing his company's own plans.

He came out last month with what he doubtless wanted to appear to be a "come-to-Jesus moment" about privacy, saying that Facebook will develop the ultimate privacy app: secret, secured private chatting! Oh, joy! Just what I was missing (um?) and always wanted! But even that little bit (which is a very little bit) was too much to hope for: he said that maybe Facebook wouldn't allow total, strong, end-to-end encryption, because that would mean they couldn't "work with law enforcement."

The fact, as we'll see, that he wants the government to set privacy rules means that he still doesn't care about your privacy, for all his protestations.

Zuckerberg's declared motives are dodgy-to-laughable. But given his recommendation—that the government start systematically regulating the Internet—you shouldn't have expected anything different.

2. Mark Zuckerberg wants the government to censor you, so he doesn't have to.

Zuckerberg wants to regulate the Internet

In his previous missive, Zuckerberg gave some lame, half-hearted ideas about what Facebook itself would do to shore up Facebook's poor reputation for information privacy and security. Not so this time. This time, he wants government to take action: "I believe we need a more active role for governments and regulators." But remember, American law strives for fairness, so these wouldn't be special regulations just for Facebook. They would be regulations for the entire Internet.

"From what I've learned," Zuckerberg declares, "I believe we need new regulation in four areas: harmful content, election integrity, privacy and data portability."

When Zuckerberg calls for regulation of the Internet, he doesn't discuss hardware—servers and routers and fiber-optic cables, etc. He means content on the Internet. When it comes to "harmful content and election integrity," he clearly means some harmful and spurious content that has appeared on, e.g., Facebook. When he talks about "privacy and data portability," he means the privacy and portability of your content.

So let's not mince words: to regulate the Internet in these four areas is tantamount to regulating content, i.e., expression of ideas. That suggests, of course, that we should be on our guard against First Amendment violations. It is one thing for Facebook to remove (just for example) videos from conservative commentators like black female Trump supporters Diamond and Silk, which Facebook moderators called "unsafe." It's quite another thing for the federal government to do such a thing.

Zuckerberg wants actual government censorship

Now, before you accuse me of misrepresenting Zuckerberg, look at what his article says. It says, "I believe we need a more active role for governments and regulators," and in "four areas" in particular. The first-listed area is "harmful content." So Zuckerberg isn't saying, here, that it is Facebook that needs to shore up its defenses against harmful content. Rather, he is saying, here, that governments and regulators need to take action on harmful content. "That means deciding what counts as terrorist propaganda, hate speech and more." And more.

He even brags that Facebook is "working with governments, including French officials, on ensuring the effectiveness of content review systems." Oh, no doubt government officials will be only too happy to "ensure" that "content review systems" are "effective."

Now, in the United States, terrorist propaganda is already arguably against the law, although some regret that free speech concerns are keeping us from going far enough. Even there, we are right to move slowly and carefully, because a too-broad definition of "terrorist propaganda" might well put principled, honest, and nonviolent left- and right-wing opinionizing in the crosshairs of politically-motivated prosecutors.

But "deciding what counts as...hate speech" is a matter for U.S. law? Perhaps Zuckerberg should have finished his degree at Harvard, because he seems not to have learned that hate speech is unregulated under U.S. law, because of a little thing called the First Amendment to the U.S. Constitution. As recently as 2017, the Supreme Court unanimously struck down a "disparagement clause" in patent law which had said that trademarks may not "disparage...or bring...into contemp[t] or disrepute" any "persons, living or dead." This is widely regarded as demonstrating that there is no hate speech exception to the First Amendment. As the opinion says,

Speech that demeans on the basis of race, ethnicity, gender, religion, age, disability, or any other similar ground is hateful; but the proudest boast of our free speech jurisprudence is that we protect the freedom to express “the thought that we hate.” 

The trouble with the phrase "hate speech" lies in both the ambiguity and the vagueness of the word "hate" itself. "Hate speech" in its core sense (this is the motte) is speech that is motivated by the speaker's own bigoted hatred, but in an ancillary sense (this is the bailey), it means speech that we hate, because in our possibly incorrect opinion we think it is motivated by bigotry (but maybe it isn't). The phrase "hate speech" is also vague and useless because hate comes in degrees, with shifting objects. If I am irritated by Albanians and very mildly diss them, am I guilty of hate speech? Maybe. Jews? Almost certainly. What about white male southerners? Well, what's the answer there? And what if I really strongly hate a group that it is popular to hate, e.g., rapists?

There's much more to be said about this phrase, but here's the point. If government and regulators took Zuckerberg's call for hate speech legislation to heart, what rules would they use? Wouldn't they, quite naturally, shift according to political and religious sentiments? Wouldn't such regulations become a dangerous political football? Would there be any way to ensure it applies fairly across groups—bearing in mind that there is also a Fourteenth Amendment that legally requires such fairness? Surely we don't want the U.S. legal system subject to the same sort of spectacle that besets Canada and the U.K., in which people are prosecuted for criticizing some groups, while very similar criticism of other, unprotected groups goes unpunished?

But precisely that is, presumably, what Zuckerberg wants to happen. He doesn't want to be responsible for shutting down the likes of Diamond and Silk, or Ben Shapiro. That, he has discovered, is an extremely unpopular move; but he's deeply concerned about hate speech; so he would much rather the government do it.

If you want to say I'm not being fair to Zuckerberg or to those who want hate speech laws in the U.S., that of course you wouldn't dream of shutting down mainstream conservatives like this, I point you back to the motte and bailey. We, staunch defenders of free speech, can't trust you. We know about motte and bailey tactics. We know that, if not you, then plenty of your left-wing allies in government and media—who knows, maybe Kara Swisher—would advocate for government shutting down Ben Shapiro. That would be a win. The strategy is clear: find the edgiest thing he has said, label it "hate speech," and use it to argue that he poses a danger to others on the platform, so he should be deplatformed. Or just make an example of a few others like him. That might be enough for the much-desired chilling effect.

Even if you were to come out with an admirably clear and limited definition of "hate speech," which does not include mainstream conservatives and which would include some "hateful," extreme left-wing speech, that wouldn't help much. If the government adopted such "reasonable" regulations, it would be cold comfort. Once the cow has left the barn, once any hate speech law is passed, it's all too easy for someone to make subtle redefinitions of key terms to allow for viewpoint censorship. Then it's only a matter of time.

It's sad that it has come to this—that one of the most powerful Americans in the world suggests that we use the awesome power of law and government to regulate speech, to shut down "hate speech," a fundamentally obscure weasel word that can, ultimately, be used to shut down any speech we dislike—which after all is why the word is used. It's sad not only that this is what he has suggested, but that I have to point it out, and that it seems transgressive to, well, defend free speech. But very well then, I'll be transgressive; I'd say that those who agree with me now have an obligation to be transgressive in just this way.

We can only hope that, with Facebook executives heading for the exits and Facebook widely criticized, Zuckerberg's entirely wrongheaded call for (more) censorship will be ignored by federal and state governments. Don't count on it, though.

But maybe censorship should be privatized

Facebook is also, Zuckerberg says, "creating an independent body so people can appeal our decisions." This is probably a legal ploy to avoid taking responsibility for censorship decisions, since taking responsibility could make it possible to regulate Facebook as a publisher, not just a platform. Of course, if Section 230 (the law that currently shields platforms from publisher liability) were replaced by some new regulatory framework, then Facebook might not have to give up control, because under the new framework, viewpoint censorship might not make them into publishers.

Of course, whether in the hands of a super-powerful central committee such as Zuckerberg is building, a giant corporation, or the government, we can expect censorship decisions to be highly politicized, to create an elite of censors and rank-and-file thought police to keep us plebs in line. Just imagine if all of the many conservative pages and individuals temporarily blocked or permanently banned by Facebook had to satisfy some third party tribunal.

Zuckerberg writes:

"One idea is for third-party bodies [i.e., not just one for Facebook] to set standards governing the distribution of harmful content and measure companies against those standards. Regulation could set baselines for what's prohibited and require companies to build systems for keeping harmful content to a bare minimum.

"Facebook already publishes transparency reports on how effectively we're removing harmful content. I believe every major Internet service should do this quarterly, because it's just as important as financial reporting. Once we understand the prevalence of harmful content, we can see which companies are improving and where we should set the baselines."

There's a word for such "third-party bodies": censors.

The wording is stunning. He's concerned about "the distribution" of content and wants companies "measured" against some "standards." He wants content he disapproves of not just blocked, but kept to a "bare minimum." He wants to be "effective" in "removing harmful content." He really wants to "understand the prevalence of harmful content."

This is not the language that someone who genuinely cares about "the freedom for people to express themselves" would use.

3. The rest of the document

I'm going to cover the rest of the document much more briefly, because it's less important.

Zuckerberg favors regulations to create "common standards for verifying political actors," i.e., if you want to engage in political activity, you'll have to register with Facebook. This is all very vague, though. What behavior, exactly, is going to be caught in the net that's being woven here? Zuckerberg worries that "divisive political issues" are the target of "attempted interference." Well, yes—well spotted there, political issues sure can be divisive! But it isn't their divisiveness that Facebook or other platforms should try to regulate; it is the "interference" by foreign government actors. What that means precisely, I really wonder.

Zuckerberg's third point is that we need a "globally harmonized framework" for "effective privacy and data protection." Well, that's music to my ears. But it's certainly rich, the very notion that the world's biggest violator of privacy, indeed the guy whose violations are perhaps the single biggest cause of widespread concern about privacy, wants privacy rights protected.

He wants privacy rights protected the way he wants free speech protected. I wouldn't believe him.

Zuckerberg's final point is another that you might think would make me happy: "regulation should guarantee the principle of data portability."

Well. No. Code should guarantee data portability. Regulation shouldn't guarantee any such thing. I don't trust governments, in the pockets of "experts" in the pay of giant corporations, to settle the rules according to which data is "portable." They might, just for instance, write the rules in such a way that gives governments a back door into what should be entirely private data.

Beware social media giants bearing gifts.

And portability, while nice, is not the point. Of course Zuckerberg is OK with the portability of data, i.e., allowing people to more easily move it from one vendor to another. But that's a technical detail of convenience. What matters, rather, is whether I own my data and serve it myself to my subscribers, according to rules that I and they mutually agree on.

But that is something that Zuckerberg specifically can't agree to, because he's already told you that he wants "hate speech and more" to be regulated. By the government or by third party censors.

You can't have it both ways, Zuckerberg. Which is it going to be: data ownership that protects unfettered free speech, or censorship that ultimately forbids data ownership?


Is Western civilization collapsing?

A perennial topic for me (and many of us) is the notion that there is a deep malaise in Western civilization. There are, it seems to me, three main camps on the question, "Is Western civilization collapsing?"

1. The conservative position. "Yes. And it's a horrible thing. For one thing, elites have basically stopped reproducing. They're inviting people from foreign cultures into their countries, and they're reproducing faster than their elites. The result will be an inevitable cultural replacement after a few generations, although probably not before we go through a period of bloody civil wars. And Western traditions are not being passed down. We are becoming less Christian every year. Our universities are teaching less and less of the classics of Western civilization. Though they spend longer in school, our graduates are more ignorant of their cultural roots. We have no desire to create beauty any longer. We have nothing, really, to live for. Our heart is simply not in it any longer; we're in the death throes of this civilization."

2. The postmodern position. "Are you really even asking this question? So you think Western civilization is 'collapsing'? Well, maybe it is. If so, good! But if we're going to be honest with ourselves, we should recognize that there is much about Western civilization that deserves to die, and the sooner the better. What will replace it? Who knows? Who cares? But you must be a racist Islamophobe if you think it will be Islamic. But probably, you're just an idiot because there is no reason to think Western civilization is 'collapsing.' It might be, however, transforming, and into something better, something more tolerant, open, and multi-cultural."

3. The optimistic position. "Oh, not this again. Haven't you read Steven Pinker's Enlightenment Now? Look, almost all the metrics look better than they've ever been. People always think we're on the brink of disaster even when things are awesome. The world is better educated than it's ever been. People in third world countries are moving into the modern world. Look at the Internet! Look at technology! Look at all the entrepreneurship and discovery that is happening every day! How on earth can you fail to recognize that, far from being in our death throes, we are ramping up a new global civilization with, perhaps, some new values, but which enjoys radically transformative changes for the better every year."


Here are a few notes to put these into perspective. The conservative position is a position about the health of traditional Western values and culture. It takes the view that these values and culture should be preserved, that they aren't being preserved, and that Westerners therefore are living increasingly meaningless lives.

The postmodern position is primarily a reaction to the conservative position. It denies that there is a problem worth solving because Western values and culture are better off dead and buried.

The optimistic position certainly appears to be about another topic altogether, i.e., not about the health of traditional Western values and culture, although it pretends to be responding to conservative worry. It equates "civilization" not so much with Western traditions and values, precisely, as with the sort of globalist system of capitalist economies and the largely Western-derived education and culture that has sprouted and flowered in the 20th and especially the 21st centuries. You can see it in most of the big cities of the world. The success of this civilization is not to be evaluated (on this view) by some subjective measures of morality, or religion of course, or using sociological metrics that go proxy for these, but instead by more objective measures of well-being such as GDP, literacy rates, and longevity rates.


These positions interact in interesting ways.

  • A very strong case can be made that it is precisely certain Western traditions (democracy, industrialism, free enterprise, science, etc.) that have enabled the global success celebrated by the optimistic position.
  • The postmodern position, too, is rooted in some Western values (such as cultural tolerance and Christian charity).
  • And the optimistic position is widely (and in my opinion rightly) regarded as too optimistic; almost all of us detect some manner of deep moral malaise in Western civilization (such as dangerous populist racism, on the one hand, or the dangerous weakening of Christian values, on the other), even if we don't necessarily think of it as threatening civilization itself, and the happy talk does not do this justice.
  • And the postmodern position is surely right to suggest that Western civilization has undergone and is likely to continue to undergo radical transformations that have made the Western roots of American and European societies look positively foreign. But does that mean the collapse of civilization, or its transformation?
  • And if it is transforming and not collapsing, is that unequivocally a good thing?
  • Are important values, that conservatives perhaps talk about more than progressives, being lost? Put aside your political differences and ask yourself: might that be important? And what consequences might that have for the new global order?
  • Is it true that there must be some transcendent purpose and deep values that undergird our lives, and that without them (as conservatives suggest) civilization faces not merely transformation but wholesale replacement by some other civilization that does celebrate some transcendent purpose? And if that's true, what values would replace Western ones?
  • Could something like progressivism itself constitute a global value system?
  • We already know that any such progressive value system largely conflicts with traditional Christianity and some other Western values, but doesn't it also conflict with Islam?

I don't suggest any conclusion now. I just thought that contextualizing the debate would be interesting.


How I replaced Dropbox

Updated April 2 at bottom.

My main beef with Dropbox is that it's not secure, not adequately encrypted, and there's been a little too much indication that Dropbox is spying on user data.

Ever since I decided to lock down my cyber-life, I had Dropbox in my sights. It was going to be a pain to replace it, I thought, so it took a while before I got around to doing so. I finally did do so today.

The longest step of this process was deciding what I wanted to do. At first, I thought I'd set up my own lightweight cloud server using my desktop, which would sync files on all my devices, something like NextCloud. A great bonus is that this makes it particularly easy to sync things like your address book and passwords. This doesn't seem like a bad idea and is now my fallback. But I ultimately decided to pass because (a) setup might end up being very bothersome, (b) it might eat up desktop resources, and (c) I'd have to keep my computer on all the time, which seems suboptimal.

All of the problems with installing my own NextCloud—bothersome setup, resource constraints, and an always-on system—are taken care of by getting my own server or, less ambitiously, what is called a NAS, or Network-Attached Storage system. I spent several hours yesterday researching all about NASes, and came close to getting either a QNAP or a Synology NAS, because they're so frickin' cool. I mean, jeez, it's actually a fully-functioning standalone web server with a zillion apps (especially Synology), and sure, you can use it to sync your files. But the more I thought about it, the more I thought, "This is a lot of work (and yet another giant attack surface for hackers), when all I really want is a Dropbox replacement." If I were just hacking and exploring, I would have gotten a NAS in a heartbeat, they're so cool. But I have other things to do, so...

I also semi-seriously considered getting a zero-knowledge encryption system, like SpiderOak. The premise seems solid: your files are all saved in the cloud, but 100% encrypted, and the key needed to decrypt them is only on your machine (or in your head). SpiderOak (and many other similar services) cannot scan your files because it lacks the keys to read them. I guess my experience with being hacked, and my serious disaffection with storing data in the cloud generally, turned me off even to this. If I don't have to trust a company (as I do if, e.g., I want to use a VPN), then I'd prefer not to.
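To make the zero-knowledge premise concrete, here is a toy sketch of the architecture: the client encrypts with a key derived from a passphrase, and the cloud stores only ciphertext it cannot read. The XOR-with-SHA-256 keystream below is deliberately simplistic and is not real cryptography (services like SpiderOak use vetted ciphers such as AES); it only illustrates where the key lives.

```python
# Toy illustration of zero-knowledge storage: the key never leaves the
# client, and the "cloud" holds only ciphertext. NOT real crypto --
# a real service would use a vetted cipher like AES -- this only shows
# the architecture.
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive `length` pseudo-random bytes from `key` (hash counter mode)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """XOR the plaintext with the keystream."""
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR is its own inverse

# Client side: the key is derived from a passphrase only the user knows.
key = hashlib.sha256(b"correct horse battery staple").digest()
ciphertext = encrypt(key, b"my private diary entry")

# The server stores `ciphertext` and, lacking `key`, cannot read it.
assert decrypt(key, ciphertext) == b"my private diary entry"
```

The point of the design is simply that decryption can only ever happen on the client, so the vendor has nothing readable to scan or sell.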

So, how do you get cloud functionality without the cloud? With syncing apps. These use different technologies to sync your devices directly with each other, through the Internet, but not stored on the Internet, and without any one of them acting as a server to the others (so they're all peers of each other in your little device network). It turns out that there are several options available here, and I came close to going with Syncthing because it's open source (and therefore, more trustworthy) but...no iPhone app. But the next best thing is Resilio Sync, which is the rebranded (and, UPDATE: closed-source) BitTorrent Sync. Now, the fact that it uses BitTorrent technology doesn't mean your data is shared on public torrent networks. It simply makes use of the BitTorrent protocol, which is perfectly legal and legit, whatever its popular association with piracy. The beauty of the system is that, in transit through cyberspace, your data is end-to-end encrypted and moves through a decentralized, peer-to-peer network. It's hard to get more secure, or that's my understanding.

Resilio Sync is pretty easy to install if you're not using Linux. It was a bit of a pain (they could work harder on the setup, I mean really, guys) but still doable, if like me you're reasonably adept with vague Linux instructions. It didn't take longer than an hour to completely set up and test (my son did it in half the time), and then I started moving folders over, one by one, from Dropbox to my new Sync folder. This was quite satisfying, not unlike that satisfying feeling of changing my account email addresses from gmail.com to sanger.io. And because Resilio updates via your LAN directly from device to device, it syncs much faster than Dropbox. Like Linux, the slightly geekier alternative turns out to be just better, all the way around.

I got the $100 one-time deal so my family could all use it. Since this is roughly what I've been paying to Dropbox yearly for the last decade or whatever it's been, I was very happy to pay this.

How does it work? Well, once it's set up, it's just like Dropbox. Create a new file in your work folder? It's practically instantly synced to any other devices that are on, as soon as you save it. (Of course, a device does have to be on in order to sync. And your phone won't sync the file and folder contents; it will only sync the index, and then, as with the Dropbox mobile app, you can download items one by one.)
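Under the hood, any sync tool needs a way to decide which files to transfer. A common approach is to index each folder by content hashes and compare the indexes; what follows is a generic sketch of that idea, not Resilio's actual algorithm.

```python
# A minimal sketch of sync change-detection: hash every file in a folder,
# then diff two such indexes to see what to send, fetch, or reconcile.
# Generic illustration only -- not Resilio's actual implementation.
import hashlib
from pathlib import Path

def index_folder(root: Path) -> dict[str, str]:
    """Map each file's path (relative to root) to a SHA-256 of its contents."""
    idx = {}
    for p in sorted(root.rglob("*")):
        if p.is_file():
            idx[str(p.relative_to(root))] = hashlib.sha256(p.read_bytes()).hexdigest()
    return idx

def diff(local: dict[str, str], remote: dict[str, str]):
    """Return (files only here, files only there, files that differ)."""
    send = [f for f in local if f not in remote]
    fetch = [f for f in remote if f not in local]
    changed = [f for f in local if f in remote and local[f] != remote[f]]
    return send, fetch, changed
```

A real peer-to-peer tool adds timestamps or version vectors on top of this to decide which side of a `changed` file wins, which is exactly where conflicts (and conflict copies) come from.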

There is one very small change this might require to your routine. Since your files aren't in the cloud but only on other machines, before you leave one machine with files on it you might want to access elsewhere, you'll want to make sure either (a) that machine will stay on while you're away from it, or (b) you've synced before you leave while they're in close proximity (the LAN connection will make syncing faster, too).

Love it so far. Buh-bye Dropbox! Any regrets so far? Not really. While LAN syncing for me is significantly faster than Dropbox, it uses only 10% of my available LAN bandwidth, and I wasn't able to get it to go faster; I'm not sure what's up with that. I tried to fix it but didn't dare do too much, since it involved a lot of fiddly changes to settings that might later need to be undone. Your mileage may vary.

Also, they didn't make a Linux GUI other than a browser-based one, which is OK; it works well enough. They didn't even bother to create a tray icon, but they do have an API, so my 12-year-old son made one for them and I'm already using it. (Want the code, Resilio? I can set that up.)

Of course, if you haven't taken the Linux plunge, Resilio Sync is probably going to be a lot more usable for you—not that, at the end of the day, it isn't extremely usable for Linux users, too. And, as I've indicated, there are many, many other options available to you if you want to ditch Dropbox. You should consider them for yourself.


April 2 update:

I've been using Resilio Sync for the last two weeks, and my son and I have a few concerns. The first is one we knew about going in: it's not a cloud solution. Syncing works only if both devices are on. This means syncing isn't exactly "set it and forget it." You have to pay attention to whether something is syncing, and if you forget...you won't be synced. After using Dropbox for years, this turns out to be quite annoying.

This, in turn, means I have to worry more about losing files. I can back up files on my main machine, which is always a great idea (of course), but if I haven't synced because two machines haven't been on at the same time (or because I need to reboot Sync, which is also an annoyance), then I might still lose laptop files because I only back up my desktop.

Backing up is all the more important because it is possible to inadvertently delete a bunch of files from one machine...leading them to be deleted everywhere. That would be a disaster. It's like automatically deleting all your backups. Of course, the stuff might be rescuable in Trash, but do you really want to rely on Trash as a fallback solution?

To pour salt in the wound, if I really want peace of mind, I have to make sure the backup program is fantastic. I can't rely on Resilio Sync as a backup program. And the default Ubuntu backup program kind of sucks (which is surprising to me). This isn't a strike against Resilio, but it does make switching, if I'm going to switch, more urgent.
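Until I settle on a backup program I trust, even a crude timestamped-snapshot script beats nothing. A minimal sketch, with hypothetical paths (a real backup tool would add rotation, deduplication, and verification, and the snapshot folder must of course live outside the synced folder, so an accidental mass deletion can't propagate into it):

```python
import shutil
import time
from pathlib import Path

def snapshot(source, backup_root):
    """Copy the whole source tree into a fresh timestamped folder under backup_root."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = Path(backup_root) / stamp
    shutil.copytree(source, dest)  # refuses to overwrite an existing snapshot
    return dest

# e.g., snapshot("/home/me/Sync", "/mnt/backup-drive/snapshots")
```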

So it's back to the drawing board. A zero-knowledge encryption cloud solution is sounding better now, but there are two sticking points for me: (a) I don't want to have to trust an external vendor if I don't have to, and (b) I'm not confident that I know what's going on well enough to be able to say that my data is truly secure and private.

Last time, I came very close to getting a NAS, but I didn't. I'm now 90% sure I will get a NAS after all.

The reason I didn't get a NAS the first time is that it sounded like just too much trouble to set it up and maintain it, not to mention having another attack surface to lock down. But the more I think about it, the more I think it might be worth it.

After all, another rather huge advantage of a NAS is that, at least in my personal life, I wouldn't have to rely on any cloud service I don't control myself, for the whole range of things we now use different cloud services for. That means I can maintain my own synced contacts, passwords, bookmarks, etc., as well as supporting collaborative documents (a la Google Docs) I want to work on with others (such as a Declaration of Digital Independence). I might still have to rely on Google Docs (or something like it) for work, but at least my private life would be more locked down.

Any one of the latter advantages certainly wouldn't be enough to justify getting a NAS. But taken together, and combined with an always-on Dropbox alternative that I can "set and forget," it's looking better and better.

Stay tuned. I'm not done yet.

Another installment in my series on how I’m locking down my cyber-life.


How and why I got a VPN

As part of my ongoing efforts to lock down my cyber-life, I finally decided to investigate VPNs (virtual private networks) and subscribe to one, if it seemed to be a good idea.

Well, it is a good idea. So I got one, and it was pretty cheap.

What is a VPN, anyway?

A virtual private network, briefly, is a subscription service (there are free ones, but don't use a free one) that you connect to in order to mask your IP address, pretending (unsuccessfully, if you're using a mobile connection) that you're connecting to the Internet from somewhere else, while encrypting the data that passes between you and your ISP (which can mean your data is encrypted as it passes through wifi). It doesn't replace your ISP; you still need an ISP to connect to the Internet. More specifically, a VPN (typically, a for-profit company):

  1. Runs a number of servers (computers), which ideally are located all around the world, each of which connects to the Internet on your behalf.
  2. Is a service you connect to, as a data "tunnel" to the Internet. You can set up your computer or phone so that it connects to the VPN whenever you get online (or whenever you like). All your requests to the Internet, and all the responses you receive from the Internet, are routed through one or another of the VPN's nodes.
  3. Encrypts the data exchanged between its servers and your device.
  4. Typically doesn't log your traffic (but there's no way to know this for sure) or intercept your data (unless they receive a specific court order to do so in your case).
  5. Is typically a paid service; free ones exist, but, again, avoid them.
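You can check the IP-masking claim for yourself by asking a public what's-my-IP echo service before and after connecting. A sketch in Python, assuming the free api.ipify.org service is reachable (the validation helper is my own, just to keep the check honest):

```python
import re
from urllib.request import urlopen

IPV4 = re.compile(r"^\d{1,3}(\.\d{1,3}){3}$")

def looks_like_ipv4(text):
    """Loose check that a string has IPv4 shape (four dot-separated numbers)."""
    return bool(IPV4.match(text.strip()))

def public_ip():
    """Ask a public echo service which IP address the outside world sees."""
    with urlopen("https://api.ipify.org", timeout=10) as resp:
        return resp.read().decode()

# Record public_ip(), connect the VPN, then call it again: the two
# addresses should differ, and the second should belong to the VPN.
```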

Why would I want a VPN?

So, what does a VPN do? What is it good for? What are the benefits? Why would you get one? Several things (cf. this useful intro):

  1. Foil the NSA, maybe. You connect to the Internet via your ISP at home, right? Well, since data you exchange with the VPN is encrypted, your ISP can't detect anything about what websites you're looking at or what information you're sending. Since mass surveillance (e.g., by the NSA) is typically done at the ISP level, this foils such surveillance. But maybe you trust all the fine, upstanding people who work for the government and don't care. Well, there are other reasons, as well:
  2. Make it harder for websites, hackers, and advertisers to spot you. When you connect to a website without a VPN, it typically logs the IP address that is accessing it, maybe info about your device, browser, etc. This can be used by the website to track you and for various nefarious purposes. When you connect with a VPN, websites log data from the VPN's server, which says nothing about you. This protects your information privacy and security (which you should care about!).
  3. Use airport, hotel, and restaurant connections securely. If you connect to the Internet via your airport's connection, hackers can pretty easily do nasty things with your data stream. But if your data stream is completely encrypted on its way through the airport's wifi to and from the VPN, those hackers can't touch you. Take that, hackers! This is a huge advantage to me, considering how much traveling I'm doing these days.
  4. See content as if you were elsewhere. If you want to access information that is accessible only by IP addresses from a given country (such as the U.K. or the U.S.), a VPN lets you do so. You can make it look like you're from there! E.g., I can watch Brits-only content from the BBC. That's just kind of cool.
  5. More safely do P2P file sharing. If you must, and are cheap, and refuse to pay the creators of your content, you bastard.

If you don't care about privacy or security or striking a blow against mass surveillance, then you should pass. If you do care about those things, consider getting a VPN.

There's one significant disadvantage to VPNs, which makes me sad, but I'll live with it: VPNs do slow down your Internet connection, though not necessarily by much. As you know (if you know how the Internet works at all), Internet traffic bounces from node to node as it makes its way from the website (or whatever) you're accessing to your device. The VPN adds one node to that trip. As long as you connect to a VPN server located near you, the trip isn't actually lengthened by much. BestVPN.com says a VPN slows down your connection speed by 10%, but the actual amount at any given time depends on many factors. I rarely notice much of a difference, for what it's worth.
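That 10% figure is easy to sanity-check yourself: time the same download with the VPN off, then again with it on. A rough sketch (the URL is just an example, and real results vary with server, route, and time of day):

```python
import time
from urllib.request import urlopen

def average_seconds(action, tries=3):
    """Average wall-clock seconds that action() takes over several tries."""
    total = 0.0
    for _ in range(tries):
        start = time.perf_counter()
        action()
        total += time.perf_counter() - start
    return total / tries

def fetch(url):
    """Download url once and discard the body."""
    with urlopen(url, timeout=10) as resp:
        resp.read()

# With the VPN off, then again with it on:
#   off = average_seconds(lambda: fetch("https://example.com"))
#   on = average_seconds(lambda: fetch("https://example.com"))
#   print(f"slowdown: {(on - off) / off:.0%}")
```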

Update: after using it for a couple days, my VPN (which is reputedly one of the faster ones) doesn't really noticeably slow down my connection, even at the hotel. Except when I was connected to the U.K., and then the only problem was that I had to buffer a video once or twice.

What VPN did I choose?

I'm not telling. I spent some hours doing research. A name emerged. You should do the same and use your own judgment. Be careful not to subscribe to any shady VPNs; they doubtless do exist and it might be hard to figure out whether yours is one. There can be problems with the software as well. Unfortunately, some amount of trust is involved if you're not a specialist. I bore these requirements in mind:

  • Don't just look for claims that they don't keep logs; check that the claims have been verified (by consultants, courts, or police).
  • Bear in mind that many reviews might be paid for and so can't be trusted. It might be hard to tell which reviews these are.
  • Speed: compare measured speeds, not just the vendors' claims.
  • Can one determine who owns the company? Do they look legit?
  • Support for Linux.

There are other features you might be interested in, of course.

How hard was it to buy and install?

I can speak only about the one I bought and installed: it was dead simple. It was no harder to buy than any other subscription service. As for installation, I had it downloaded, installed, and working in maybe two minutes. Of course, that's just the one I bought.

Note, you don't have to install special software to use a VPN, e.g., if you're using an OS or browser that has the software built in.

There's much more to know about VPNs, which you might want to learn if you're going to get into it. This is just a rank beginner's explanation of why he got one.

This is part of the series on how I'm locking down my cyber-life.


A reply to Mark Zuckerberg's "Privacy-Focused Vision for Social Networking"

Yesterday (March 6), Facebook CEO Mark Zuckerberg outlined Facebook's new "vision and principles around building a privacy-focused messaging and social networking platform." The essential problem isn't that they need a new app; rather, they need to reform their existing one.

Rather than acknowledging the elephant in the room—that users are deeply incensed that their privacy continues to be systematically sold by Big Tech, that ongoing security issues stem from Facebook's inherent and business-critical data-collection and -sharing practices—Zuckerberg pretends that it's important that he solves, well, a different problem:

But people increasingly also want to connect privately in the digital equivalent of the living room. As I think about the future of the internet, I believe a privacy-focused communications platform will become even more important than today's open platforms. Privacy gives people the freedom to be themselves and connect more naturally, which is why we build social networks.

It is as if Zuckerberg had been reminded one too many times by his advisers that people really do care, after all, about this pesky privacy issue, which once upon a time he could say with impunity was no longer a "social norm." Yeah, so maybe he was wrong. Maybe the perennial demands for a right to privacy are not a changeable social norm, after all. Maybe people really do care about their information being controlled by themselves and not by giant corporations and authorities. Yes, Zuck, well spotted. People do care about privacy after all. But he interpreted the general sentiment in the most naive, simple-minded way, and decided that what people were missing were...private chat rooms.

Because people are really upset that they don't have private chat rooms, apparently. But never fear! Zuck is here to save the day! He'll make chat rooms, and he'll make them really, really private! (Well, not really. Not even that, as we'll see.)

Throughout the 3,200-word piece, there is no explicit acknowledgment that there might be a different way to do more open and public social networking. Nothing about standards and protocols. Nothing about interoperability between independent social media networks.

Zuckerberg also shows no awareness of the real reasons we should care about privacy. No, it's not just about people being free to have intimate conversations. There's much more to it than that. It is ultimately about freedom and autonomy. It's a fundamental right. Like free speech, people who don't understand it or who want to control us are only too happy to make it conditional on their ultimately arbitrary and power-driven decisions.


Zuck makes much of WhatsApp's end-to-end encryption. It is certainly true that private messaging services should have end-to-end encryption built in, and that, no, not even Facebook should be able to listen in on our private conversations: "End-to-end encryption prevents anyone—including us—from seeing what people share on our services." Well spotted, indeed! But as we'll see in a bit, he doesn't really mean it. Are you surprised?

Does Zuckerberg propose privacy improvements to Facebook itself, the public and semi-public service that Facebook has used to exploit us, to its enormous profit? No, not really. Perhaps this is an oblique and hopeful-sounding reference: "Over the next few years, we plan to rebuild more of our services around these ideas." Sure. Maybe you will, if you're still around. We'll believe it when we see it. But of course we shouldn't believe any such oblique promises from an arrogant frat boy who deems his users to be "dumb fucks."

Later in the piece, Zuckerberg tips his hat slightly toward those of us who want to decentralize social media: "End-to-end encryption is an important tool in developing a privacy-focused social network. Encryption is decentralizing—it limits services like ours from seeing the content flowing through them and makes it much harder for anyone else to access your information."

But soon after repeating this tantalizing offer of real end-to-end encryption, Zuckerberg takes it away:

At the same time, there are real safety concerns to address before we can implement end-to-end encryption across all of our messaging services. Encryption is a powerful tool for privacy, but that includes the privacy of people doing bad things. When billions of people use a service to connect, some of them are going to misuse it for truly terrible things like child exploitation, terrorism, and extortion. We have a responsibility to work with law enforcement and to help prevent these wherever we can. We are working to improve our ability to identify and stop bad actors across our apps by detecting patterns of activity or through other means, even when we can't see the content of the messages, and we will continue to invest in this work. But we face an inherent tradeoff because we will never find all of the potential harm we do today when our security systems can see the messages themselves.

People who actually know something about how privacy works and why it's important—you can be one of them, if you read The Art of Invisibility by Kevin Mitnick or Cybersecurity for Beginners by Raef Meeuwisse—will instantly spot a contradiction here. If there is truly end-to-end encryption, then it will be impossible for Facebook "to work with law enforcement and to help prevent these wherever we can." This is why some politicians and governments simply want to outlaw encryption, which would be a giant step toward totalitarianism, and absolutely insane to boot. Maybe we could make this a teachable moment for Zuck: "Look, dude, you can't have it both ways. Either you have end-to-end encryption that the authorities cannot (without superheroic efforts) crack, or you give authorities (and yourselves, and expert hackers) a back door that naturally undermines the real privacy (not to mention security) of your network. You can't have it both ways."
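The underlying point can be illustrated with a toy one-time pad: if only the two endpoints hold the key, the relay in the middle carries nothing but noise. (This is only an illustration of the principle; real messengers use far more sophisticated protocols, such as the Signal protocol that WhatsApp builds on, and the names here are mine.)

```python
import os

def encrypt(message, key):
    """XOR each message byte with the matching key byte (a one-time pad)."""
    return bytes(m ^ k for m, k in zip(message, key))

decrypt = encrypt  # XOR is its own inverse

# Alice and Bob share a random key; the relay sees only ciphertext.
key = os.urandom(32)
ciphertext = encrypt(b"meet at noon", key)
# Without the key, the bytes are indistinguishable from random noise;
# with it, the message comes right back:
assert decrypt(ciphertext, key) == b"meet at noon"
```

A back door for law enforcement would amount to giving someone besides Alice and Bob a copy of the key, which is exactly what "end-to-end" rules out.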

But no—he wants us to believe that we can. And that he believes that we can.

Truly risible.

The hard, cold fact is that, just as whispered conversations conducted far from prying ears and detection technology, in principle, cannot be monitored, so private conversations online, if they are successfully end-to-end encrypted, cannot be monitored...so long as eavesdroppers don't have the private keys, and the private keys are strong enough not to be crackable, and...and...and...

Anyway, Zuckerberg has amply demonstrated that he's learned nothing. Move along, folks—nothing to see here.

The decentralization revolution will proceed as scheduled.


Please use #DecentralizeSocialMedia when you share this post!


How to decentralize social media—a brief sketch

The problem about social media is that it is centralized. Centralization empowers massive corporations and governments to steal our privacy and restrict our speech and autonomy.

What should exist are neutral, technical standards and protocols, like the standards and protocols for blogs, email, and the Web. Indeed, many proposed standards already do exist, but none has emerged as a common, dominant standard. Blockchain technology—the technology of decentralization—is perfect for this, but not strictly necessary. Common protocols would enable us to follow public feeds no matter where they are published. We would eventually have our pick of many different apps to view these feeds. We would choose our own terms, not Facebook's or Twitter's, for both publishing and reading.

As things are, if you want to make short public posts to the greatest number of people, you have to go to Twitter, enriching them and letting them monetize your content (and your privacy). Similarly, if you want to make it easy for friends and family to follow your more personal text and other media, you have to go to Facebook. Similarly for various other kinds of content. It just doesn't have to be that way. We could decentralize.

This is a nice dream. But how do we make it happen?

After all, the problem about replacing the giant, abusive social media companies is that you can't replace existing technology without making something so much more awesome that everyone will rush to try it. And the social media giants have zillions of the best programmers in the world. How can we, the little guys, possibly compete?

Well, I've thought of a way the open source software and blockchain communities might actually kick the legs out from under the social media giants. My proposal (briefly sketched) has five parts. The killer feature, which will bring down the giants, is (4):

  1. The open data standards. Create open data standards and protocols, or probably just adopt the best of already-existing ones, for the feeds of posts (and threads, and other data structures) that Twitter, Facebook, etc., use. I'm not the first to have thought of this; the W3C has worked on the problem. It'd be like RSS, but for various kinds of social media post types.
  2. The publishing/storage platforms. Create reliable ways for people to publish, store, and encrypt (and keep totally secret, if they want) their posts. Such platforms would allow users to control exactly who has access to what content they want to broadcast to the world, and in what form, and they would not have to ask permission from anyone and would not be censorable. (Blockchain companies using IPFS, and in particular Everipedia, could help here and show the way; but any website could publish feeds.)
  3. The feed readers. Just as the RSS standard spawned lots of "reader" and "aggregator" software, so there should be similar feed readers for the various data standards described in (1) and the publishers described in (2). While publishers might have built-in readers (as the social media giants all do), the publishing and reading feature sets need to be kept independent, if you want a completely decentralized system.
  4. The social media browser plugins. Here's the killer feature. Create at least one (could be many competing) browser plugins that enable you to (a) select feeds and then (b) display them alongside a user's Twitter, Facebook, etc., feeds. (This could be an adaptation of Greasemonkey.) In other words, once this feature were available, you could tell your friends: "I'm not on Twitter. But if you want to see my Tweet-like posts appear in your Twitter feed, then simply install this plugin and input my feed address. You'll see my posts pop up just as if they were on Twitter. But they're not! And we can do this because you can control how any website appears to you from your own browser. It's totally legal and it's actually a really good idea." In this way, while you might never look at Twitter or Facebook, you can stay in contact with your friends who are still there—but on your own terms.
  5. The social media feed exporters/APIs. Create easy-to-use software that enables people to publish their Twitter, Facebook, Mastodon, Diaspora, Gab, Minds, etc., feeds via the open data standards. The big social media companies already have APIs, and some of the smaller companies and open projects have standards, but there is no single, common open data standard that everyone uses. That needs to change. If you could publish your Twitter data in terms of such a standard, that would be awesome. Then you could tell your friends: "I'm on Twitter, but I know you're not. You don't have to miss out on my tweets. Just use a tweet reader of your choice (you know—like an old blog/RSS feed reader, but for tweets) and subscribe to my username!"
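To make point (1) concrete, here is what a minimal open feed document might look like. The field names are hypothetical, my own invention for illustration; the W3C's ActivityStreams vocabulary covers the same ground more rigorously:

```python
import json

# A hypothetical minimal feed: any site could serve this as plain JSON,
# and any reader plugin could render it alongside Twitter or Facebook posts.
feed = {
    "author": "larry@example.com",
    "feed_url": "https://example.com/feed.json",
    "posts": [
        {"id": "2", "published": "2019-03-13T10:00:00Z",
         "content": "Decentralize!"},
        {"id": "1", "published": "2019-03-12T09:00:00Z",
         "content": "I'm not on Twitter, but you can still follow me."},
    ],
}

def latest(feed_document):
    """Return posts newest-first, as a feed-reader plugin might display them."""
    return sorted(feed_document["posts"],
                  key=lambda p: p["published"], reverse=True)

# What a reader would fetch over the wire:
wire = json.dumps(feed, indent=2)
```

The point of a common standard is that `latest()` (or something like it) could live in any reader, browser plugin, or site, with no one company controlling either end.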

The one-two punch here is the combination of points (1) and (4): First, we get behind decentralized, common social media standards and protocols, and then we use those standards when building plugins that let our friends, who are still using Facebook and Twitter (etc.), see posts that we put on websites like Steemit, Minds, Gab, and Bitchute (not to mention coming Everipedia Network dapps).

The exciting thing about this plan is that no critical mass seems to be needed in order to get people to install the envisioned plugin. All you need is one friend whose short posts you want to see in your Twitter feed, and you might install a plugin that lets you do that. As more and more people do this, there should be a snowball effect. Thus, even a relatively small amount of adoption should create a movement toward decentralization. And then the days of centralized social media will be numbered. We'll look back on the early days of Facebook and Twitter (and YouTube!) as we now do the Robber Barons.

We can look at a later iteration of Everipedia itself as an example. Right now, there is one centralized encyclopedia: Wikipedia. With the Everipedia Network, there will be a protocol that will enable people from all over the web to participate in a much broader project.

I would love to see the various competitors of the social media giants settle on a common standard and otherwise join forces on these sorts of projects. If they do, it will happen, and the days of privacy-stealing, centralized, controlling, Big Brother social media will soon be behind us. We'll return to the superior and individually empowering spirit of the original Internet.

We have to do this, people. This is the future of the Internet. Even if you've given up social media, we should build this for our friends and family who are still toiling in the digital plantations.


My Facebook #DeletionDay goodbye message

Here's what I posted as my last long message to Facebook.


Folks, as previously announced, tomorrow will be my #DeletionDay for Facebook. It'll be the last day I'll post here, and I'll begin the process for the permanent removal of my account. (Among other things, I'll make a copy of my data and my friends list.) I'm sorry to those who want me to stay, but there are too many reasons to quit.

Let me explain again, more tersely, why I'm quitting.

You probably already know that I think this kind of social media, as fun as it undoubtedly can be, undermines relationships, wastes our time, and distracts us. I also agree, as one guy can be seen saying on virally-shared videos, that social media is particularly bad for kids. All I can say is, it's just sad that all that hasn't been enough for me (and most of us) to quit.

But in 2018, it became all too clear that Big Tech—which is now most definitely a thing—is cynically and strongly committed to using social media as a potent tool of political control, which it certainly is. They like having that power. For companies like Google, Facebook, and Apple, reining in wrongthink is a moral imperative. And they're doing the bidding of the Establishment when they do so. It's very scary, I think.

The only thing that gives them this awesome power over us and our free, voluntary conversations is that we have given them that power. But notice the thing that empowers them: we give them our data to manage. It's not really ours. They take it, sell it to advertisers, repackage it, and show it back to us in ways they control. And they can silence us if they like. That's because we have sold our privacy to them for convenience and fun. We're all what Nick Carr aptly called "digital sharecroppers." I now think it's a terrible deal. It's still voluntary, thank goodness; so I'm opting out.

Another thing is that I started reading a book called Cybersecurity for Beginners (no, I'm not too proud to read a book called that) by Raef Meeuwisse, after my phone (and Google account and Coinbase) were hacked. This finally opened my eyes to the very close connection between privacy and security. Meeuwisse explains that information security has become much more complex than it was in the past, what with multiple logins, multiple (interconnected) devices, multiple (interconnected) cloud services, and in short multiple potential points of failure in multiple layers.

[Adding now: Someone recommended, and I bought and started reading, another good privacy book called The Art of Invisibility by Kevin Mitnick. Mitnick is a famous hacker. Meeuwisse is a security professional as well. The Mitnick book is much more readable for savvy Internet users, while the Meeuwisse book is a bit drier and might be more of a good introduction to the field of information security for managers.]

The root cause of the increased security risks, as I see it (as Meeuwisse helped me to see), is our tendency to trust our data to more and more centralizing organizations (like Facebook, Microsoft, and Apple). This means we trust them not only to control our data to our benefit, but also to get security right. But they can't be expected to get security right precisely because social media and cloud services depend on their ability to access our data. If you want robust security, you must demand absolute privacy. That means that only you own and control your data.

If we were the gatekeepers of our own data (if it were delivered out of our own clouds, via decentralized feeds we control, as open source software and blockchains support), then we wouldn't have nearly so many problems.

Maybe even more fundamental is that there are significant risks—personal, social, and political—to letting corporations (or governments) collectivize us. But precisely that is what has been going on over the last ten years or so.

It's time for us to work a new technological revolution and decentralize, or decollectivize, ourselves. One reason I love working for a blockchain company is that we're philosophically committed to the idea of decentralization, of personal autonomy. But it's still early days for both open source software and blockchain. Much remains to be done to make this technology usable to grandma.

While we're waiting for viable (usable) new solutions, I think the first step is to lock down your cyber-life and help create demand by just getting rid of things like Facebook. You don't have to completely unplug from everything; you don't have to be hardcore or extreme about your privacy (although I think that's a good idea). Just do what you're able to do.

I won't blame or think ill of you if you stay on Facebook. I'm just trying to explain why I'm leaving. And I guess I am encouraging you to really start boning up on digital hygiene.

Below, I'm going to link to a series of relevant blog posts that you can explore if you want to follow me out, or just to start thinking more about this stuff.

Also, I hope you'll subscribe yourself to my personal mailing list, which I'll start using more regularly tomorrow. By the way, if you might be interested in some other, more specialized list that I might start based on my interests (such as Everipedia, education, libertarianism, or whatever), please join the big list.

Also note, especially if your email is from Gmail, you will have to check your spam folder for the confirmation mail, if you want to be added. Please move any mails from me and my list out of your spam (or junk) folder into your inbox so Google learns I'm actually not a spammer. :-)


There, that's me being "terse."


How deep should one go into this privacy stuff, anyway?

Probably deeper than you thought. Here's why.

If you are convinced that privacy actually matters, and you really want to lock down your cyber-life, as I am trying to do, there are easy options, like switching to Brave (or Firefox with plugins that harden it for privacy). I've done that. Then there are more challenging but doable options, like switching your email away from Gmail. I've done that. Then there are the hardcore options, like permanently quitting Facebook. I will be doing that later this month.

And then, finally, there are some extreme, weird, bizarre, and even self-destructive options, like completely unplugging—or, less extremely, plunking down significant sums of money on privacy hardware that may or may not work—or that works, but costs a lot. As an illustrative example, we can think about the wonderfully well-meaning company Purism and its charmingly privacy-obsessed products, the Librem 13 and 15 laptops as well as the Librem 5 phone, which is now due in Q3 (it was originally slated for April).

I'm going to use this as an example of the hardcore level, then I'm going to go back to the more interesting broader questions. You can skip the next section if it totally bores you.

Should I take financial risks to support the cause of privacy?

If I sound a little skeptical, it's because I am. Purism is a good example because, on the one hand, it's totally devoted to privacy and 100% open source (OSS), concepts that I love. (By the way, I have absolutely no relationship with them. I haven't even purchased one of their products yet.) Privacy and open source go together like hand in glove, by the way, because developers of OSS avoid adding privacy-violating features. OSS developers tend to be privacy fiends, not least because free software projects offer few incentives to sell your data, while having many incentives to keep it secure. But, as much as I love open source software (like Linux, Ubuntu, Apache, and LibreOffice, to take a few examples) and open content (like Wikipedia and Everipedia), not to mention the promise of open hardware, the quality of such open and free projects can be uneven.

The well-known lack of polish on OSS is mainly because whether a coding or editorial problem is fixed depends on self-directed volunteers. It often helps when a for-profit enterprise gets involved to push things forward decisively (like Everipedia redesigning wiki software and putting Wikipedia's content on the blockchain). Similarly, to be sure, we wouldn't have a prayer of seeing a mass-produced Linux phone without companies like Purism. The company behind Ubuntu, Canonical, tried and failed to make an Ubuntu phone. If they had succeeded, I might own one now.

So there is an interesting dilemma here, I think. On the one hand, I want to support companies like Purism, because they're doing really important work. The world desperately needs a choice other than Apple and Android, and not just any other choice—a choice that respects our privacy and autonomy (or, as the OSS community likes to say, our freedom). On the other hand, if you want to use a Linux phone daily for mission-critical business stuff, then the Librem 5 phone isn't quite ready for you yet.

My point here isn't about the phone (but I do hope they succeed). My point is that our world in 2019 is not made for privacy. You have to change your habits significantly, switch vendors and accounts, accept new expenses, and maybe even take some risks, if you go beyond "hardcore" levels of privacy.

Is it worth it? Maybe you think being even just "hardcore" about privacy isn't worth it. How deep should one go into this privacy stuff, anyway? In the rest of this post, I'll explore this timely issue.

The four levels

I've already written in this blog about why privacy is important. But what I haven't explored is the question of how important it is. It's very important, to be sure, but you can make changes that are more or less difficult. What level of difficulty should you accept: easy, challenging, hardcore, or extreme?

Each of these levels of difficulty, I think, naturally goes with a certain attitude toward privacy. What level are you at now? Have a look:

  1. The easy level. You want to make it a bit harder for hackers to do damage to your devices, your data, your reputation, or your credit. The idea here is that just as it would be irresponsible to leave your door unlocked if you live in a crime-ridden neighborhood, it's irresponsible to use weak passwords and other such things. You'll install a firewall (or, rather, let commercial software do this for you) and virus protection software.—If you stop there, you really don't care if corporations or the government spies on you, at the end of the day. Targeted ads might be annoying, but they're tolerable, you think, and you have nothing to hide from the government. This level is better than nothing, but it's also quite irresponsible, in my opinion. Most people are at this level (at best). The fact that this attitude is so widespread is what has allowed corporations, governments, and criminals to get their claws into us.
  2. The challenging but doable level. You understand that hackers can actually ruin your life, and that, in scary, unpredictable circumstances, a rogue corporation or a government could as well. As unlikely as this might be, we are right to take extra precautions to avoid the worst. Corporate and government intrusions into privacy royally piss you off, and you're ready to do something reasonably dramatic (such as switch away from Gmail) to send a message and make yourself feel better. But you know you'll never wholly escape the clutches of your evil corporate and government overlords. You don't like this at all, but you're "realistic"; you can't escape the system, and you're mostly resigned to it. You just want the real abusers held to account. Maybe government regulation is the solution.—This level is better than nothing. This is the level of the Establishment types who want the government to "do something" about Facebook's abuses, but who are only a little bothered by the NSA. I think this level is still irresponsible. If you're ultimately OK with sending your data to Google and Facebook, and you trust the NSA, you're still one of the sheeple who are allowing them to take over the world.
  3. The hardcore level. Now things get interesting. Your eyes have been opened. You know Google and Facebook aren't going to stop. Why would they? They like being social engineers. They want to control who you vote for. They're unapologetic about inserting you and your data into a vast corporate machine. Similarly, you know that governments will collect more of your data in the future, not less, and sooner or later, some of those governments will use the data for truly scary and oppressive social control, just as China is doing. If you're at this level, it's not just because you want to protect your data from criminals. It's because you firmly believe that technology has developed, especially over the last 15 years, without sufficient privacy controls built in. You demand that those controls be built in now, because otherwise, huge corporations and the largest, most powerful governments in history can monitor us 24/7, wherever we are. This can't end well. We need to completely change the Internet and how it operates.—The hardcore level is not just political; it's fundamentally opposed to the systems that have developed. This is why you won't just complain about Facebook, you'll quit Facebook, because you know that if you don't, you're participating in what is, in the end, a simply evil system. In other ways, too, you're ready to lock down your cyber-life systematically. You know what a VPN is and you use one. You would laugh at the idea of using Dropbox. You know you'll have to work pretty hard at this. It's only a matter of how much you can accomplish.
  4. The extreme level. The hardcore level isn't hardcore enough. Of course corporations and governments are using your data to monitor and control you in a thousand big and small ways. This is one of the most important problems of our time. You will go out of your way, on principle and so that you can help advance the technology, to help lock down everybody's data. Of course you use Linux. Probably, you're a computer programmer or some other techie, so you can figure out how to make the bleeding-edge privacy software and hardware work. Maybe you help develop it.—The extreme level is beyond merely political. It's not just one cause among many. You live with tech all the time, and you demand that every bit of your tech respect your privacy and autonomy; that should be the default mode. You've tried several VPNs and maybe use more than one. You run your own servers for privacy purposes. You use precious little proprietary software, which you find positively offensive. You're already doing everything you can to make that the way you interact with technology.

In sum, privacy can be viewed primarily as a matter of personal safety with no big demands on your time, as a political side-issue that demands only a little of your time, as an important political principle that places fairly serious demands on your time, or as a political principle so important that it guides all of your technical choices.

What should be your level of privacy commitment?

Let's get clear, now. I, for example, have made quite a few changes that show something like hardcore commitment. I switched to Linux, replaced Gmail, Chrome, and Google Search, and am mostly quitting privacy-invasive social media. I even use a VPN. The reason I'm making these changes isn't that I feel personally threatened by Microsoft, Apple, Google, and Facebook. It's not about me and my data; I'm not paranoid. It's about a much bigger, systemic threat. It's a threat to all of us, because we have given so much power to corporations and governments in the form of easily collectible data that they control. It really is true that knowledge is power, and that is why these organizations are learning as much about us as they can.

There's more to it than that. If you're not willing to go beyond moderately challenging changes, you're probably saying, "But Larry, why should I be so passionate about...data? Isn't that kind of, you know, wonky and weird? Seems like a waste of time."

Look. The digital giants in both the private and public sectors are not just collecting our data. By collecting our data, they're collectivizing us. If you want to understand the problem, think about that. Maybe you hate how stuff you talked about on Facebook or Gmail, or that you searched for on Google or Amazon, suddenly seems to be reflected by weirdly appropriate ads everywhere. Advertisers and Big Tech are, naturally, trying to influence you; they're able to do so because you've agreed to give your data to companies that aggregate it and sell it to advertisers. Maybe you think Russia was able to influence U.S. elections. How would that have been possible, if a huge percentage of the American public were not part of one centralized system, Facebook? Maybe you think Facebook, YouTube, Twitter, and others are outrageously biased and are censoring people for their politics. That's possible only because we've let those companies manage our data, and we must use their proprietary protocols if we want to use it. Maybe you're concerned about China hacking and crippling U.S. computers. A big part of the problem is that good security practices have been undermined by lax privacy practices.

In every case, the problem ultimately is that we don't care enough about privacy. We've been far too willing to place control of our data in the hands of the tech giants, who are only too happy to take it off our hands, in exchange for "services."

Oh, we're serviced, all right.

In these and many, many more cases, the root problem is that we don't hold the keys—they do. Our obligation, therefore, is to take back the keys.

Fortunately, we are still able to. We can create demand for better systems that respect our privacy. We don't have to use Facebook, for example. We can leave en masse, creating a demand for a decentralized system where we each own and control how our data is distributed, and the terms on which we see other people's data. We don't have to leave these important decisions in the hands of creeps like Mark Zuckerberg. We can use email, mailing lists, and newer, more privacy-respecting platforms.

To take another example, we don't have to use Microsoft or Apple to run our computers. While Apple is probably better, it's still bad; it still places many important decisions in the hands of one giant, powerful company that will ultimately control (and pass along) our data under confusing terms that we must agree to if we are to use their products. Because their software is proprietary and closed-source, when we use their hardware and services, we simply have to trust that our data, once we submit it, will be managed to our benefit.

Instead of these top-down, controlling systems, we could be using Linux, which is much, much better than it was 15 years ago.

By the way, here's something that ought to piss you off: smartphones are the one essential 21st-century technology for which you have no free, privacy-respecting option. It's Apple or Google (or Microsoft, with its moribund Windows Phone). There still isn't a Linux phone. So wish Purism luck!

We all have different political principles and priorities, of course. I personally am not sure where privacy stacks up, precisely, against the many, many other principles there are.

One thing is very clear to me: privacy is surprisingly important, and more important than most people think it is. It isn't yet another special, narrow issue like euthanasia, gun control, or the national debt. It is broader than those. Its conceptual cousins are broad principles like freedom and justice. This is because privacy touches every aspect of information. Digital information has increasingly become, in the last 30 years, the very lifeblood of so much of our modern existence: commerce, socialization, politics, education, entertainment, and more. Whoever controls these things controls the world.

That, then, is the point. We should care about privacy a lot—we should be hardcore if not extreme about it—because we care about who controls us, and we want to retain control over ourselves. If we want to remain a democracy, if we don't want society itself to become an appendage of massive corporate and government mechanisms, by far the most powerful institutions in history, then we need to start caring about privacy. That's how important it is.

Privacy doesn't mainly have to do with hiding our dirty secrets from neighbors and the law. It mainly has to do with whether we must ask anyone's permission to communicate, publish, support, oppose, purchase, compensate, save, retrieve, and more. It also has to do with whether we control the conditions under which others can access our information, including information about us. Do we dictate the terms under which others can use all this information that makes up so much of life today, or does some central authority do that for us?

Whoever controls our information controls those parts of our lives that are touched by information. The more of our information is in their hands, the more control they have over us. It's not about secrecy; it's about autonomy.


Part of a series on how I'm locking down my cyber-life.


Join me on my new friends list

> Go here to subscribe <

My theory is that people have a hard time keeping away from Facebook because Facebook scratches a certain kind of online socialization itch. Well, since I'm leaving Facebook on #DeletionDay (Feb. 18), I reasoned, I should provide another outlet for that socialization behavior. But I wanted to be in control, and I didn't want anybody's privacy violated (especially mine). So I made a mailing list! I actually installed it myself, on my own bought-and-paid-for Internet space, and you're all welcome to my party/salon/hoedown.

UPDATE: If you tried but failed to subscribe, because you didn't get a confirmation mail, will you please try again? The sanger.io domain is now properly authenticated, so mails from it should now go to your inbox rather than spam folder. (Of course, still check the spam folder if it doesn't come to your inbox.)
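A technical aside, for the curious: "properly authenticated" here refers to DNS-based email authentication, such as SPF and DKIM records published for the sending domain; receiving servers check these before trusting mail. The record below is a purely illustrative example, not sanger.io's actual configuration. This minimal sketch just checks whether a TXT record has the basic shape of an SPF policy:

```python
# Minimal sketch: recognize whether a DNS TXT record looks like an SPF policy.
# The example record is illustrative only, not any real domain's configuration.

def looks_like_spf(txt_record: str) -> bool:
    """An SPF policy is a TXT record whose first term is the version tag 'v=spf1'."""
    terms = txt_record.split()
    return bool(terms) and terms[0] == "v=spf1"

# A typical SPF record authorizes specific mail hosts and soft-fails the rest:
example = "v=spf1 include:_spf.example-mailhost.com ~all"
print(looks_like_spf(example))               # True
print(looks_like_spf("just some TXT data"))  # False
```

In practice you'd fetch the real record with a DNS lookup (e.g., `dig TXT yourdomain.com`) and also publish a DKIM public key; mail providers use both to decide whether a message lands in the inbox or the spam folder.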


Why I quit Quora and Medium for good

It's not a temporary rage-quit; I've deleted both accounts. I have zero followers, no content, and no username. I'm outta there.

This is going to be more interesting than it sounds, I promise.

When I first joined Quora in 2011, I loved it, with a few small reservations. Then, after some run-ins with what I regarded as unreasonable moderation, I started to dislike it; I even temporarily quit in 2015. Then the events of 2018 gave me a new perspective on social media in general. I re-evaluated Quora again, and found it wanting. So I deleted my account today, for good. All my followers and articles are gone.

I went through a similar process with Medium two weeks ago.

Why? Glad you asked.

Digital sharecropping

Until maybe 2012 or so, if you had asked me, I would have said that I was a confirmed and fairly strict open source/open content/open data guy, and that the idea of people happily developing content, without a financial or ownership stake, to benefit a for-profit enterprise had always bothered me. It bothered me in 2000 when Jimmy Wales said the job he hired me for—to start a new encyclopedia—would involve asking volunteers to develop free content hosted by a for-profit company (Bomis). I was happy when, in 2003, the Bomis principals gave Wikipedia to a non-profit.

(Ironically, not to mention stupidly, in 2011 Jimmy Wales tried to blame me for Bomis' original for-profit, ad-based business model. Unfortunately for his lie, I was able to find evidence that, in fact, it had been his idea.)

In 2006, technology journalist Nicholas Carr coined the phrase "digital sharecropping", saying that "Web 2.0,"

by putting the means of production into the hands of the masses but withholding from those same masses any ownership over the product of their work, provides an incredibly efficient mechanism to harvest the economic value of the free labor provided by the very many and concentrate it into the hands of the very few.

This bothers me. I'm a libertarian and I support capitalism, but the moral recommendability of building a business on the shoulders of well-meaning volunteers and people merely looking to socialize online struck me, as it did Carr, as very questionable. I even remember writing an old blog post (can't find it anymore) in which I argued, only half-seriously, that this practice is really indefensible, particularly if users don't have a governance stake.

The moral recommendability of building a business on the shoulders of well-meaning volunteers and people merely looking to socialize online struck me as very questionable.

The rise of social media, and joining Quora and Medium

By 2010, though I had been an active Internet user for over 15 years, my perspective started changing. I didn't really begrudge Facebook, Twitter, or YouTube their profits anymore. The old argument that they are providing a useful service that deserves compensation—while still a bit questionable to me—made some sense. As to the rather obvious privacy worries, at that stage they were mainly just worries. Sure, I knew (as we all did) that we were trusting Facebook with relatively sensitive data. I was willing to give them the benefit of the doubt. (That sure changed.)

If you were plugged in back then, you regularly joined new communities that seemed interesting and happening. Quora was one; I joined it in 2011. It struck me as a somewhat modernized version of the old discussion communities we had in the 1990s—Usenet and mailing lists—but, in some ways, even better. There was very lightweight moderation, which actually seemed to work. A few years later I joined Medium, and as with Quora, I don't think I ever heard from their moderators in the first few years. If I did, I was willing to admit that maybe I had put a toe over the line.

Within a few days, Quora actually posted a question for me to answer: "What does Larry Sanger think about Quora?" Here is my answer in full (which I've deleted from Quora along with all my other answers):

Uhh...I didn't ask this.  It's a bit like fishing for compliments, eh Quora team? But that's OK, I am happy to compliment Quora on making a very interesting, engaging website.

Quora is pretty interesting. It appeals to me because there are a lot of people here earnestly reflecting--this I think must be partly due to good habits started by the first participants, but also because the question + multiple competing answers that mostly do not respond to each other means there is more opportunity for straightforward reflection and less for the usual bickering that happens in most Internet communities.

A long time ago (I'm sure one could find this online somewhere, if one looked hard enough) I was musing that it's odd that mailing lists are not used in more ways than they are. It seemed to me that one could use mailing list software to play all sorts of "conversation games," and I didn't know why people didn't set up different sorts of rule systems for different kinds of games.

What impresses me about Quora is that it seems to be a completely new species of conversation game.  Perhaps it's not entirely new, because it's somewhat similar to Yahoo! Answers, but there aren't as many yahoos on Quora, for whatever reason, and other differences are important.  Quora's model simply works better.  Quora users care about quality, and being deep, and Yahoo! Answerers generally do not.  I wonder why that is.

But unlike Yahoo! Answers, Quora doesn't seem to be used very much for getting factual information. Quora users are more interested in opinionizing about broad, often philosophical questions, which I find charming and refreshing. But for this reason, it's not really a competitor of Wikipedia or Yahoo! Answers (or Citizendium...). It's competing with forums.

I think it needs some more organizational tools, tools that make it less likely that good questions and answers aren't simply forgotten or lost track of. Or maybe there already are such tools and I don't know about them.

As I re-read this, some points have taken on a new meaning. I chalked up Quora's failure to provide more robust search tools to its being at a relatively early stage (it was started two years earlier by a former Facebook CTO), and to the ordinary sort of founder stubbornness, in which the founders have a vision of how a web app should work and, as a result, don't give the people what they actually want. I see now that they had already started to execute a new approach to running a website that I just didn't recognize at the time. It was (and is) very deliberately heavy-handed and top-down, like Facebook. They let you see what they want you to see. They try to "tailor" the user experience. And clearly, they do this not to satisfy explicit user preferences. They don't care much about user autonomy. Their aim is apparently to keep users on the site, to keep them adding content. If you choose to join, you become a part of their well-oiled, centrally managed machine.

Quora and Medium, like Facebook, Twitter, and YouTube, make it really hard for you to use their sites on your own terms, with your own preferences. You're led by the hand and kept inside the rails. Before around 2008, hardly anyone could have imagined making a website like that. Well, such sites existed, but they were for children and corporations.

I could see this, of course. But all the big social media sites were the same way. I guess I tolerated what looked like an inevitable takeover of the once-decentralized Internet by a more corporate mindset. I suppose I hoped that this mindset wouldn't simply ruin things. By 2012, I was already deeply suspicious of how things were turning out.

But now it's just blindingly obvious to me that the Silicon Valley elite have ruined the Internet.

Increasingly heavy-handed and ideological "moderation"

The first time or two I heard from Quora's moderation team, I was merely annoyed, but I still respected their attempts to keep everything polite. I thought that was probably all it was. That's what moderation used to be, anyway, back when we did it in the 90s and 00s. But I noticed that Quora's moderation was done in-house. That struck me as being, well, a little funny. There was something definitely off about it. Why didn't they set some rules and set up a fair system in which the community effectively self-moderated? They obviously had decent coders and designers who could craft a good community moderation system. But they didn't...

I now see only too well that they wanted moderation kept in house not just because it was important to get right, but because they wanted to exert editorial control. At first, it seemed that they had business reasons for this, which I thought was OK, maybe. But as time went on and I got more moderation notices for perfectly fair questions and polite comments, it became clear that Quora's moderation practices weren't guided merely by the desire to keep the community pleasant for a wide cross-section of contributors. They were clearly enforcing ideological conformity. This got steadily worse, in my experience, until I temporarily quit Quora in 2015, and I never did contribute as much after that.

Similarly, Medium's moderators rarely if ever bothered me, until they took down a rather harsh comment I made to a pedophile who was defending pedophilia. (He was complaining about an article I wrote explaining why pedophilia is wrong. I also wrote an article about why murder is wrong.) I hadn't been sufficiently polite to the pedophile, it seems. So, with only the slenderest explanations, Medium simply removed my comment. That's what caused me to delete my Medium account.

They don't care much about user autonomy. Their aim is apparently to keep users on the site, to keep them adding content. If you choose to join, you become a part of their well-oiled, centrally managed machine.

You don't have to agree with my politics to agree that there is a problem here. My objection is not just about fairness; it's about control. It's about the audacity of a company, which is profiting from my unpaid content, also presuming to control me, and often without explaining their rather stupid decisions. It's also not about the necessity of moderation. I've been a moderator many times in the last 25 years, and frankly, Internet communities suck if they don't have some sort of moderation mechanism. But when they start moderating in what seems to be an arbitrary and ideological way, when it's done in-house in a wholly opaque way, that's just not right. Bad moderation used to kill groups. People would leave badly-moderated groups in droves.

Lack of intellectual diversity in the community

Being on the web and not artificially restricted by nationality, Quora and Medium have, of course, global user bases. But each is a single community. And they're huge; both are among the top 250 websites. So whatever answer most users vote up (as filtered by Quora's secret and ever-changing sorting algorithm), and whoever is most popular with other Quora voters, tends to be shown higher.

Unsurprisingly—this was plainly evident back in 2011—Quora's community is left-leaning. Medium is similar. That's because, on average, intellectual Internet writers are left-leaning. I didn't really have a problem with that, and I wouldn't still, if we hadn't gotten absolutely stunning and clear evidence in 2018 that multiple large Internet corporations openly and unashamedly use their platforms to put their thumbs on the scales. They simply can't be trusted as fair, unbiased moderators, particularly when their answer ranking algorithms and the moderation policies and practices are so opaque.

In addition, a company like Quora should notice that different cultures have totally different ways of answering life's big questions. The differences are fascinating, too. By lumping us all together, regardless of nationality, religion, politics, gender, and other features, we actually miss out on the full variety of human experience. If the Quora community's dominant views aren't congenial to you, you'll mostly find yourself out in the cold, badly represented and hard to find.

Silicon Valley, your experiment is over

Look. Quora, like Medium, Facebook, Twitter, and YouTube, has been outed as a shamelessly self-dealing corporation. It's gone way beyond "digital sharecropping." The problem I and many others have with these companies isn't just that they are profiting from our unpaid contributions. It's that they have become ridiculously arrogant and think they can control and restrict our user experience and our right to speak our minds, rather than governing their platforms under fair, reasonable, and transparent moderation systems. And while the privacy issues Quora and Medium have aren't as profound as Facebook's, they are there, and they come from the same controlling corporate mindset.

So that's why I've quit Quora and Medium for good. I hope that also sheds more light on why I'm leaving Facebook and changing how I use Twitter.

As if to confirm me in my decision, Quora doesn't supply any tools for exporting all your answers from the site. You have to use third-party tools (I used this). And after I deleted my account (which I did just now), I noticed that my account page and all my answers were still there. The bastards force you to accept a two-week "grace period," in case you change your mind. What if I don't want them to show my content anymore, now? Too bad. You have to let them continue to earn money from your content for two more weeks.

Clearly, they aren't serving you; you're serving them.

We've been in an experiment. Many of us were willing to let Internet communities be centralized in the hands of big Silicon Valley corporations. Maybe it'll be OK, we thought. Maybe the concentration of money and power will result in some really cool new stuff that the older, more decentralized Internet couldn't deliver. Maybe they won't mess it up, and try to exert too much control, and abuse our privacy. Sure! Maybe!

The experiment was a failure. We can't trust big companies, working for their own profit, to make good decisions for large, online communities. The entire industry has earned and richly deserves our distrust and indignation.

So, back to the drawing board. Maybe we'll do better with the next, more robustly decentralized and democratic phase of the Internet: blockchain.

We'll get this right eventually, or die trying. After all, it might take a while.

We've been in an experiment. Many of us were willing to let Internet communities be centralized in the hands of big Silicon Valley corporations. Maybe it'll be OK, we thought. ...

The experiment was a failure.