Zuckerberg Is Wrong: Don't Regulate Our Content

Last Sunday, Mark Zuckerberg made another Facebook strategy post. (This is his second major policy post in as many months. I responded to his March 6 missive as well.) Unsurprisingly, it was a disaster.

I want to shake him by his lapels and say, "Mark! Mark! Wrong way! Stop going that way! We don't want more snooping and regulation by giant, superpowerful organizations like yours and the U.S. government! We want less!"

He says he has spent two years focused on "issues like harmful content, elections integrity and privacy." If these have been the focuses of someone who is making motions to regulate the Internet, it's a good idea to stop and think a bit about each one. They are a mixed bag, at best.

1. Zuckerberg's concerns

Concern #1: "Harmful content"

Zuckerberg's glib gloss on "harmful content" is "terrorist propaganda, hate speech and more." Applying the modifier "harmful" to "content" is something done mainly by media regulators, giant corporations like Facebook, and the social justice left. Those of us who still care about free speech—and I think that's most of us—find the phrase not a little chilling.

Let's be reasonable, though. Sure, on the one hand, we can agree that groups using social media to organize dangerously violent terrorism, or child pornography, or other literally harmful and illegal activity, for example, should be shut down. And few people would have an issue with Facebook removing "hate speech" in the sense of the KKK, Stormfront, and other openly and viciously racist outfits. That sort of thing was routinely ousted from more polite areas of the Internet long ago, and relegated to the backwaters. That's OK with me. Reasonable and intellectually tolerant moderation is nothing new.

On the other hand, while all of that can perhaps be called "harmful content," the problem is how vague the phrase is. How far beyond such categories of more uncontroversially "harmful" content might it extend? It does a tiny bit of harm if someone tells a small lie; is that "harmful content"? Who knows? What if someone shares a conservative meme? That's sure to seem harmful to a large minority of the population. Is that a target? Why not progressive memes, then? Tech thought leaders like Kara Swisher would ban Ben Shapiro from YouTube, if she could; no doubt she finds Shapiro deeply harmful. Is he fair game? How about "hateful" atheist criticisms of Christianity—surely that's OK? But how about similarly "hateful" atheist criticisms of Islam? Is the one, but not the other, "harmful content"?

This isn't just a throwaway rhetorical point. It's deeply important to think about and get right, if we're going to use such loaded phrases as "harmful content" seriously, unironically, and especially if there is policymaking involved.

The problem is that the sorts of people who use phrases like "harmful content" constantly dodge these important questions. We can't trust them. We don't know how far they would go, if given a chance. Indeed, anyone with much experience debating can recognize instantly that the reason someone would use this sort of squishy phraseology is precisely because it is vague. Its vagueness enables the motte-and-bailey strategy: there's an easily-defended "motte" (tower keep) of literally harmful, illegal speech, on the one hand, but the partisans using this strategy really want to do their fighting in the "bailey" (courtyard) which is riskier but offers potential gains. Calling them both "harmful content" enables them to dishonestly advance repressive policies under a false cover.

"Hate speech" functions in a similar way. Here the motte is appallingly, strongly, openly bigoted speech, which virtually everyone would agree is awful. But we've heard more and more about hate speech in recent years because of the speech in the bailey that is under attack: traditional conservative and libertarian positions and speakers that enfuriate progressives. Radicals call them "racists" and their speech "hate speech," but without any substantiation.

It immediately raises a red flag when one of the most powerful men in the world blithely uses such phraseology without so much as a nod to its vagueness. Indeed, it is unacceptably vague.

Concern #2: Elections integrity

The reason we are supposed to be concerned about "elections integrity," as one has heard ad nauseam from mainstream media sources in the last couple of years, is that Russia caused Trump to be elected by manipulating social media. This always struck me as a bizarre claim. It is a widely accepted fact that some Russians thought it was a good use of a few million dollars to inject even more noise (not all of it in Trump's favor) into the 2016 election by starting political groups and spreading political memes. I never found this particularly alarming, because I know how the Internet works: everybody is trying to persuade everybody, and a few million dollars from cash-strapped Russians is quite obviously no more than shouting into the wind. What is the serious, fair-minded case that it even could have had any effect on the election? Are the Russians so diabolically effective at propaganda that, with a small budget, they can actually throw an election one way or another? And if so, don't you think that people with similarly magical knowhow would be on the payroll of the two most powerful political parties in the world?

Concern #3: Privacy

As to privacy—one of my hobby horses of late—Zuckerberg's concern is mainly one of self-preservation. After all, this is the guy who admitted that he called the people who trusted him with so much of their personal information (you and me) "dumb f--ks" for doing so. This is a guy who has built his business by selling your privacy to the highest bidder, without proposing any new business model. (Maybe they can make enough through kickbacks from the NSA, which must appreciate how Facebook acts as an unencrypted mass surveillance arm.)

Mark Zuckerberg has absolutely no credibility on this issue, even when describing his company's own plans.

He came out last month with what he doubtless wanted to appear to be a "come-to-Jesus moment" about privacy, saying that Facebook will develop the ultimate privacy app: secret, secured private chatting! Oh, joy! Just what I was missing (um?) and always wanted! But even that little bit (which is a very little bit) was too much to hope for: he said that maybe Facebook wouldn't allow total, strong, end-to-end encryption, because that would mean they couldn't "work with law enforcement."

The fact, as we'll see, that he wants the government to set privacy rules means that he still doesn't care about your privacy, for all his protestations.

Zuckerberg's declared motives are dodgy-to-laughable. But given his recommendation—that the government start systematically regulating the Internet—you shouldn't have expected anything different.

2. Mark Zuckerberg wants the government to censor you, so he doesn't have to.

Zuckerberg wants to regulate the Internet

In his previous missive, Zuckerberg gave some lame, half-hearted ideas about what Facebook itself would do to shore up Facebook's poor reputation for information privacy and security. Not so this time. This time, he wants government to take action: "I believe we need a more active role for governments and regulators." But remember, American law strives for fairness, so these wouldn't be special regulations just for Facebook. They would be regulations for the entire Internet.

"From what I've learned," Zuckerberg declares, "I believe we need new regulation in four areas: harmful content, election integrity, privacy and data portability."

When Zuckerberg calls for regulation of the Internet, he doesn't discuss hardware—servers and routers and fiber-optic cables, etc. He means content on the Internet. When it comes to "harmful content and election integrity," he clearly means some harmful and spurious content that has appeared on, e.g., Facebook. When he talks about "privacy and data portability," he means the privacy and portability of your content.

So let's not mince words: to regulate the Internet in these four areas is tantamount to regulating content, i.e., expression of ideas. That suggests, of course, that we should be on our guard against First Amendment violations. It is one thing for Facebook to remove (just for example) videos from conservative commentators like black female Trump supporters Diamond and Silk, which Facebook moderators called "unsafe." It's quite another thing for the federal government to do such a thing.

Zuckerberg wants actual government censorship

Now, before you accuse me of misrepresenting Zuckerberg, look at what his article says. It says, "I believe we need a more active role for governments and regulators," and in "four areas" in particular. The first-listed area is "harmful content." So Zuckerberg isn't saying, here, that it is Facebook that needs to shore up its defenses against harmful content. Rather, he is saying, here, that governments and regulators need to take action on harmful content. "That means deciding what counts as terrorist propaganda, hate speech and more." And more.

He even brags that Facebook is "working with governments, including French officials, on ensuring the effectiveness of content review systems." Oh, no doubt government officials will be only too happy to "ensure" that "content review systems" are "effective."

Now, in the United States, terrorist propaganda is already arguably against the law, although some regret that free speech concerns are keeping us from going far enough. Even there, we are right to move slowly and carefully, because a too-broad definition of "terrorist propaganda" might well put principled, honest, and nonviolent left- and right-wing opinionizing in the crosshairs of politically-motivated prosecutors.

But "deciding what counts as...hate speech" is a matter for U.S. law? Perhaps Zuckerberg should have finished his degree at Harvard, because he seems not to have learned that hate speech is unregulated under U.S. law, because of a little thing called the First Amendment to the U.S. Constitution. As recently as 2017, the Supreme Court unanimously struck down a "disparagement clause" in patent law which had said that trademarks may not "disparage...or bring...into contemp[t] or disrepute" any "persons, living or dead." This is widely regarded as demonstrating that there is no hate speech exception to the First Amendment. As the opinion says,

Speech that demeans on the basis of race, ethnicity, gender, religion, age, disability, or any other similar ground is hateful; but the proudest boast of our free speech jurisprudence is that we protect the freedom to express “the thought that we hate.” 

The trouble with the phrase "hate speech" lies in both the ambiguity and the vagueness of the word "hate" itself. "Hate speech" in its core sense (this is the motte) is speech that is motivated by the speaker's own bigoted hatred, but in an ancillary sense (this is the bailey), it means speech that we hate, because in our possibly incorrect opinion we think it is motivated by bigotry (but maybe it isn't). The phrase "hate speech" is also vague and useless because hate comes in degrees, with shifting objects. If I am irritated by Albanians and very mildly diss them, am I guilty of hate speech? Maybe. Jews? Almost certainly. What about white male southerners? Well, what's the answer there? And what if I really strongly hate a group that it is popular to hate, e.g., rapists?

There's much more to be said about this phrase, but here's the point. If government and regulators took Zuckerberg's call for hate speech legislation to heart, what rules would they use? Wouldn't they, quite naturally, shift according to political and religious sentiments? Wouldn't such regulations become a dangerous political football? Would there be any way to ensure it applies fairly across groups—bearing in mind that there is also a Fourteenth Amendment that legally requires such fairness? Surely we don't want the U.S. legal system subject to the same sort of spectacle that besets Canada and the U.K., in which people are prosecuted for criticizing some groups, while very similar criticism of other, unprotected groups goes unpunished?

But precisely that is, presumably, what Zuckerberg wants to happen. He doesn't want to be responsible for shutting down the likes of Diamond and Silk, or Ben Shapiro. That, he has discovered, is an extremely unpopular move; but he's deeply concerned about hate speech; so he would much rather the government do it.

If you want to say I'm not being fair to Zuckerberg or to those who want hate speech laws in the U.S., that of course you wouldn't dream of shutting down mainstream conservatives like this, I point you back to the motte and bailey. We, staunch defenders of free speech, can't trust you. We know about motte and bailey tactics. We know that, if not you, then plenty of your left-wing allies in government and media—who knows, maybe Kara Swisher—would advocate for government shutting down Ben Shapiro. That would be a win. The strategy is clear: find the edgiest thing he has said, label it "hate speech," and use it to argue that he poses a danger to others on the platform, so he should be deplatformed. Or just make an example of a few others like him. That might be enough for the much-desired chilling effect.

Even if you were to come out with an admirably clear and limited definition of "hate speech," which does not include mainstream conservatives and which would include some "hateful," extreme left-wing speech, that wouldn't help much. If the government adopted such "reasonable" regulations, it would be cold comfort. Once the cow has left the barn, once any hate speech law is passed, it's all too easy for someone to make subtle redefinitions of key terms to allow for viewpoint censorship. Then it's only a matter of time.

It's sad that it has come to this—that one of the most powerful Americans in the world suggests that we use the awesome power of law and government to regulate speech, to shut down "hate speech," a fundamentally obscure weasel word that can, ultimately, be used to shut down any speech we dislike—which after all is why the word is used. It's sad not only that this is what he has suggested, but that I have to point it out, and that it seems transgressive to, well, defend free speech. But very well then, I'll be transgressive; I'd say that those who agree with me now have an obligation to be transgressive in just this way.

We can only hope that, with Facebook executives heading for the exits and Facebook widely criticized, Zuckerberg's entirely wrongheaded call for (more) censorship will be ignored by federal and state governments. Don't count on it, though.

But maybe censorship should be privatized

Facebook is also, Zuckerberg says, "creating an independent body so people can appeal our decisions." This is probably a legal ploy to avoid taking responsibility for censorship decisions, since taking such responsibility could make it possible to regulate Facebook as a publisher, not just a platform. Of course, if Section 230 of the Communications Decency Act were replaced by some new regulatory framework, then Facebook might not have to give up control, because under the new framework, viewpoint censorship might not make them into publishers.

Of course, whether in the hands of a super-powerful central committee such as Zuckerberg is building, a giant corporation, or the government, we can expect censorship decisions to be highly politicized, to create an elite of censors and rank-and-file thought police to keep us plebs in line. Just imagine if all of the many conservative pages and individuals temporarily blocked or permanently banned by Facebook had to satisfy some third-party tribunal. Zuckerberg writes:

One idea is for third-party bodies [i.e., not just one for Facebook] to set standards governing the distribution of harmful content and measure companies against those standards. Regulation could set baselines for what's prohibited and require companies to build systems for keeping harmful content to a bare minimum.

Facebook already publishes transparency reports on how effectively we're removing harmful content. I believe every major Internet service should do this quarterly, because it's just as important as financial reporting. Once we understand the prevalence of harmful content, we can see which companies are improving and where we should set the baselines.

There's a word for such "third-party bodies": censors.

The wording is stunning. He's concerned about "the distribution" of content and wants companies "measured" against some "standards." He wants content he disapproves of not just blocked, but kept to a "bare minimum." He wants to be "effective" in "removing harmful content." He really wants to "understand the prevalence of harmful content."

This is not the language that someone who genuinely cares about "the freedom for people to express themselves" would use.

3. The rest of the document

I'm going to cover the rest of the document much more briefly, because it's less important.

Zuckerberg favors regulations to create "common standards for verifying political actors," i.e., if you want to engage in political activity, you'll have to register with Facebook. This is all very vague, though. What behavior, exactly, is going to be caught in the net that's being woven here? Zuckerberg worries that "divisive political issues" are the target of "attempted interference." Well, yes—well spotted there, political issues sure can be divisive! But it isn't their divisiveness that Facebook or other platforms should try to regulate; it is the "interference" by foreign government actors. What that means precisely, I really wonder.

Zuckerberg's third point is that we need a "globally harmonized framework" for "effective privacy and data protection." Well, that's music to my ears. But it's certainly rich, the very notion that the world's biggest violator of privacy, indeed the guy whose violations are perhaps the single biggest cause of widespread concern about privacy, wants privacy rights protected.

He wants privacy rights protected the way he wants free speech protected. I wouldn't believe him.

Zuckerberg's final point is another that you might think would make me happy: "regulation should guarantee the principle of data portability."

Well. No. Code should guarantee data portability. Regulation shouldn't guarantee any such thing. I don't trust governments, in the pockets of "experts" in the pay of giant corporations, to settle the rules according to which data is "portable." They might, just for instance, write the rules in such a way that gives governments a back door into what should be entirely private data.

Beware social media giants bearing gifts.

And portability, while nice, is not the point. Of course Zuckerberg is OK with the portability of data, i.e., allowing people to more easily move it from one vendor to another. But that's a technical detail of convenience. What matters, rather, is whether I own my data and serve it myself to my subscribers, according to rules that I and they mutually agree on.

But that is something that Zuckerberg specifically can't agree to, because he's already told you that he wants "hate speech and more" to be regulated. By the government or by third party censors.

You can't have it both ways, Zuckerberg. Which is it going to be: data ownership that protects unfettered free speech, or censorship that ultimately forbids data ownership?


How to decentralize social media—a brief sketch

The problem with social media is that it is centralized. Centralization empowers massive corporations and governments to steal our privacy and restrict our speech and autonomy.

What should exist are neutral, technical standards and protocols, like the standards and protocols for blogs, email, and the Web. Indeed, many proposed standards already do exist, but none has emerged as a common, dominant standard. Blockchain technology—the technology of decentralization—is perfect for this, but not strictly necessary. Common protocols would enable us to follow public feeds no matter where they are published. We would eventually have our pick of many different apps to view these feeds. We would choose our own terms, not Facebook's or Twitter's, for both publishing and reading.

As things are, if you want to make short public posts to the greatest number of people, you have to go to Twitter, enriching them and letting them monetize your content (and your privacy). Similarly, if you want to make it easy for friends and family to follow your more personal text and other media, you have to go to Facebook. Similarly for various other kinds of content. It just doesn't have to be that way. We could decentralize.

This is a nice dream. But how do we make it happen?

After all, the problem with replacing the giant, abusive social media companies is that you can't replace existing technology without making something so much more awesome that everyone will rush to try it. And the social media giants have zillions of the best programmers in the world. How can we, the little guys, possibly compete?

Well, I've thought of a way the open source software and blockchain communities might actually kick the legs out from under the social media giants. My proposal (briefly sketched) has five parts. The killer feature, which will bring down the giants, is (4):

  1. The open data standards. Create open data standards and protocols, or probably just adopt the best of already-existing ones, for the feeds of posts (and threads, and other data structures) that Twitter, Facebook, etc., use. I'm not the first to have thought of this; the W3C has worked on the problem. It'd be like RSS, but for various kinds of social media post types. (A rough sketch of what such a feed might look like follows this list.)
  2. The publishing/storage platforms. Create reliable ways for people to publish, store, and encrypt (and keep totally secret, if they want) their posts. Such platforms would allow users to control exactly who has access to what content they want to broadcast to the world, and in what form, and they would not have to ask permission from anyone and would not be censorable. (Blockchain companies using IPFS, and in particular Everipedia, could help here and show the way; but any website could publish feeds.)
  3. The feed readers. Just as the RSS standard spawned lots of "reader" and "aggregator" software, so there should be similar feed readers for the various data standards described in (1) and the publishers described in (2). While publishers might have built-in readers (as the social media giants all do), the publishing and reading feature sets need to be kept independent, if you want a completely decentralized system.
  4. The social media browser plugins. Here's the killer feature. Create at least one (could be many competing) browser plugins that enable you to (a) select feeds and then (b) display them alongside a user's Twitter, Facebook, etc., feeds. (This could be an adaptation of Greasemonkey.) In other words, once this feature became available, you could tell your friends: "I'm not on Twitter. But if you want to see my Tweet-like posts appear in your Twitter feed, then simply install this plugin and input my feed address. You'll see my posts pop up just as if they were on Twitter. But they're not! And we can do this because you can control how any website appears to you from your own browser. It's totally legal and it's actually a really good idea." In this way, while you might never look at Twitter or Facebook, you can stay in contact with your friends who are still there—but on your own terms.
  5. The social media feed exporters/APIs. Create easy-to-use software that enables people to publish their Twitter, Facebook, Mastodon, Diaspora, Gab, Minds, etc., feeds via the open data standards. The big social media companies already have APIs, and some of the smaller companies and open projects have standards, but there is no single, common open data standard that everyone uses. That needs to change. If you could publish your Twitter data in terms of such a standard, that would be awesome. Then you could tell your friends: "I'm on Twitter, but I know you're not. You don't have to miss out on my tweets. Just use a tweet reader of your choice (you know—like an old blog/RSS feed reader, but for tweets) and subscribe to my username!"
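To make item (1) a little more concrete, here is a minimal sketch, in TypeScript, of what an open post-feed standard might look like. The field names and version number are illustrative assumptions of mine, not any existing standard (the W3C's ActivityStreams vocabulary, for instance, uses its own terms); the point is only that a social feed is a simple, easily standardized data structure that anyone could serve from their own site.

    // A hypothetical open feed format for short social media posts.
    // Field names are illustrative only; a real standard (e.g., W3C
    // ActivityStreams) defines its own vocabulary.

    interface FeedAuthor {
      name: string;           // display name
      url: string;            // the author's own site, where the feed lives
      publicKey?: string;     // optional key for verifying signed posts
    }

    interface FeedPost {
      id: string;             // globally unique ID, e.g. the post's URL
      published: string;      // ISO 8601 timestamp
      content: string;        // the post text (or HTML, per the standard)
      inReplyTo?: string;     // ID of the post this replies to, for threads
      attachments?: string[]; // URLs of images, video, etc.
    }

    interface Feed {
      version: string;        // version of the (hypothetical) standard
      author: FeedAuthor;
      posts: FeedPost[];
    }

    // An example feed document, as it might be served from someone's own site:
    const exampleFeed: Feed = {
      version: "0.1",
      author: { name: "Alice", url: "https://alice.example.com" },
      posts: [
        {
          id: "https://alice.example.com/posts/1",
          published: "2019-04-10T12:00:00Z",
          content: "Hello from my own site. No Twitter required.",
        },
      ],
    };

A feed reader, a browser plugin, or an exporter (items 3 through 5) would all consume or produce documents of roughly this shape.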

The one-two punch here is the combination of points (1) and (4): First, we get behind decentralized, common social media standards and protocols, and then we use those standards when building plugins that let our friends, who are still using Facebook and Twitter (etc.), see posts that we put on websites like Steemit, Minds, Gab, and Bitchute (not to mention coming Everipedia Network dapps).
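As a rough illustration of the plugin idea in (4), here is a Greasemonkey-style userscript sketch, again in TypeScript. The feed URL and the timeline selector are placeholders I've made up; a real plugin would let the user configure feeds and would need a sturdier way to locate the host page's timeline, since sites like Twitter change their markup constantly.

    // Sketch of a browser userscript that fetches a friend's open feed and
    // displays the posts alongside the host site's timeline. FRIEND_FEED_URL
    // and TIMELINE_SELECTOR are hypothetical placeholders.

    const FRIEND_FEED_URL = "https://alice.example.com/feed.json";
    const TIMELINE_SELECTOR = "[data-testid='primaryColumn']";

    async function injectFriendPosts(): Promise<void> {
      const response = await fetch(FRIEND_FEED_URL);
      if (!response.ok) return;              // fail quietly if the feed is down
      const feed = await response.json();    // expects the Feed shape sketched above

      const timeline = document.querySelector(TIMELINE_SELECTOR);
      if (!timeline) return;                 // host page layout not found

      for (const post of feed.posts) {
        const item = document.createElement("div");
        item.style.border = "1px solid #ccc";
        item.style.padding = "8px";
        item.textContent = `${feed.author.name}: ${post.content}`;
        timeline.prepend(item);              // friend's posts appear atop the feed
      }
    }

    injectFriendPosts();

Nothing here requires the host site's cooperation: the rewriting happens entirely in your own browser, which is exactly the point.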

The exciting thing about this plan is that no critical mass seems to be needed in order to get people to install the envisioned plugin. All you need is one friend whose short posts you want to see in your Twitter feed, and you might install a plugin that lets you do that. As more and more people do this, there should be a snowball effect. Thus, even a relatively small amount of adoption should create a movement toward decentralization. And then the days of centralized social media will be numbered. We'll look back on the early days of Facebook and Twitter (and YouTube!) as we now look back on the Robber Barons.

We can look at a later iteration of Everipedia itself as an example. Right now, there is one centralized encyclopedia: Wikipedia. With the Everipedia Network, there will be a protocol that will enable people from all over the web to participate in a much broader project.

I would love to see the various competitors of the social media giants settle on a common standard and otherwise join forces on these sorts of projects. If they do, it will happen, and the days of privacy-stealing, centralized, controlling, Big Brother social media will soon be behind us. We'll return to the superior and individually empowering spirit of the original Internet.

We have to do this, people. This is the future of the Internet. Even if you've given up social media, we should build this for our friends and family who are still toiling in the digital plantations.


My Facebook #DeletionDay goodbye message

Here's what I posted as my last long message to Facebook.


Folks, as previously announced, tomorrow will be my #DeletionDay for Facebook. It'll be the last day I'll post here, and I'll begin the process for the permanent removal of my account. (Among other things, I'll make a copy of my data and my friends list.) I'm sorry to those who want me to stay, but there are too many reasons to quit.

Let me explain again, more tersely, why I'm quitting.

You probably already know that I think this kind of social media, as fun as it undoubtedly can be, undermines relationships, wastes our time, and distracts us. I also agree, as one guy can be seen saying on virally-shared videos, that social media is particularly bad for kids. All I can say is, it's just sad that all that hasn't been enough for me (and most of us) to quit.

But in 2018, it became all too clear that Big Tech—which is now most definitely a thing—is cynically and strongly committed to using social media as a potent tool of political control, which it certainly is. They like having that power. For companies like Google, Facebook, and Apple, reining in wrongthink is a moral imperative. And they're doing the bidding of the Establishment when they do so. It's very scary, I think.

The only thing that gives them this awesome power over us and our free, voluntary conversations is that we have given them that power. But notice the thing that empowers them: we give them our data to manage. It's not really ours. They take it, sell it to advertisers, repackage it, and show it back to us in ways they control. And they can silence us if they like. That's because we have sold our privacy to them for convenience and fun. We're all what Nick Carr aptly called "digital sharecroppers." I now think it's a terrible deal. It's still voluntary, thank goodness; so I'm opting out.

Another thing is that I started reading a book called Cybersecurity for Beginners (no, I'm not too proud to read a book called that) by Raef Meeuwisse, after my phone, Google account, and Coinbase account were hacked. This finally opened my eyes to the very close connection between privacy and security. Meeuwisse explains that information security has become much more complex than it was in the past, what with multiple logins, multiple (interconnected) devices, multiple (interconnected) cloud services, and in short multiple potential points of failure in multiple layers.

[Adding now: Someone recommended, and I bought and started reading, another good privacy book called The Art of Invisibility by Kevin Mitnick. Mitnick is a famous hacker. Meeuwisse is a security professional as well. The Mitnick book is much more readable for savvy Internet users, while the Meeuwisse book is a bit drier and might be more of a good introduction to the field of information security for managers.]

The root cause of the increased security risks, as I see it (as Meeuwisse helped me to see), is our tendency to trust our data to more and more centralizing organizations (like Facebook, Microsoft, and Apple). This means we trust them not only to control our data to our benefit, but also to get security right. But they can't be expected to get security right precisely because social media and cloud services depend on their ability to access our data. If you want robust security, you must demand absolute privacy. That means that only you own and control your data.

If we were the gatekeepers of our own data (if it were delivered out of our own clouds, via decentralized feeds we control, as open source software and blockchains support), then we wouldn't have nearly so many problems.

Maybe even more fundamental is that there are significant risks—personal, social, and political—to letting corporations (or governments) collectivize us. But precisely that is what has been going on over the last ten years or so.

It's time for us to work a new technological revolution and decentralize, or decollectivize, ourselves. One reason I love working for a blockchain company is that we're philosophically committed to the idea of decentralization, of personal autonomy. But it's still early days for both open source software and blockchain. Much remains to be done to make this technology usable to grandma.

While we're waiting for viable (usable) new solutions, I think the first step is to lock down your cyber-life and help create demand by just getting rid of things like Facebook. You don't have to completely unplug from everything; you don't have to be hardcore or extreme about your privacy (although I think that's a good idea). You can do what you're able to do.

I won't blame or think ill of you if you stay on Facebook. I'm just trying to explain why I'm leaving. And I guess I am encouraging you to really start boning up on digital hygiene.

Below, I'm going to link to a series of relevant blog posts that you can explore if you want to follow me out, or just to start thinking more about this stuff.

Also, I hope you'll subscribe yourself to my personal mailing list, which I'll start using more regularly tomorrow. By the way, if you might be interested in some other, more specialized list that I might start based on my interests (such as Everipedia, education, libertarianism, or whatever), please join the big list.

Also note, especially if your email is from Gmail, you will have to check your spam folder for the confirmation mail, if you want to be added. Please move any mails from me and my list out of your spam (or junk) folder into your inbox so Google learns I'm actually not a spammer. :-)


There, that's me being "terse."


How deep should one go into this privacy stuff, anyway?

Probably deeper than you thought. Here's why.

If you are convinced that privacy actually matters, and you really want to lock down your cyber-life, as I am trying to do, there are easy options, like switching to Brave (or Firefox with plugins that harden it for privacy). I've done that. Then there are more challenging but doable options, like switching your email away from Gmail. I've done that. Then there are the hardcore options, like permanently quitting Facebook. I will be doing that later this month.

And then, finally, there are some extreme, weird, bizarre, and even self-destructive options, like completely unplugging—or, less extremely, plunking down significant sums of money on privacy hardware that may or may not work—or that works, but costs a lot. As an illustrative example, we can think about the wonderfully well-meaning company Purism and its charmingly privacy-obsessed products, the Librem 13 and 15 laptops as well as the Librem 5 phone, which is due in Q3 (originally April).

I'm going to use this as an example of the hardcore level, then I'm going to go back to the more interesting broader questions. You can skip the next section if it totally bores you.

Should I take financial risks to support the cause of privacy?

If I sound a little skeptical, it's because I am. Purism is a good example because, on the one hand, it's totally devoted to privacy and 100% open source (OSS), concepts that I love. (By the way, I have absolutely no relationship with them. I haven't even purchased one of their products yet.) Privacy and open source go together like hand in glove, because developers of OSS avoid adding privacy-violating features; OSS developers tend to be privacy fiends, not least because free software projects offer few incentives to sell your data, while having many incentives to keep it secure. But, as much as I love open source software (like Linux, Ubuntu, Apache, and LibreOffice, to take a few examples) and open content (like Wikipedia and Everipedia), not to mention the promise of open hardware, the quality of such open and free projects can be uneven.

The well-known lack of polish on OSS is mainly because whether a coding or editorial problem is fixed depends on self-directed volunteers. It often helps when a for-profit enterprise gets involved to push things forward decisively (like Everipedia redesigning wiki software and putting Wikipedia's content on the blockchain). Similarly, to be sure, we wouldn't have a prayer of seeing a mass-produced Linux phone without companies like Purism. The company behind Ubuntu, Canonical, tried and failed to make an Ubuntu phone. If they had succeeded, I might own one now.

So there is an interesting dilemma here, I think. On the one hand, I want to support companies like Purism, because they're doing really important work. The world desperately needs a choice other than Apple and Android, and not just any other choice—a choice that respects our privacy and autonomy (or, as the OSS community likes to say, our freedom). On the other hand, if you want to use a Linux phone daily for mission-critical business stuff, then the Librem 5 phone isn't quite ready for you yet.

My point here isn't about the phone (but I do hope they succeed). My point is that our world in 2019 is not made for privacy. You have to change your habits significantly, switch vendors and accounts, accept new expenses, and maybe even take some risks, if you go beyond "hardcore" levels of privacy.

Is it worth it? Maybe you think being even just "hardcore" about privacy isn't worth it. How deep should one go into this privacy stuff, anyway? In the rest of this post, I'll explore this timely issue.

The four levels

I've already written in this blog about why privacy is important. But what I haven't explored is the question of how important it is. It's very important, to be sure, but you can make changes that are more or less difficult. What level of difficulty should you accept: easy, challenging, hardcore, or extreme?

Each of these levels of difficulty, I think, naturally goes with a certain attitude toward privacy. What level are you at now? Have a look:

  1. The easy level. You want to make it a bit harder for hackers to do damage to your devices, your data, your reputation, or your credit. The idea here is that just as it would be irresponsible to leave your door unlocked if you live in a crime-ridden neighborhood, it's irresponsible to use weak passwords and other such things. You'll install a firewall (or, rather, let commercial software do this for you) and virus protection software.—If you stop there, you really don't care if corporations or the government spies on you, at the end of the day. Targeted ads might be annoying, but they're tolerable, you think, and you have nothing to hide from the government. This level is better than nothing, but it's also quite irresponsible, in my opinion. Most people are at this level (at best). The fact that this attitude is so widespread is what has allowed corporations, governments, and criminals to get their claws into us.
  2. The challenging but doable level. You understand that hackers can actually ruin your life, and, in scary, unpredictable circumstances, a rogue corporation or a government could, as well. As unlikely as this might be, we are right to take extra precautions to avoid the worst. Corporate and government intrusions into privacy royally piss you off, and you're ready to do something reasonably dramatic (such as switch away from Gmail), to send a message and make yourself feel better. But you know you'll never wholly escape the clutches of your evil corporate and government overlords. You don't like this at all, but you're "realistic"; you can't escape the system, and you're mostly resigned to it. You just want the real abusers held to account. Maybe government regulation is the solution.—This level is better than nothing. This is the level of the Establishment types who want the government to "do something" about Facebook's abuses, but are only a little bothered by the NSA. I think this level is still irresponsible. If you're ultimately OK with sending your data to Google and Facebook, and you trust the NSA, you're still one of the sheeple who are allowing them to take over the world.
  3. The hardcore level. Now things get interesting. Your eyes have been opened. You know Google and Facebook aren't going to stop. Why would they? They like being social engineers. They want to control who you vote for. They're unapologetic about inserting you and your data into a vast corporate machine. Similarly, you know that governments will collect more of your data in the future, not less, and sooner or later, some of those governments will use the data for truly scary and oppressive social control, just as China is doing. If you're at this level, it's not just because you want to protect your data from criminals. It's because you firmly believe that technology has developed especially over the last 15 years without sufficient privacy controls built in. You demand that those controls be built in now, because otherwise, huge corporations and the largest, most powerful governments in history can monitor us 24/7, wherever we are. This can't end well. We need to completely change the Internet and how it operates.—The hardcore level is not just political, it's fundamentally opposed to the systems that have developed. This is why you won't just complain about Facebook, you'll quit Facebook, because you know that if you don't, you're participating in what is, in the end, a simply evil system. In other ways, you're ready to lock down your cyber-life systematically. You know what a VPN is and you use one. You would laugh at the idea of using Dropbox. You know you'll have to work pretty hard at this. It's only a matter of how much you can accomplish.
  4. The extreme level. The hardcore level isn't hardcore enough. Of course corporations and governments are using your data to monitor and control you in a thousand big and small ways. This is one of the most important problems of our time. You will go out of your way, on principle and so that you can help advance the technology, to help lock down everybody's data. Of course you use Linux. Probably, you're a computer programmer or some other techie, so you can figure out how to make the bleeding edge privacy software and hardware work. Maybe you help develop it.—The extreme level is beyond merely political. It's not just one cause among many. You live with tech all the time and you demand that every bit of your tech respect your privacy and autonomy; that should be the default mode. You've tried and maybe use several VPNs. You run your own servers for privacy purposes. You use precious little proprietary software, which you find positively offensive. You're already doing everything you can to make that how you interact with technology.

In sum, privacy can be viewed primarily as a matter of personal safety that makes no big demands on your time, as a political side-issue that demands only a little of your time, as an important political principle that places fairly serious demands on your time, or as a political principle so important that it guides all of your technical choices.

What should be your level of privacy commitment?

Let's get clear, now. I, for example, have made quite a few changes that show something like hardcore commitment. I switched to Linux, replaced Gmail, Chrome, and Google Search, and am mostly quitting privacy-invasive social media. I even use a VPN. The reason I'm making these changes isn't that I feel personally threatened by Microsoft, Apple, Google, and Facebook. It's not about me and my data; I'm not paranoid. It's about a much bigger, systemic threat. It's a threat to all of us, because we have given so much power to corporations and governments in the form of easily collectible data that they control. It really is true that knowledge is power, and that is why these organizations are learning as much about us as they can.

There's more to it than that. If you're not willing to go beyond moderately challenging changes, you're probably saying, "But Larry, why should I be so passionate about...data? Isn't that kind of, you know, wonky and weird? Seems like a waste of time."

Look. The digital giants in both the private and public sectors are not just collecting our data. By collecting our data, they're collectivizing us. If you want to understand the problem, think about that. Maybe you hate how stuff you talked about on Facebook or Gmail, or that you searched for on Google or Amazon, suddenly seems to be reflected by weirdly appropriate ads everywhere. Advertisers and Big Tech are, naturally, trying to influence you; they're able to do so because you've agreed to give your data to companies that aggregate it and sell it to advertisers. Maybe you think Russia was able to influence U.S. elections. How would that have been possible, if a huge percentage of the American public were not part of one centralized system, Facebook? Maybe you think Facebook, YouTube, Twitter, and others are outrageously biased and are censoring people for their politics. That's possible only because we've let those companies manage our data, and we must use their proprietary protocols if we want to use it. Maybe you're concerned about China hacking and crippling U.S. computers. A big part of the problem is that good security practices have been undermined by lax privacy practices.

In every case, the problem ultimately is we don't care enough about privacy. We've been far too willing to place control of our data in the hands of the tech giants who are only too happy to take it off our hands, in exchange for "services."

Oh, we're serviced, all right.

In these and many, many more cases, the root problem is that we don't hold the keys—they do. Our obligation, therefore, is to take back the keys.

Fortunately, we are still able to. We can create demand for better systems that respect our privacy. We don't have to use Facebook, for example. We can leave en masse, creating a demand for a decentralized system where we each own and control how our data is distributed, and the terms on which we see other people's data. We don't have to leave these important decisions in the hands of creeps like Mark Zuckerberg. We can use email, mailing lists, and newer, more privacy-respecting platforms.

To take another example, we don't have to use Microsoft or Apple to run our computers. While Apple is probably better, it's still bad; it still places many important decisions in the hands of one giant, powerful company that will ultimately control (and pass along) our data under confusing terms that we must agree to if we are to use their products. Because their software is proprietary and closed-source, when we use their hardware and services we simply have to trust that our data, once submitted, will be managed to our benefit.

Instead of these top-down, controlling systems, we could be using Linux, which is much, much better than it was 15 years ago.

By the way, here's something that ought to piss you off: smart phones are the one essential 21st-century technology where you have no free, privacy-respecting option. It's Apple or Google (or Microsoft, with its moribund Windows Phone). There still isn't a Linux phone. So wish Purism luck!

We all have different political principles and priorities, of course. I personally am not sure where privacy stacks up, precisely, against the many, many other principles there are.

One thing is very clear to me: privacy is surprisingly important, and more important than most people think it is. It isn't yet another special, narrow issue like euthanasia, gun control, or the national debt. It is broader than those. Its conceptual cousins are broad principles like freedom and justice. This is because privacy touches every aspect of information. Digital information has increasingly become, in the last 30 years, the very lifeblood of so much of our modern existence: commerce, socialization, politics, education, entertainment, and more. Whoever controls these things controls the world.

That, then, is the point. We should care about privacy a lot—we should be hardcore if not extreme about it—because we care about who controls us, and we want to retain control over ourselves. If we want to remain a democracy, if we don't want society itself to become an appendage of massive corporate and government mechanisms, by far the most powerful institutions in history, then we need to start caring about privacy. That's how important it is.

Privacy doesn't mainly have to do with hiding our dirty secrets from neighbors and the law. It mainly has to do with whether we must ask anyone's permission to communicate, publish, support, oppose, purchase, compensate, save, retrieve, and more. It also has to do with whether we control the conditions under which others can access our information, including information about us. Do we dictate the terms under which others can use all this information that makes up so much of life today, or does some central authority do that for us?

Whoever controls our information controls those parts of our lives that are touched by information. The more of our information is in their hands, the more control they have over us. It's not about secrecy; it's about autonomy.


Part of a series on how I'm locking down my cyber-life.


Why I quit Quora and Medium for good

It's not a temporary rage-quit; I've deleted both accounts. I have zero followers, no content, and no username. I'm outta there.

This is going to be more interesting than it sounds, I promise.

When I first joined Quora in 2011, I loved it, with a few small reservations. Then, after some run-ins with what I regarded as unreasonable moderation, I started to dislike it; I even temporarily quit in 2015. Then the events of 2018 gave me a new perspective on social media in general. I re-evaluated Quora again, and found it wanting. So I deleted my account today, for good. All my followers and articles are gone.

I went through a similar process with Medium two weeks ago.

Why? Glad you asked.

Digital sharecropping

Until maybe 2012 or so, if you had asked me, I would have said that I was a confirmed and fairly strict open source/open content/open data guy, and that the idea of people happily developing content, without a financial or ownership stake, to benefit a for-profit enterprise had always bothered me. It bothered me in 2000 when Jimmy Wales said the job he hired me for—to start a new encyclopedia—would involve asking volunteers to develop free content hosted by a for-profit company (Bomis). I was happy when, in 2003, the Bomis principals gave Wikipedia to a non-profit.

(Ironically, not to mention stupidly, in 2011 Jimmy Wales tried to blame me for Bomis' original for-profit, ad-based business model. Unfortunately for his lie, I was able to find evidence that, in fact, it had been his idea.)

In 2006, technology journalist Nicholas Carr coined the phrase "digital sharecropping", saying that "Web 2.0,"

by putting the means of production into the hands of the masses but withholding from those same masses any ownership over the product of their work, provides an incredibly efficient mechanism to harvest the economic value of the free labor provided by the very many and concentrate it into the hands of the very few.

This bothers me. I'm a libertarian and I support capitalism, but the moral recommendability of building a business on the shoulders of well-meaning volunteers and people merely looking to socialize online struck me, as it did Carr, as very questionable. I even remember writing an old blog post (can't find it anymore) in which I argued, only half-seriously, that this practice is really indefensible, particularly if users don't have a governance stake.


The rise of social media, and joining Quora and Medium

By 2010, though I had been an active Internet user for over 15 years, my perspective had started to change. I didn't really begrudge Facebook, Twitter, or YouTube their profits anymore. The old argument that they are providing a useful service that deserves compensation—while still a bit questionable to me—made some sense. As to the rather obvious privacy worries, at that stage they were mainly just worries. Sure, I knew (as we all did) that we were trusting Facebook with relatively sensitive data. I was willing to give them the benefit of the doubt. (That sure changed.)

If you were plugged in back then, you regularly joined new communities that seemed interesting and happening. Quora was one; I joined it in 2011. It struck me as a somewhat modernized version of the old discussion communities we had in the 1990s—Usenet and mailing lists—but, in some ways, even better. There was very lightweight moderation, which actually seemed to work. A few years later I joined Medium, and as with Quora, I don't think I ever heard from their moderators in the first few years. If I did, I was willing to admit that maybe I had put a toe over the line.

Within a few days, Quora actually posted a question for me to answer: "What does Larry Sanger think about Quora?" Here is my answer in full (which I've deleted from Quora along with all my other answers):

Uhh...I didn't ask this.  It's a bit like fishing for compliments, eh Quora team? But that's OK, I am happy to compliment Quora on making a very interesting, engaging website.

Quora is pretty interesting. It appeals to me because there are a lot of people here earnestly reflecting--this I think must be partly due to good habits started by the first participants, but also because the question + multiple competing answers that mostly do not respond to each other means there is more opportunity for straightforward reflection and less for the usual bickering that happens in most Internet communities.

A long time ago (I'm sure one could find this online somewhere, if one looked hard enough) I was musing that it's odd that mailing lists are not used in more ways than they are. It seemed to me that one could use mailing list software to play all sorts of "conversation games," and I didn't know why people didn't set up different sorts of rule systems for different kinds of games.

What impresses me about Quora is that it seems to be a completely new species of conversation game.  Perhaps it's not entirely new, because it's somewhat similar to Yahoo! Answers, but there aren't as many yahoos on Quora, for whatever reason, and other differences are important.  Quora's model simply works better.  Quora users care about quality, and being deep, and Yahoo! Answerers generally do not.  I wonder why that is.

But unlike Yahoo! Answers, Quora doesn't seem to be used very much for getting factual information. Quora users are more interested in opinionizing about broad, often philosophical questions, which I find charming and refreshing. But for this reason, it's not really a competitor of Wikipedia or Yahoo! Answers (or Citizendium...). It's competing with forums.

I think it needs some more organizational tools, tools that make it less likely that good questions and answers aren't simply forgotten or lost track of. Or maybe there already are such tools and I don't know about them.

As I re-read this, some points have taken on a new meaning. I chalked up Quora's failure to provide more robust search tools to it being at a relatively early stage (it had been started two years earlier by a former Facebook CTO), and the ordinary sort of founder stubbornness, in which the founders have a vision of how a web app should work, and as a result don't give the people what they actually want. I see now that they had already started to execute a new approach to running a website that I just didn't recognize at the time. It was (and is) very deliberately heavy-handed and top-down, like Facebook. They let you see what they want you to see. They try to "tailor" the user experience. And clearly, they do this not to satisfy explicit user preferences. They don't care much about user autonomy. Their aim is apparently to keep users on the site, to keep them adding content. If you choose to join, you become a part of their well-oiled, centrally managed machine.

Quora and Medium, like Facebook, Twitter, and YouTube, make it really hard for you to use their sites on your own terms, with your own preferences. You're led by the hand and kept inside the rails. Before around 2008, nobody could imagine making a website like that. Well, they existed, but they were for children and corporations.

I could see this, of course. But all the big social media sites were the same way. I guess I tolerated what looked like an inevitable takeover of the once-decentralized Internet by a more corporate mindset. I suppose I hoped that this mindset wouldn't simply ruin things. By 2012, I was already deeply suspicious of how things were turning out.

But now it's just blindingly obvious to me that the Silicon Valley elite have ruined the Internet.

Increasingly heavy-handed and ideological "moderation"

Maybe the first or second times I heard from Quora's moderation team, I was merely annoyed, but I still respected their attempts to keep everything polite. I thought that was probably all it was. That's what moderation used to be, anyway, back when we did it in the 90s and 00s. But I noticed that Quora's moderation was done in-house. That struck me as being, well, a little funny. There was something definitely off about it. Why didn't they set some rules and set up a fair system in which the community effectively self-moderated? They obviously had decent coders and designers who could craft a good community moderation system. But they didn't...

I see now only too well that the reason was that they wanted moderation to be kept in house, and not just because it was important to get right; it was because they wanted to exert editorial control. At first, it seemed that they had business reasons for this, which I thought was OK, maybe. But as time went on and as I got more moderation notices for perfectly fair questions and polite comments, it became clear that Quora's moderation practices weren't guided merely by the desire to keep the community pleasant for a wide cross-section of contributors. They were clearly enforcing ideological conformity. This got steadily worse and worse, in my experience, until I temporarily quit Quora in 2015, and I never did contribute as much after that.

Similarly, Medium's moderators rarely if ever bothered me, until they took down a rather harsh comment I made to a pedophile who was defending pedophilia. (He was complaining about an article I wrote explaining why pedophilia is wrong. I also wrote an article about why murder is wrong.) I hadn't been sufficiently polite to the pedophile, it seems. So, with only the slenderest explanations, Medium simply removed my comment. That's what caused me to delete my Medium account.

You don't have to agree with my politics to agree that there is a problem here. My objection is not just about fairness; it's about control. It's about the audacity of a company, which is profiting from my unpaid content, also presuming to control me, and often without explaining their rather stupid decisions. It's also not about the necessity of moderation. I've been a moderator many times in the last 25 years, and frankly, Internet communities suck if they don't have some sort of moderation mechanism. But when they start moderating in what seems to be an arbitrary and ideological way, when it's done in-house in a wholly opaque way, that's just not right. Bad moderation used to kill groups. People would leave badly-moderated groups in droves.

Lack of intellectual diversity in the community

Being on the web and not artificially restricted by nationality, Quora and Medium have, of course, a global user base. But they are single communities. And they're huge; they're both among the top 250 websites. So whatever answer most users vote up (as filtered by Quora's secret and ever-changing sorting algorithm), and whoever is most popular with other Quora voters, tends to be shown higher.

Unsurprisingly—this was plainly evident back in 2011—Quora's community is left-leaning. Medium is similar. That's because, on average, intellectual Internet writers are left-leaning. I didn't really have a problem with that, and I still wouldn't, if we hadn't gotten absolutely stunning and clear evidence in 2018 that multiple large Internet corporations openly and unashamedly use their platforms to put their thumbs on the scales. They simply can't be trusted as fair, unbiased moderators, particularly when their answer-ranking algorithms and their moderation policies and practices are so opaque.

In addition, a company like Quora should notice that different cultures have totally different ways of answering life's big questions. The differences are fascinating, too. By lumping us all together, regardless of nationality, religion, politics, gender, and other features, we actually miss out on the full variety of human experience. If the Quora community's dominant views aren't congenial to you, you'll mostly find yourself out in the cold, badly represented and hard to find.

Silicon Valley, your experiment is over

Look. Quora, Medium, Facebook, Twitter, YouTube, and others have been outed as shamelessly self-dealing corporations. It's gone way beyond "digital sharecropping." The problem I and many others have with these companies isn't just that they are profiting from our unpaid contributions. It's that they have become ridiculously arrogant and think they can control and restrict our user experience, and deny us the right to speak our minds under fair, reasonable, and transparent moderation systems. And while the privacy issues that Quora or Medium have aren't as profound as Facebook's, they are there, and they come from the same controlling corporate mindset.

So that's why I've quit Quora and Medium for good. I hope that also sheds more light on why I'm leaving Facebook and changing how I use Twitter.

As if to confirm me in my decision, Quora doesn't supply any tools for exporting all your answers from the site. You have to use third-party tools (I used this). And after I deleted my account (which I did just now), I noticed that my account page and all my answers were still there. The bastards force you to accept a two-week "grace period," in case you change your mind. What if I don't want them to show my content anymore, now? Too bad. You have to let them continue to earn money from your content for two more weeks.

Clearly, they aren't serving you; you're serving them.

We've been in an experiment. Many of us were willing to let Internet communities be centralized in the hands of big Silicon Valley corporations. Maybe it'll be OK, we thought. Maybe the concentration of money and power will result in some really cool new stuff that the older, more decentralized Internet couldn't deliver. Maybe they won't mess it up, and try to exert too much control, and abuse our privacy. Sure! Maybe!

The experiment was a failure. We can't trust big companies, working for their own profit, to make good decisions for large, online communities. The entire industry has earned and richly deserves our distrust and indignation.

So, back to the drawing board. Maybe we'll do better with the next, more robustly decentralized and democratic phase of the Internet: blockchain.

We'll get this right eventually, or die trying. After all, it might take a while.


A plea for protocols

The antidote to the abuses of big tech is the very thing that gave birth to the Internet itself: decentralized, neutral technical protocols.

  1. The thought that inspires my work. Ever since I started work on Nupedia and then Wikipedia, a thought has always inspired me: just imagine the stunning possibilities when people come together as individuals to share their knowledge, to create something much greater than any of them could achieve individually.

  2. The sharing economy. There is a general phrase describing this sort of laudable activity: the “sharing economy.” The motivations and rewards are different when we work to benefit everyone indiscriminately. It worked well when Linux and OSS were first developed; then it worked just as well with Wikipedia.

  3. The Internet itself is an instance of the sharing economy. The Internet—its ease of communication and publishing together with its decentralized nature—is precisely what has made this possible. The Internet is a decentralized network of people working together freely, for mutual benefit.

  4. The Internet giants have abused the sharing economy. About ten years ago, this all started to change. More and more, our sharing behavior has been diverted into massive private networks, like Facebook, Twitter, and YouTube, that have exerted control and treated contributors as the product.

  5. Facebook’s contempt for our privacy. All you want to do is easily share a picture with your family. At first, we thought Facebook’s handling of our private data would just be the price we had to pay for a really powerful and useful service. But over and over, Facebook has shown utter contempt for our privacy, and it has recently started censoring more and more groups based on their viewpoints. We don’t know where this will end.

  6. This aggression will not stand, man. We need to learn from the success of decentralized projects like Linux, open source software, Wikipedia, and the neutral technical protocols that define the Internet itself: we don’t have to subject ourselves to the tender mercies of the Internet giants.

  7. How? Just think. The Internet is made up of a network of computers that work according to communication rules that they have all agreed on. These communication rules are called protocols and standards.

  8. Protocols and standards... There are protocols and standards for transferring and displaying web pages, for email, for transferring files, and for all the many different technologies involved.

  9. ...which are neutral. These different standards are neutral. They explicitly don’t care what sort of content they carry, and they don’t benefit any person or group over another. (For a tiny concrete example of such a protocol at work, see the sketch just after this speech.)

  10. We need more knowledge-sharing protocols. So here’s the thought I want to leave you with. You evidently support knowledge sharing, since you’re giving people awards for it. Knowledge sharing is so easy online precisely because of those neutral technical protocols. So—why don’t we invent many, many more neutral Internet protocols for the sharing of knowledge?

  11. Blockchain is awesome because it creates new technical protocols. Probably the biggest reason people are excited about blockchain is that it is a technology and a movement that gets rid of the need for the Internet giants. Blockchain is basically a technology that enables us to invent lots and lots of different protocols, for pretty much everything.

  12. Why not Twitter- and Facebook-like protocols? There can, and should, be a protocol for tweeting without Twitter. Why should we have to rely on one company and one website when we want to broadcast short messages to the world? That should be possible without Twitter. Similarly, when we want to share various other tidbits of personal information, we should be able to agree on a protocol to share that ourselves, under our own terms—without Facebook.

  13. Wikipedia centralizes, too. Although Wikipedia is an example of decentralized editing, it is still centralized in an important way. If you want to contribute to the world’s biggest collection of encyclopedia articles, you have no choice but to collaborate with, and negotiate with, Wikipedians. What if you can single-handedly write a better article than Wikipedia’s? Wikipedia offers you no way to get your work in front of its readers.

  14. Everipedia, an encyclopedia protocol. Again, there should be a neutral encyclopedia protocol, one that allows us to add encyclopedia articles to a shared database that its creators own and develop, just like the Internet itself. That’s why I’m working on Everipedia, which is building a blockchain encyclopedia.

This is a little speech I gave to the Rotary Club of Pasadena, in the beautiful Pasadena University Club, January 31, 2019.
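
To make the speech's point about neutral protocols a bit more concrete, here is a minimal sketch (my own addition, not part of the speech, and only an illustration) of two computers following one such agreed-upon rulebook, HTTP. The hostname "example.com" is just a placeholder; the point is that the rules themselves don't care who is talking or what the page says.

    # A tiny demonstration of a neutral protocol: speaking raw HTTP over a socket.
    # The protocol is nothing but agreed-upon communication rules; it doesn't care
    # who you are or what content it carries. "example.com" is a placeholder host.
    import socket

    HOST = "example.com"
    PORT = 80  # the standard port for plain HTTP

    # Open an ordinary TCP connection to the server.
    with socket.create_connection((HOST, PORT)) as sock:
        # Send a request formatted exactly as the HTTP/1.1 rules require.
        request = (
            f"GET / HTTP/1.1\r\n"
            f"Host: {HOST}\r\n"
            "Connection: close\r\n"
            "\r\n"
        )
        sock.sendall(request.encode("ascii"))

        # Read whatever the server sends back, formatted per the same shared rules.
        response = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            response += chunk

    # The status line and headers are the protocol at work; the body is the content
    # that the protocol neutrally carries.
    print(response.decode("utf-8", errors="replace")[:500])

Any email-like, Twitter-like, or encyclopedia-like protocol would work the same way: publish the rules, and anyone who follows them can participate, no central gatekeeper required.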


We need to pay more for journalism. A lot more.

I'm going to say a few obvious things, and then a few unobvious things, about the business model for news publishing.

Obvious thing #1: One of the most consequential facts of the Internet age is that news content has become free of charge. We all watched in morbid fascination in the 1990s and 00s when news came out from behind paywalls. What will this do to the business model? we wondered. How will news publishers survive and flourish?

Obvious thing #2: None of them flourished, and many didn't survive. One of the worst industries to get into these days is journalism. Major news organizations have never stopped hemorrhaging jobs. I feel sorry for my journalist friends, and I'm glad there are some who still have jobs. There are quite a few desperate journalists out there; I don't blame them.

Obvious thing #3: There are two main business models for news publishing: advertising and subscription. I'm not familiar with the statistics, but it seems obvious that most news that is read is supported by advertising. Note, I don't say that most money that is made, or the best news available, comes from advertising. I'm just saying that if you add up all the news pageviews supported by ads, and compare it to the news pageviews supported by subscriptions, you'd find a lot more of the former.

I'm done boring you with the obvious. Now something perhaps a little less obvious: Desperate journalists, whose jobs depend on sheer pageviews because that's how you pay the bills, are desperate to write clickbait. Standards have gone out the window because standards don't pay the bills. Objectivity and fact-checking are undervalued; speed and dramatic flair are "better" because they drive traffic and save jobs. But even this is pretty much just the conventional wisdom about what's going on in journalism. It's very sad.

As long as the business of journalism is paid for by ads, it won't be journalism.

It will be clickbait.

If you look at the line of reasoning above, however, you might notice something remarkable. At least, it struck me. It is the simple fact that the news is free of charge that led almost inevitably to a decline in standards. This lowering of standards has even affected more serious reporting that can only be found behind paywalls, in my opinion.

I remember keynoting a publishers' conference in 2007, and many people were asking: "The Internet is threatening our business models. How do we solve this problem?" I suppose they thought I'd have a bright idea because I had managed to build something interesting on a shoestring; but I didn't have any. Since then, as far as I can see, news publishing hasn't gotten any farther along. I haven't had or encountered any fantastic new ideas for getting journalists paid to do excellent work.

If you want to support real journalism, with real standards, consider subscribing to a publication that you think practices it, or comes as close to it as possible. It's on us, the public.

But that's lame. You thought I was going to stop there? If so, you don't really know me. Journalism never was very good. Standards have dropped, that's for sure; but we should look back and recognize that they never were terribly high in the first place. What we really need are journalists who recognize just how elusive the entire, nuanced truth really is. (Maybe require them to have had a few philosophy courses.) And we need publishers who demand not just good traditional journalism but neutrality, in the sense I defined in an essay ("Why Neutrality?"):

A disputed topic is treated neutrally if each viewpoint about it is not asserted but rather presented (1) as sympathetically as possible, bearing in mind that other, competing views must be represented as well, and (2) with an equitable amount of space being allotted to each, whatever that might be.

This standard, it turns out (as laid out in my paper), is pretty hard-core. But following it would solve many of the problems we've had. The extra work meeting such a high standard would cost more to produce. But I think enough people care enough about their own intellectual autonomy that they would pay a significant premium for truly neutral news reporting with unusually high standards, above and beyond the New York Times or the Wall Street Journal.

I know I would.


How to stop using social media

Updated January 28, 2019.

It's no longer a matter of whether—it's a matter of how.

It's sad, but for social media addicts, quitting seems to require a strategy. By now, some of us who have tried and failed know that it is simply unrealistic to say, "I'm going to quit social media," and then just do it. There are reasons we got into it and why it exerts its pull. We must come to grips with those reasons and see what—if anything—we can do to mitigate them.

Why we participate in social media, and why we shouldn't

We participate in social media because we love it; but we want to quit, because we also hate it.

Why we love social media

  1. Social visibility. Active users of social media want social visibility. We want to be understood. We want to be connected with others who understand us, respect us, or like us.
  2. Staying plugged in. So much of social and political life seems to have moved onto social media that we simply won't know what's going on if we quit.
  3. Political influence. Unless we have entirely given up on political participation, we want to "have a voice," to play the game of politics.
  4. Ambition and narcissism. Quite apart from 1 and 2, we are drawn particularly to platforms like Twitter and LinkedIn because we think these accounts will advance our careers. We follow and are followed by Important People, and we stay in touch with them. This is where valuable connections and deals can be made.
  5. Staying connected to family and friends. Golly, your family and friends are on Facebook. You really do have fun with them. How could you give it up, even if you wanted to? You don't want to miss out, of course.

The fear of missing out—that lies at the root of all five reasons. If you leave any of the networks, you just won't be seen. It'll be like you're invisible. If you leave Twitter, you won't really know what's going on in the world's most influential news and opinion network, and you will be leaving the field wide open to your political enemies. If you leave Twitter and LinkedIn, your career might take a blow; how could you possibly justify just giving up all those followers you worked so hard to get? And if you leave Facebook, you might be cutting yourself off from your family and friends—how could you do such a thing?

So, look. We've tasted the forbidden fruit. We surely aren't giving up the clear advantages that social media offer. That ain't gonna happen.

And yet, and yet. There are reasons we should stop participating in the current configuration of social media. I've written at some length in this blog about those reasons, as follows.

Why we hate social media:

  • We're giving up our privacy and autonomy: By leaving the management of our online social presence in the hands of giant, privacy-disrespecting corporations, our information, even our digital lives, becomes theirs to sell, manipulate, and destroy. We must trust them with the security of our data, which is thrown in with that of billions of others. We must endure the indignities of their control, and the various little ways in which we lose our autonomy because we are part of a giant, well-oiled machine that they run. This is dehumanizing.
  • We're irrationally wasting time: Like most mass-produced, mass-marketed entertainment, social media is mostly crap. Too many of us are basically addicted to it; our continued participation, at least the way we have been doing so, is simply irrational.
  • We're complicit in the dumbing-down and radicalization of society (see also 1, 2, 3, 4). Nick Carr famously said in 2008 that Google is making us stupid. Since then, social media systems have blown up and have made us even dumber. Their key features are responsible: especially the artificially shortened statements of opinion and reflection, the special actions required to write more than one paragraph, all-or-nothing "upvoting" and "downvoting," and letting posts fall into a hard-to-search memory hole.

What a horrible conundrum. On the one hand, we have terrifically compelling reasons to join and stay connected to social media. On the other hand, doing so shows contempt for our own privacy, autonomy, and rationality, and undermines the intelligence and toleration needed to make democracy work. It is as if the heavy, compelling hand of corporate-driven collectivization is pushing us toward an increasingly totalitarian society.

So what's the solution? Is there a solution?

Non-solutions

Let's talk about a few things that aren't solutions.

You can't just quit cold turkey, not without a plan. If you've been hooked and you try, you'll probably come crawling back, as I have a few times. I'm not saying nobody has ever done so; of course they have. But so many people who say they're giving up or restricting social media do end up coming back, because the draws are tremendous, and the addicts aren't getting their fix elsewhere.

You can't expect "alt-tech" to satisfy you, either. This would include things like Gab.ai instead of Twitter or Facebook, just for example; other examples would include Voat instead of Reddit, BitChute instead of YouTube, Minds instead of Facebook, and the Mastodon network instead of Twitter. For one thing, some (not all) of the alternatives have been flooded by loud, persistent racist/fascist types, or maybe they're just people paid by the tech giants to play-act such types on those platforms. More to the point, though, such sites don't scratch the itches that Facebook and Twitter scratch. At best, they can appeal to your narcissism and provide some social visibility; but this isn't enough for most people. They're not happenin' (yet); they almost certainly won't help your career.

What about blockchain solutions? I, at least, am not satisfied to wait around for awesome crypto solutions, like Steemit, to grow large enough to challenge their main competitors (Medium, in that case). I mean, I probably will join them when more influential and widely-used decentralized platforms show up. The startup I joined a year ago, Everipedia, has plans to develop a platform for hosting a decentralized competitor of Quora. That's exciting. But I want to quit these damn networks now. I don't want to wait any longer.

Even if those are non-solutions, we do, at least, have the requirements for a solution: we want to secure the advantages in the first list above, (1)-(5), without falling prey to the disadvantages in the second.

The advantages of social media—without social media?

Let's review (1)-(5). I think there may be ways to secure the advantages of privacy-stealing social media without actually using it. I would really, really appreciate any other bright ideas about how to secure these advantages, because this is where the rubber meets the road; please share in the comments below.

  1. Social visibility without social media. Social visibility is probably the easiest thing to secure online. If you just want to connect with others and feel heard, there are lots of ways you can do that. So I'm not going to worry too much about that one; I think it will probably take care of itself, if the other advantages are secured.
  2. Staying plugged in without social media. Staying plugged in, too, is very easy. You can simply consume more traditional media, for one thing. Another idea is that you could create throw-away accounts on Twitter or Facebook, for example, and follow the people you were following before. As long as you, yourself, don't actually participate, then you're still more or less as plugged-in as before. But one big disadvantage of that idea is that you might be tempted to get back in, because it's just so darned easy to interact with friends and family on Facebook, and to call out or refute the benighted on Twitter. But if you don't use Facebook at all, even to read, you can always stay in touch via email, especially if you use old-fashioned cc email groups or email lists and manage to get your friends to use them. If you get into the habit, I think they'll get into the habit, too. It is mainly just a matter of habit.
  3. Political influence without social media. Twitter plays an almost unique role in our political discourse, and there is no way to make up the influence you'd have over that community if you leave it. The question, however, is whether your participation on Twitter really does have that much influence. If it does, then you probably have other ways to get the word out. I have 3,000 followers, which, despite being a high percentile, is not especially influential. I could throw that away without much hand-wringing. After all, I could easily put in the same amount of time on my blog, or on mailing lists (i.e., listservs), or writing for publication (which I might do more of, but it's kind of a pain in the ass), and I think I might ultimately have more influence, not less. But more on this further down (there is a particular way to use Twitter that I think is OK).
  4. Ambition and narcissism without social media. I don't mean to say that narcissism is a good thing, mind you. I hope I don't care too much about securing the ability to preen more effectively in public. But I have gained a reasonable professional following on Twitter and LinkedIn, and a smaller one on Facebook (mostly because I've mostly used it for actual friends and family). When I came back from my September-October 2018 social media break, I told folks it was because of professional obligations. I thought I would Tweet less, and only about career stuff. But I wasn't serious enough. I was sucked into all the rest of it, too. I can only hope I'd be able to resist the pull this time. And I can support my "personal brand" (really, my professional brand) via my blog, writing for publication, and perhaps a mailing list; the latter sounds like a good idea (expect an Everipedia email discussion list!). Another idea is to post to Twitter only via some service, and never, ever reply on-site, but instead tell people to look for my replies on my blog.
  5. Staying connected to family and friends without social media. This also strikes me as being particularly easy. I know that my family and my real friends will be happy to write to me by email if I start writing to them, especially if I get into the habit of using email cc lists and maybe, again, mailing lists (a minimal sketch of the cc-list habit follows this list). We could also use other networks or sharing services that (say they) have more commitment to privacy and self-ownership.
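
Since I keep mentioning cc lists, here is a minimal sketch of how low-tech that habit really is, assuming nothing more than a generic SMTP email account. The addresses, server name, and password below are placeholders for illustration, not recommendations of any particular provider.

    # A minimal sketch of the old-fashioned cc-list habit: one plain email sent to a
    # fixed group of family addresses. The server, credentials, and addresses are
    # placeholders; any standard email account would do.
    import smtplib
    from email.message import EmailMessage

    FAMILY_CC_LIST = ["mom@example.com", "dad@example.com", "sibling@example.com"]

    msg = EmailMessage()
    msg["From"] = "me@example.com"
    msg["To"] = "me@example.com"            # address it to yourself...
    msg["Cc"] = ", ".join(FAMILY_CC_LIST)   # ...and cc the whole family list
    msg["Subject"] = "Weekend update"
    msg.set_content("The usual joking, self-pitying, and boastful notes go here.")

    # Email rides on neutral, universally supported protocols, so any SMTP server works.
    with smtplib.SMTP("smtp.example.com", 587) as server:
        server.starttls()                                # encrypt the connection
        server.login("me@example.com", "app-password")   # placeholder credentials
        server.send_message(msg)

The habit, not the tooling, is the hard part: keep the same list in the Cc line every time, and, as long as everyone replies-all, the conversation stays with the whole group.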

So much for the suggestions. I haven't really discussed whether they're actually feasible qua solutions, so next I'll tackle that.

Evaluating the solutions

A lot of the solutions suggested so far might sound like "rolling back" to older technologies. There's something to that; but I'll also consider some other, privacy-respecting solutions. Besides, the older technologies are still very sound, and the newer social ones that have replaced them are obviously problematic in various ways.

Consuming more traditional media

Like many, as I started spending more time on Facebook, Twitter, and YouTube, I started spending less time consuming professionally-produced content. Maybe, the suggestion goes, we should just regard this as something of a mistake. Now, don't get me wrong; I'm a crowdsourcing guy at heart and I hold no brief for the merits of traditional media, especially mainstream media. But insofar as one of the purposes of social media is to clue us in to what's going on, news reports and good blogs can be used. They probably should be, too; when I started Infobitt in 2013, one thing that really struck me was how poorly informed we would be if we just looked at the stuff that came across our social media feeds. I discovered this when I helped to prepare news summaries daily. There were a lot of important news stories that we found that were not widely discussed in social media, or even in most of the mainstream media. You'll probably be better informed if you stop using social media to keep up with the news; of course, your mileage may vary.

Going back to email, cc lists, and listservs

There are many social functions that social media can perform that email and traditional email discussion lists can't, or can't as easily. But many of these functions have turned out to be unimportant and not worth preserving.

  • Short public and semi-public back-and-forths. Facebook and Twitter both excel at a kind of communication that is pleasant and easy, usually banal, and rarely profound. If you're actively using these services and occasionally get into rapid-fire discussions about some controversial subject, ask yourself: Is anyone really improved by these exchanges? Again, they're fun. They're hard for me to resist, that's for sure. But when I take a step back and look at them, I have to admit that short messages might be good for marketing, but as a method of public discourse, they're ultimately insidious and harmful. Advantage: email.
  • Registering instant support or other reaction. If you ask me, this is one of the more obnoxious features of social media, one that addicts us but for no good reason; it merely appeals to our petty egos. There's little useful information conveyed by the fact that a tweet or a post gets a lot of likes, and this also tends to make us "play to the crowd" instead of revealing our most authentic selves. Advantage: email.
  • Memes. They're possible on email, but there's more support for them on social media. They can be funny or rhetorically effective, but they're one of the things that is making us dumber and coarsening our discourse. They're better off gone. Advantage: email.
  • Sharing multimedia. It's true that pictures and especially videos are more difficult to pull off in email and even more so on listservs. Video is neat to share with friends. If I could trust Facebook, I'd be happy to share family videos with family and close friends—I've never been foolish enough to trust them that much. And email has nothing on YouTube. That's why I actually haven't shown my extended family many pictures in the last several years; regrettably, I got out of the habit of one-on-one sharing. Other (and perhaps ultimately superior) methods of sharing multimedia socially among those we trust might be necessary. Advantage: social media.

And here are the ways in which email, email cc lists, and listservs are perfectly fine, if not superior to social media:

  • Actually communicating personal news and opinion. The main and most important thing we do with Facebook is to share news and opinion. Email is perfect for this. It's a "push" medium, in that people can't easily ignore it. But that pressures the sender to make sure the announcements really are important and aren't just cat pictures, or whatever. (Yes, I know some people love cat pictures. Mostly, though, they love sharing their own cat pictures.)
  • Long-form messages. As my friends know, I sometimes like to go on...and on...and on. This isn't a bad thing. Long-form text is a good thing, a necessary thing for actual intelligence. The ability to easily have our say at a length as great as we please means that those of us with more complex and voluminous thoughts on a subject won't feel we're doing something frowned-upon when we wax, er, eloquent.
  • Threading. Email (whether one-on-one, in small groups, or on a listserv) naturally comes in threads by subject. If you change the subject, you change the email subject line. Easy-peasy, and there was absolutely nothing wrong with it. As to side-threads, in a whole-group discussion, remember how we did this? We said, "Take it off-list, guys." Sometimes, we did. Sometimes, we recognized that it wasn't worth the bother. And a lot of times, those endless public exhibitions of rhetorical ping-pong really weren't worth the bother.

I don't mean to say that we must choose between email and social media here. I'm saying that email (and listservs) can probably be considered a sound substitute for social media. There are other possible substitutes, too.

Blogs and traditional publishing

I've created a fair bit of value, I imagine, for Quora, Medium, Facebook, and even Twitter, with various long-form posts. I know that what I've written has given them well northwards of a million impressions over the years (I think several million), free of charge. I could have put those posts on my blog, or in some cases cleaned them up a little and submitted them to professionally published websites and magazines. Why did I end up spending so much time on Quora and Medium in particular? (By the way, as of this writing, I've saved my old Medium writings and I have deleted my Medium account. I will do the same with Quora soon.)

In the case of Quora, I joined because it looked like (and was, surely, and to some extent still is) a powerful and successful engine for extracting really interesting opinion and insight from some smart people. My problem with it is the same as the problem I've had with Medium. It's a multi-part problem. First, over the years, the platforms have grown greatly, each a single enormous global community. (Federated sub-communities a la Stack Exchange would be better.) Second, partly as a result of that, they have come to be increasingly dominated by the left. As my regular readers know, I'm a libertarian and an individualist, but I find all groupthink to be a turn-off, especially when my contrarianism is no longer tolerated. Third, the left has become increasingly censorious. I've found my sometimes prickly remarks, once accepted without comment, increasingly censored by "moderators" who rarely explain their often arbitrary-seeming decisions, unlike the more honest and polite older-style listserv moderators.

While censorship is part of the problem I have with these platforms, another part is the fact that I am writing to financially benefit people who set themselves up as my digital masters. This was acceptable to me for a while, as it has been to many of us—mostly, I suppose, because I think it might have gained me a larger and more active audience. In retrospect, however, I'm not so sure. I think that if I had simply stuck with my blog and had written as much there as on Quora and Medium, I would have ultimately had a larger and higher-quality audience.

If I have an important message that I really want to get out there, then I hope I'll try to get it traditionally published more often than I have been.

Will I ditch all social networks? What about alternative social networks?

The big exception will be Twitter; more on that in the next section.

There are some social networks I won't leave. One is Stack Overflow, the question and answer site for programmers. As far as I can tell, it really does seem to respect its audience and to be well-run. I might well be inspired to check out the other Stack Exchange sites. I'll stick around on Reddit for a while, too, at least for work-related stuff. It seems relatively OK.

Messaging services are generally OK—but that, of course, is because you're not the product. I hate Facebook, so I'll stick around on Messenger only as long as my work colleagues use it. I'll tell my friends and family to start using other services, like Slack or the awesome Telegram, if they want to message me. (Of course, good old text messaging is usually my favorite for people who have my phone number, but that's for things that demand an immediate reply.)

I certainly see no reason whatsoever to leave any of the web forums that I occasionally frequent. Web forums are still robust and have few of the problems listed here. I'll consider them alongside mailing lists, though I think mailing lists are a bit better for meaningful discussions.

I might well consider some alternative networks that respect privacy and practice decentralization more (I intend to study them more; see below). One is Mastodon; another is MeWe. I have no great objection to such networks. The problem, as I said above, is that they don't scratch the itch. The root problem is that they don't have critical mass, and I can't guarantee that my friends and acquaintances will follow me there. Email is different: everyone has it, everyone uses it.

Even quit Twitter?

After much soul-searching, I decided to keep using Twitter, but only following one strict rule about how I use it: I will not post, retweet, respond to, or like anything, including my many pet topics, unless I'm promoting something I or a work colleague has written.

I'll just include a Twitter thread I posted:

https://twitter.com/lsanger/status/1089940575946723328

Do I merely want to roll back the clock?

Traditional media, email, listservs, and blogs: Are those really my answer to social media? Do I want to roll back the clock?

At this point, my honest answer is: Not really. I'm actually reluctant to leave social media, because what used to be called "Web 2.0" really does contain some useful inventions. The tweet is excellent for advertising and promotion. Multimedia sharing on YouTube, Facebook, and (if you use it much—I never did) Instagram is very convenient. The moderation engine on StackExchange sites is excellent. I might be able to get behind some variant on the general Facebook theme. I'm very sympathetic to some newer styles of social networks.

It will prove to be the downfall of all of the older, soon-to-be-dying social media giants that, at root, they chose centralization over neutral protocols. They chose to concentrate power in the hands of corporate executives and bureaucracies. That is neither needed nor welcome for purposes of connecting us online; once we knew what we wanted, Internet protocols could have been invented to deliver them to us in a decentralized way. But that would have made the platforms much less profitable. Centralization is what we got. That led directly to decisions that degraded our experience in the service of profits and political influence. The centralization of social media has proven to be a blind alley. It's time to turn around and find a new way forward.

Do I want to stick with email and the rest forever? Of course not. I've had (and often proposed) all sorts of ideas for new technologies. I think we need decentralized versions of social media, in which we participate on our own terms and enjoy the benefits of ownership. That would bring me back.

But...but...but...what about...?

We've already discussed these things, but you didn't believe me the first time. Let's review:

  • What about my followers? If you have a certain number of followers on Twitter, you will probably have a following on most other services proportionate to your Twitter percentile. If you have thousands of followers on Twitter, chances are you could start an email discussion list and, particularly if you loudly announced over a period of some weeks that you are going to leave Twitter forever and delete your account on such-and-such a date, you'll get a fair number of your followers to join you on that mailing list. You might, perhaps, get them to follow you to another social network, but this is much more of a crapshoot, as far as I'm concerned. Again, everyone has email, but almost nobody is on whatever also-ran privacy-loving social network you're considering.
  • What about missing out on all the essential controversies that are going on on Twitter? Think now. How essential are they, really? Most of those conversations are merely pleasant, and frequently insipid, crappy, or vicious. You might as well wring your hands because you'll miss out on an important article in the New York Times because you don't read it cover-to-cover, or because you don't attend every professional conference in your field, or a zillion other venues. Of course you're missing out. You can't avoid missing out on all sorts of things. Here's a liberating thought: you really aren't missing out on much that is really important, in the long run, if you leave Twitter (and Facebook). Your mileage may vary, but I'm pretty sure this is true for 95% of us. It's certainly true for me.
  • What if my family and friends stay on Facebook, and my work colleagues stay on Twitter, and... And what? Finish the thought. You can't, in any way that should give you pause. Share a picture? Look, you can and should start sharing pictures and videos privately. There are lots of ways (even fairly simple, automatic, and secure ways) to do that. Learn the latest gossip? Well, use email. Anyone close enough to have gossip you have any business caring about will be happy to chat one-on-one with you (and maybe with an ad hoc group of your close friends) if you start it up and keep up the habit. And what about when someone says something that is outrageously false and cannot stand? Well, of course you know that's just silly. There are people saying stupid things all around the Internet. Sorry, but you have no way of intervening with your righteous indignation everywhere. So, why not do it in communities that respect your privacy? Maybe ones you make yourself?
  • OK, what if they don't follow me to email or whatever? What, you're going to email your family and friends, and they know you've left Facebook, and they won't reply? Nice family and friends you have...I think mine will respond fine as long as I start the habit.

It's OK. Really. Just remember: Facebook and Twitter really, actually, sincerely do suck. You're not missing out on anything important, especially if you scratch the itches that they scratch in other ways.

So what are the next steps? Should I just, you know, delete my account?

If you, too, find yourself wanting to quit social media, maybe you'll be asking me for advice on how to do it. Well, I can't do better than tell you what my plans are. Obviously, though, your requirements are different from mine, so you should make your own damn plans.

I'm not saying I'm definitely going to do all of these, in just this order; this is more of a draft plan. The first step in every case is to figure out exactly what's going on and think it through. I'm also pretty sure that locking down my contacts is the first thing to do.

  1. Lock down my contacts. Since so much of the solution (for me) involves email, my first step will be to consolidate my email and phone contacts, putting them 100% out of the hands of Microsoft, Google, and Apple. Frankly, I've left my contacts to the tender mercies of these companies for so long that the data formats and redundancies and locations (etc.) confuse me.
  2. Email updates for family. Start interacting with my family more regularly with a cc list and texts, or maybe I'll persuade them to use Telegram. Not like formal Christmas letters; more like the usual joking, self-pitying, and boastful notes we post on Facebook.
  3. Replace Facebook and Twitter conversational patterns and groups with specific email lists or maybe forums. Create some email cc lists or listservs: for friends, for cultural/philosophical allies, about the Internet and programming, a replacement for the "Fans of Western Civilization" group I started, and no doubt a big list for all my acquaintances. Others as well. I'm going to look around and see if there aren't some improvements on the old ways of doing things available now. I might install some web forums, as I tried a year or two ago, but I doubt it. I don't think they'd get nearly as much use as email.
  4. Pull the trigger: delete my Facebook and Twitter accounts. I'll download all my data first, for posterity. I'll also give my Facebook friends my coordinates for the various lists (above) that might interest them. I'll leave my account up for a couple weeks, making regular announcements that I'm leaving and urging people to join my lists (or, if I use another technology, whatever that technology is).
  5. Move Medium, Quora, and maybe Facebook data to my blog. This could prove to be labor-intensive, but it'll eventually get done.
  6. Delete Medium (done) and Quora accounts. Won't be sorry to be gone from there. For me, anyway, this is a long-overdue move.

When is deletion day for you?

I will actually press the delete buttons on February 18, about a month from now. I'll update this blog with specifics of how I do each task, and spam my social networks with repeated invitations to join various lists, because I'm going away, permanently this time.

I'm giving myself time because I want to talk to people about what I'm doing via social media, and try to spark a mass exodus among my friends, family, and followers. And who knows? Maybe we'll get Silicon Valley to notice, and they'll start competing to make better products, ones in which we aren't the product. If not, we're sure to benefit anyway.


Why does information privacy matter, again?

It's not just because you are a criminal and the coppers might catch you. Or because you really, really hate big corporations who just want to sell you stuff more easily. Or because you're paranoid.

If that's as far as your thinking goes, when people start talking about "privacy" on the Internet, you really need to bone up on the subject.

You probably already knew that you don't have to be criminal, paranoid, or anti-capitalist to be very jealous of your Internet privacy rights. After all, plenty of law-abiding, merely sensibly cautious, capitalism-loving people are freaking out about the way FAANG (Facebook, Apple, Amazon, Netflix, Google) companies, and many more, are creepily tracking their every move. Then those same corporations are selling the information and making it available to governments (or, at least, not going out of their way to stop governments from getting it).

Are people right to freak out about these privacy violations?

Yes, they are, or so I will argue. The threats come under three heads: corporate, criminal, and government. And let's not forget that in the worst-case scenario, the three heads merge into one.

The corporate threat

Left unchecked, in ten years, some of the biggest, most influential corporations will know (or have ready access to) not just your name, email address, phone number, age, sex/gender, credit card numbers, family relationships, friends, mother's maiden name, first car, favorite food, various social media metrics, browsing history, purchase history, as well as a large collection of content authored and curated by you. That's already bad enough (for reasons I'll explain). But they might add to their dossiers on you such things as your social security number, credit score, criminal record, medical history, voting history, religion, political party, government benefits, and more.

But how? Well, you might have asked that about the first list twenty years ago. How indeed? They'll create must-have devices and services that become very popular. Everybody has to have the device, or the service. Then they'll talk a good game when it comes to your information privacy and security, but they'll get their hands on your medical history, your credit score, your government benefits--and that will be it.

Imagine, too, the possibilities that highly motivated project managers will dream up when they can mash up your growing dossier with data from facial recognition, AI/big data text analysis, and other new technologies.

In such a situation, what information isn't private?

"But I can make up my own mind about what to buy," you say.

Well. Top-flight marketing and product people are, naturally, very good at what they do. It's not an accident that, once everybody and his grandma got online, some of the wretched Mark Zuckerbergs of the world would stumble on some platform that would connect us by our personal relationships, not care one bit about privacy, and hire people who are, or who become, very, very, very good at manipulating us in all sorts of ways. They'll keep us online, give us more reasons to share more information, watch ads, and yes, buy stuff.

But corporate control of your private life is much more insidious than that.

Do you feel quite yourself when you're reading and posting on Facebook and Twitter, shopping on Amazon, watching and commenting on YouTube and Netflix, etc.? I admit it: I don't. We become more irrational when we get on these social networks. Sure, we retain our free will. We can stop ourselves (but often won't). We are the authors of what we write (as influenced by our echo chambers), which reflects our real views (maybe). We could quit (fat chance).

We have become part of a machine, run by massively powerful corporations, with their clever executives at the levers. Only part of what is so offensive about this machine is that we are influenced to buy things we don't need. What about radicalization--being influenced to believe things we haven't thought sufficiently about? What about self-censorship, because the increasingly bold and shameless social media censors (no longer mere "moderators") increasingly require ideological purity? What about the failure to consider options (for shopping, entertainment, socialization, discussion, etc.) that are outside of our preferred, addictive networks?

More importantly perhaps than any of those, what about the opportunity cost of spending our lives coordinated by these networks, with less time for offline creativity, meaningful one-on-one interaction, exercise, focused hard work, self-awareness, and self-doubt?

The machine, in short, robs us of our autonomy. As soon as we started giving up every little bit of information that makes us unique individuals, we empowered executives and technologists to collectivize us. It is not too much of a stretch to call it the beginnings of an engine of totalitarianism.

The criminal threat: privacy means security

If you've never had your credit card charged for stuff you didn't buy, your phone hacked, precious files held hostage by ransomware, your computer made inoperable by a virus, or your identity stolen, then you might not care much about criminal hackers. Several of these things have happened to me, and since I started studying programming and information security, I've become increasingly aware of just how extensive the dangers are.

Here's the relevance to privacy: keeping your information private requires keeping it secure. Privacy and security go hand in hand. If your information isn't private, that means it's not secure, i.e., anybody can easily grab it. You have to think about security if you want to think about privacy.

So, even if you (wrongheadedly) trust the Internet giants not to abuse your information or rob you of your autonomy, you should still consider that you're trusting them with your information security. If a company has your credit card information, government ID number, medical history and health data, or candid opinions, you have to ask yourself: Am I really comfortable with these companies' confident guarantees that my information won't fall into the wrong hands?

If you are, you shouldn't be. Think of all the data hacking of systems that, you might have thought, were surely hacker-proof: giant retailers like Target, internet giants like Facebook, major political parties, and heck, the NSA itself (not just the hack by Snowden).

No, your credit card info is not guaranteed safe just because the corporation storing it makes billions a year.

If you want to keep your information safe from malevolent forces, you shouldn't trust big companies. There are all sorts of ways bad actors can get hold of your information for nefarious purposes. They don't even always have to hack it. Sometimes, they can just legally buy it, a problem that legislation can make better--or worse.

The government threat

Remember when Edward Snowden revealed that the NSA has a (once) secret spy program that actually empowers it to monitor all telephone calls, emails, browser and search histories, and social media use? Remember when we all were shocked to learn that Bush and Obama, Democrats and Republicans had together created a monster of a domestic surveillance program?

I do. I think about it fairly often, although one doesn't hear about it that much, and the programs Edward Snowden uncovered, like NSA's PRISM, have not been canceled. That means (a) everything you do and access online can be put in government hands, whenever they demand it, and (b) it's no more secure than the NSA's security.

Remember when everybody left social media in droves and started locking down their Internet use, because otherwise the NSA would have easy access to their every move?

No, I don't remember that either, because it didn't happen. Nor, sadly, was there a popular revolt to get these programs repealed. I think many of us couldn't really believe it was happening; it just didn't seem real, it seemed to be about terrorists and spies and criminals, without any impact on us.

One thing that bothers me quite a bit is that pretty much the whole Democratic Party thinks Donald Trump is a crypto-Nazi and is one step from instituting fascism—but still, puzzlingly, nobody thinks to observe worriedly that he's in control of the NSA and can find trumped-up excuses to spy on us if he wishes. In other words, if Trump were a fascist and he did turn out to want to start the Fourth Reich here in the good ol' U.S. of A., it doesn't seem to bother many Democrats that Trump holds handy tools to do just that.

Meanwhile, Republicans often think the Democratic Party is beholden to social justice warriors who want to institute socialism, thought policing, censorship, and general totalitarianism. You know--fascism. But they, too, seem strangely uninterested in dismantling government programs that systematically monitor everyone.

Both sides think the other side is just desperate to lord it over us, the innocent, good salt of the earth. But nobody seems to care that the very tools that make a police state worse than 1984 possible are already in place. And they're only too happy to keep building and rewarding a corporate system that feeds directly into the NSA.

Government surveillance isn't that bad! Fascism will never happen here! We can keep putting our entire lives in the hands of giant corporations! So say the people whose direst fear is that the other side will consolidate even more power and start executing their secret desires to institute fascist control.

What to do

But it can happen here. That's why we need to start demanding more privacy from government.

If you're really worried about fascism, then let's defang the monster. Complain more about government programs that systematically violate your privacy rights. After all, knowledge is power, so the NSA's PRISM program, and similar surveillance programs in other countries, are really just undemocratic power grabs. With enough of a public uproar, Democrats and Republicans really could get together over what should be a bipartisan concern: shutting down these enormously powerful, secretive government programs.

In the meantime, we need to wake up about our personal privacy.

Look--everything you do online has multiple points of insecurity. If you can see that now, then what's your response? Hope for the best? Throw your hands up in despair? Do nothing? Figure that decent people will eventually "do something" about the problem for you?

Don't count on it. If you aren't ready to start acting on your own behalf, why think your neighbor or your representative will?

Stop giving boatloads of information to giant corporations, especially ones who think you are the product, and contribute to the market for genuinely privacy-respecting products and services. If you don't, you're opening up that information to hackers who will exploit those points of insecurity, and making it easier for governments everywhere to control their people.

Do your personal, familial, and civic duty and start locking down your cyber-life. I am. It'll take some time. But I think it's worth it and, soon, I'll be finished getting everything set up.

What if you and all your family and friends did this? If there were a groundswell of demand for privacy, we might create tools, practices, education, and economies that support privacy properly.

Think of it as cyber-hygiene. You need to wash your data regularly. It's time to learn. Our swinish data habits are really starting to stink the place up, and it's making the executives, criminals, and tyrants think they can rule the sty.


"OK," you say, "I'm convinced. I guess I should start caring about privacy. But really, how deep do I need to go into this privacy stuff, anyway? Well, I've answered that one, too."

Part of a series on how I'm locking down my cyber-life.


Stop giving your information away carelessly!

27 tips for improving your cyber-hygiene

Who is most responsible for your online privacy being violated?

You are.

Privacy is one of the biggest concerns in tech news recently. The importance of personal privacy is something everybody seems to be able to agree on. But if you're concerned about privacy, then you need to stop giving your information away willy-nilly. Because you probably are.

Well, maybe you are. See how many of the following best practices you already follow.

  1. Passwords. Install and learn how to use a password manager on all your devices. There are many fine ones on the market.
  2. Let your password manager generate your passwords for you. You never even need to know what your passwords are, once you've got the password manager set up.
  3. Make sure you make a secure password for the password manager!
  4. Stop letting your browser save passwords. Your password manager handles that.
  5. If you ever have reason to send a password to another person online, break it into two or more pieces, send the pieces through different media (a text, an email, whatever), and then delete those messages completely. Some password managers also help with this.
  6. Credit cards and other personal info. Stop letting your browser save your credit cards. Your password manager handles that.
  7. Stop letting web vendors save your credit card info on their servers, unless absolutely necessary (e.g., for subscriptions). Again, your password manager handles that. Maybe you should go delete the cards you've already saved with them now. I'll wait.
  8. If you give your credit card info out online, always check that the website has the "lock" icon next to its address in the address bar. That means the site uses the HTTPS protocol (i.e., your connection to it is encrypted).
  9. Stop answering "additional security" questions with correct answers, especially correct answers that hackers might discover with research. Treat the answer fields as passwords, and record them in your password manager.
  10. Stop filling out the "optional" information on account registration forms. Give away only the required information.
  11. Americans, for chrissakes, stop giving out your Social Security number and allowing others to use it as an ID, unless it's absolutely required.
  12. Stop giving your email address out when doing face-to-face purchases. Those companies don't actually need it.
  13. Stop trusting the Internet giants with your data. Consider moving away from Gmail. Google has admitted it reads your mail—all the better to market to you, my dear. Gmail isn't all that, really.
  14. Maintain your own calendar. When meeting, let others add your name, but don't let them add your email address, if you have a choice.
  15. Maintain your own contacts. There's no need to let one of the Internet giants control them for you; it's not that hard. Once you've moved your contacts over, have the old services delete their copies.
  16. If you're an Apple person, stop using iCloud to sync your devices. Sync locally over Wi-Fi instead.
  17. Browser and search engine hygiene. Use a privacy-respecting browser, such as Brave or Firefox. (This will stop your browsing activity from being needlessly shared with Google or Microsoft.)
  18. If you must use a browser without built-in tracking protection (like Chrome), then use a tracker-blocking extension (like Privacy Badger).
  19. Use a privacy-respecting search engine, such as DuckDuckGo or Qwant. (Ditto.)
  20. Social media, if you must. On social media, start learning about the privacy settings and taking them more seriously. There are many options that allow you to lock down your data to some degree.
  21. Make posts "private" on Facebook, especially if they have any personal details. If you didn't know the difference between "private" and "public" posts, learn this. And a friend says: "Stop playing Facebook quizzes."
  22. Stop digitally labeling your photos and other social posts with time and location. Make sure that metadata is removed before you post. (If you want to mention the place or the time, putting it in the text description is better.) There's a rough sketch of how to strip this metadata right after this list.
  23. For crying out loud, stop posting totally public pictures of your vacation while you are on vacation. Those pictures are very interesting to burglars. Wait until you get home, at least.
  24. Sorry, but stop sharing pictures of your children on social. (This is just my opinion. I know you might differ. But it makes me nervous.)
  25. Consider quitting social media altogether. Their business models are extremely hostile to privacy. You (and your private info) are the product, after all.
  26. A couple of obvious(?) last items. Make sure you're using a firewall and some sort of anti-virus software.
  27. Don't be the idiot who opens email attachments from strangers.
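
A brief aside for the curious, on tip #2: when a password manager "generates" a password, it is doing something like the following. This is only a rough sketch of the idea in Python (the 24-character length and the character set are arbitrary choices of mine for illustration), not a suggestion that you roll your own tool; your password manager does this for you and stores the result.

    import secrets
    import string

    # The pool of characters to draw from: letters, digits, and punctuation.
    ALPHABET = string.ascii_letters + string.digits + string.punctuation

    def generate_password(length: int = 24) -> str:
        """Build a password from cryptographically secure random choices."""
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    print(generate_password())  # a different 24-character password every run

The point is that every character is drawn at random from a large pool, so the result is nothing a human, or a hacker's guessing software, is likely to reproduce.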
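
And on tip #22: if you want to see concretely what "removing that metadata" can look like, here is a rough sketch using Python and the Pillow imaging library; the filenames are made up for illustration. It copies only the pixels into a new file, leaving the EXIF metadata (timestamps, GPS coordinates, camera details) behind. Plenty of phone and desktop apps will do the same thing for you without any code.

    from PIL import Image  # requires the Pillow library (pip install Pillow)

    # Copy just the pixel data into a brand-new image; the EXIF metadata
    # (time, GPS location, camera model) stays behind with the original file.
    original = Image.open("vacation.jpg")
    clean = Image.new(original.mode, original.size)
    clean.putdata(list(original.getdata()))
    clean.save("vacation_clean.jpg")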

How many did you answer "I do that!" to? I scored 22, to be totally honest, but it'll be up to 27 soon. Answer below. Well, answer only if you have a high score, or if you use a pseudonym. I don't want hackers to know who they can hit up for an easy win!