Talk back: Why should we have more restrictions on "harmful" speech on social media?

Dear all,

This is a different sort of blog post.

Rather than me writing yet another essay to you, I want to open the floor to you. I want you to answer something for me. It's like the subreddit "Change My View."

This is aimed specifically at my liberal and progressive friends who are very upset at the social media giants for letting things get so out of hand. See how much of the following applies to you:

You have become increasingly aware of how awful the harassment of women and minorities by the far right has become. You are really, sincerely worried that they have elected Trump, who isn't just a crass clown (many people agree with that) but basically a proto-fascist. You are convinced that Trump must have gotten elected because of the growing popularity of right-wing extremists. They engage in hate speech. Not only is this why Trump was elected, it's why people are constantly at each other's throats today, and why there has been domestic terrorism and mass murder by the right. Therefore, all mature, intelligent observers seem to agree that we need to rein in online hate speech and harmful speech.

I've heard all of this a lot, because I've sought it out in an attempt to understand it—because it freaks me out. Here's the thing: I think it's mostly bullshit. Yes, people (of all political stripes) have gotten nastier, maybe. I didn't vote for Trump and I dislike him. But beyond that, I think the entire line above isn't just annoyingly wrong, it's downright scary. This is largely because I have always greatly valued free speech and this above-summarized mindset has put free speech (and hence other basic liberal democratic/small-r republican values) at risk.

But I'm not going to elaborate my view further now; I mention it only to explain why I want your view first. I'll save an elaboration of my view in a response to you. What I hope you'll do, if you agree with the bold bit above, is to explain your sincere, considered position. Do your best to persuade me. Then, sometime in the next week or two, I'll do my best to persuade you, incorporating all the main points in your replies (assuming I get enough replies).

So please answer: Why should we more aggressively prevent harmful or hate speech, or ban people who engage in such speech, on social media? The "why" is the thing I'm interested in. Don't answer the question, please, if you don't agree with the premise of the question.

Here are some sub-questions you might cover:

  1. Did you once care more about free speech? What has changed your mind about its relative importance?
  2. Do you agree with the claim, "Hate speech is not free speech"? Why?
  3. Exactly where did my "Free Speech Credo" go wrong?
  4. If all you want to say is that "free speech" only restricts government action, and that you don't think corporate actions can constitute censorship, then say so, but please also explain any thoughts you have about why free speech is so important.
  5. If you're American and you want Uncle Sam to restrict hate speech, why do you think the law can and should be changed now, after allowing it for so many years? (Surely you don't think Americans are more racist than they were 50 years ago.)
  6. Does it bother you that "hate speech" is very vague and that its application seems to have grown over the years?
  7. If hate speech on the big social media sites bothers you enough to want to get rid of it, what's your stance toward blogs and forums where racists (or people some would call racists) congregate?
  8. Where should it end, generally speaking? Would you want the National Review banned? Don't just say, "Don't be ridiculous." If that's ridiculous, then where do you draw the line between, for example, banning Paul Joseph Watson from Facebook and using government power to take down a conservative opinion journal?
  9. By the way, do you think it's possible for conservatives and libertarians to be decent people? Honest? Intelligent? Do you think they are all racists? Do you think that articulating all or many conservative or libertarian positions is essentially racist or harmful speech?

Basically, if enough people answer these questions (one or all), I think that'll give me an idea of how your mind actually works as you think this stuff through. This will enable me to craft the most interesting response to you. I want to understand your actual views fully—i.e., not (necessarily) some academic theory, but your real, on-the-ground, down-to-earth views that result in your political stance.


Social Media Strike!

This content is password protected. To view it please enter your password below:



FAQ about the project to decentralize social media




Declaration of Digital Independence




Some thoughts on the new Voice.com project

This evening we finally learned what the #B1June hype was all about: among other things, a new social media system called Voice.com, built by Block.one, the company behind the outrageously well-performing EOS token. (Full disclosure: Everipedia, where I am CIO, is built on EOS and is the recipient of a major investment from Block.one.)

The site isn't operational yet, and I couldn't find an app in Apple's App Store, but you can sign up for the beta on Voice.com and view a very interesting-sounding rundown of features.

In their introduction to the project this evening at a very glitzy gala event at the D.C. Armory in Washington, D.C., CEO Brendan Blumer and CTO Dan Larimer said that there were huge problems with existing social media giants. The small changes Big Social Media is likely to make won't solve the root problem: you are the product. As long as the social media giants make their business the collection and sale of data about you, you will lack control over your data and your user experience.

They also find a serious problem in fake accounts. Certainly I wonder how many of the accounts upvoting my posts on Twitter actually correspond to real people, and some responses one sees there sound mindless and robotic enough to have come from bots.

The fact that Block.one has got that much right makes me optimistic about what will be eventually released.

The coming features they advertise:

  • Voice.com will confirm that every user is a real person. I pressed Block.one engineers for information on how this would work, but they remained mum.
  • The Voice network features a new token, the Voice token (I think it's officially rendered as $VOICE). The only way to create the token is when others upvote your content. There will be no ICO or airdrop. And you can't purchase Voice tokens. That's kind of neat. No word on whether you can cash in your Voice tokens for dollars or EOS somehow. A fair bit is rather vague at this point, to be honest.
  • If you have a message you want to get out, you can spend Voice tokens that you have legitimately earned to boost it, even to the top of a queue (not sure which queue). If others agree that your post is important and upvote it, you can get your Voice back and then some. That's kind of neat.
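The token mechanics described in the bullets above (mint on upvote, stake earned tokens to boost, recoup the stake "and then some" if others upvote) can be sketched in a toy model. Everything here, from the class name to the reward amounts, is my own guesswork for illustration; Block.one has published no such implementation.

```python
# A toy model of the Voice token mechanics as described above: tokens
# are minted only by upvotes, and tokens staked to boost a post can be
# recouped with a bonus if others upvote it. All names and numbers are
# illustrative assumptions, not Block.one's actual design.

class ToyVoiceLedger:
    def __init__(self):
        self.balances = {}   # user -> token balance
        self.boosts = {}     # post_id -> (booster, amount staked)

    def upvote(self, author, reward=1):
        """Upvotes are the only way new tokens are created."""
        self.balances[author] = self.balances.get(author, 0) + reward

    def boost(self, user, post_id, amount):
        """Spend legitimately earned tokens to push a post up the queue."""
        if self.balances.get(user, 0) < amount:
            raise ValueError("cannot boost with tokens you haven't earned")
        self.balances[user] -= amount
        self.boosts[post_id] = (user, amount)

    def upvote_boosted(self, post_id, bonus=1):
        """If others agree the post matters, the booster gets the stake back and then some."""
        booster, staked = self.boosts.pop(post_id)
        self.balances[booster] = self.balances.get(booster, 0) + staked + bonus
```

Whether the real network settles accounts per upvote or per voting round is one of the many details still unannounced.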

To my mind, there are as many questions raised as answered here. Anyway, I had two thoughts I wanted to pass on to Block.one and to the Internet void.

First, getting "one person, one account" correct and operational is very important and very hard, and I'll be watching closely to see if they've done it. As I explain in a requirements paper I'm at work on, there are at least four requirements of such a system:

  1. That a person with some essential uniquely identifying information (such as, perhaps, a name, a birthplace, and an email address) actually exists.
  2. That the person thus uniquely identified is actually the owner of a certain account on the network (and thus bears that name, has that birthplace, and owns that email address).
  3. That the person is not in control of some other account. (This is particularly difficult, but it is required if it is one person, one account.)
  4. That the person remains in control (and has not passed on or lost control of the account).
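The four requirements above lend themselves to a simple checklist. Here's a sketch with an entirely hypothetical data model; the requirements paper lists criteria, not an implementation, so the field names and check functions are mine:

```python
# A sketch of the four "one person, one account" requirements as a
# verification checklist. The data model (Identity, Account) and the
# existence/ownership checks are hypothetical placeholders.

from dataclasses import dataclass

@dataclass(frozen=True)
class Identity:
    name: str
    birthplace: str
    email: str

@dataclass
class Account:
    account_id: str
    identity: Identity
    active: bool = True  # requirement 4: owner still controls it

def satisfies_opov(account, all_accounts, person_exists, owns_account):
    """Return True only if all four gold-standard requirements hold."""
    # 1. A person with this uniquely identifying information exists.
    if not person_exists(account.identity):
        return False
    # 2. That person is actually the owner of this account.
    if not owns_account(account.identity, account.account_id):
        return False
    # 3. The person controls no other account on the network.
    duplicates = [a for a in all_accounts
                  if a.identity == account.identity
                  and a.account_id != account.account_id]
    if duplicates:
        return False
    # 4. The person remains in control of the account.
    return account.active
```

The hard part, of course, is implementing `person_exists` and `owns_account` without a trusted central authority; the checklist only makes explicit what any such implementation must deliver.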

This, or something like it, I want to propose as the gold standard of online identity. I take an interest in this because we need to verify that Everipedia accounts are "one person, one vote" (OPOV) accounts for purposes of voting on encyclopedia articles.

Let's see how many of these requirements the new EOS identity protocol can satisfy.

Second, since Everipedia is built on EOS, I very much hope Voice.com ends up being fully decentralized. The first requirement of a fully decentralized system is to use the open, common standards and protocols needed to publish, share, and give all users control over their own social media experience, regardless of which app they use. But I heard nothing about open, common social media standards this evening, and while the Block.one engineers I spoke to did say they were considering adopting some such standards, it didn't sound like that would be part of the upcoming launch. I could be surprised, of course.

Another requirement is that posts from outside of the network should be readable (if a user so desires) inside Voice.com feeds. Otherwise, each social media ecosystem is its own silo—and not decentralized. I'm not sure if Voice.com is working on this.

Actually letting users export their Voice.com data very easily (i.e., with RSS-like feeds) so that their friends outside of the new social network can view their posts on other networks is another crucial requirement the new project will have to tackle, if they want me 100% on board.
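For what it's worth, the export requirement is technically modest: a network could expose each user's posts as a standard RSS 2.0 feed, which outside readers already know how to consume. A minimal sketch using only Python's standard library (the post field names are illustrative, not any network's actual API):

```python
# A minimal sketch of "RSS-like export": render a user's posts as an
# RSS 2.0 feed string. The input format (dicts with title/link/body)
# is an assumption made up for this example.

from xml.etree.ElementTree import Element, SubElement, tostring

def posts_to_rss(user, posts):
    """Render a list of {'title', 'link', 'body'} dicts as RSS 2.0 XML."""
    rss = Element("rss", version="2.0")
    channel = SubElement(rss, "channel")
    SubElement(channel, "title").text = f"{user}'s posts"
    for post in posts:
        item = SubElement(channel, "item")
        SubElement(item, "title").text = post["title"]
        SubElement(item, "link").text = post["link"]
        SubElement(item, "description").text = post["body"]
    return tostring(rss, encoding="unicode")
```

Anything a feed reader can subscribe to, a rival social network can ingest, which is exactly the point.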

Finally, lots of fine-grained control over how the user's feed works will all by itself go a long way to convincing me that a company is serious about letting users take back control. No word yet on whether this is in the works for Voice.com, although I did see a nod in that direction.

I would encourage Block.one to consider adding these features so that I can get behind them in the upcoming push for a Declaration of Digital Independence (about a month away), accompanied by a social media boycott and, eventually, mass alternative social media try-outs.

One last thing. I would like to know whether Voice.com will have an end-to-end encrypted messaging system. This isn't easy for anyone to build, but if you want to go head-to-head with the big boys and demonstrate commitment to privacy, it's a very good idea. Maybe Sense Chat can help, since they're moving to EOS. I am thinking more about the importance of this, being already very convinced of the importance of privacy; in fact, I'm increasingly hardcore about it. (I'll be very curious to read Voice.com's new privacy and community policies. Minds.com just updated theirs, y'know.)

But Block.one does seem to be on board; after all, they gave every attendee a hardware security key, something I was going to buy soon anyway. Thanks, guys!


The NAS revolution: Get your data out of the cloud

It turns out the cloud is kind of evil. We blithely put all our data online, right in the hands of giant corporations (and by extension, hackers and governments) who only too happily control, sell, datamine, steal, and spy on it. But you can take control of your data. Now. Here's how.

When most people hear "the cloud," if they have any inkling of what it means, they think of Dropbox, Google Drive, and other file storage and synchronization services of that sort. But if you're hip to the scene, "the cloud" extends to any service that manages your personal data online. The emphasis is on personal data. The cloud, rather than a device of yours, stores data like your calendar (as hosted by, say, Google Calendar) and contacts (as hosted by, say, Apple's iCloud) as well.

If you're a typical plugged-in Internet user, "the cloud" in general manages a stunning amount of your data:

  • Document storage and sync: this includes all the files you might have put in Dropbox, Google Drive, Google Documents, iCloud, Box, Amazon Drive, or Microsoft's OneDrive.
  • Email: Gmail is the 800-pound gorilla, of course.
  • Calendar: Google Calendar and iCloud storage dominate here.
  • Contacts and address books: Google, Microsoft, and iCloud.
  • Online photos: Instagram, Facebook, Google Photos, Flickr, iCloud, and Dropbox all have cloud solutions for sharing your pictures with friends and family.
  • Home video: Facebook and YouTube are probably the main ways we have of storing and sharing our videos with family and friends. There are other options, of course.
  • Movies/TV shows: If you paid for commercially-produced videos that you own the digital rights to, they're in the cloud. This is the direction Apple, Amazon, and YouTube, for example, want you to move in.
  • Notes: Your phone's note-taking app, etc.: iCloud, Evernote, OneNote. The home of your note data is in the cloud, not on your machine.
  • Password apps: Your browser's password saving + sync feature uses the cloud, as do Dashlane, LastPass, 1Password, Enpass, etc.
  • Bookmarks: Your browser (Chrome, Firefox, others) probably syncs your bookmarks for you; the bookmark data is in the cloud.
  • Chat: Yes, chat isn't just a social media type of app. It's also a cloud app for use by private consumers dealing in small groups or one-on-one. If you're like me, you have private chats not just with random strangers, but also with family and friends. Insofar as this data can be presumed to be highly private, it's also "in the cloud" and not just "online."
  • Your blog: If you used to host your own blog, but now write for Medium, Quora, Blogger, Tumblr, WordPress.com, or some other blogging platform, then your blog is now "in the cloud," hosted alongside a zillion other blogs. That goes for web hosting in general, too.
  • Code hosting platforms: If you check your code in on Github or Gitlab, or run it on Digital Ocean or Heroku, your code is in the cloud.

Look at that list, and consider: an amazing amount of our computing is out of our immediate control.

There are two perfectly good reasons for this. First, we own multiple devices and we need to share and sync data among them. We also want to be able to share data with friends and family more easily. But, because this involves networking, it is a much more technically difficult problem for programmers to solve than simply writing desktop software. Since networking and sharing are already done via the Internet, it just makes sense for sharing and syncing services to be coordinated by Internet companies.

Second, simply letting centralized corporate services handle this data coordination is terribly convenient—that's hard to deny.

The necessity of sharing our data, coupled with the undeniable convenience of the cloud, sure makes it look like the cloud is here to stay. I mean, what are you going to do, host your own calendar, home videos, and chat apps? How will you sync the data? That's a non-starter for non-technical people. Why not just let the professionals handle it?

But it so happens that, now, you can host your own stuff. How? I'll explain. But first, let's talk a bit about why you might want to host your own stuff.


We are increasingly suspicious of various cloud services, and we should be. It's not just Facebook selling your private chats with Netflix and Spotify, or Medium dictating what you can write in your blog, or Google datamining student data in the cloud—to take a few rather random examples. The events of the last couple years have brought home to many of us some truths we simply didn't want to believe.

What kind of truths?

The vast majority of the cloud services listed above are run by for-profit businesses who naturally place their profits above your interests.

Your data, for them, is an asset. Many cloud companies crucially depend on the ability to exploit data assets. They will sell your data if they can. If they can't, they'll datamine it and sell information about you.

You agreed to that.

You are, like it or not, a participant in many large, standardized systems. Therefore, even though you simply want to use a basic service, if you don't play by their rules, they can control or even block you. Moreover, you probably can't customize the service too much for your own uses. The service providers make the choices for you. You have to go with the flow.

Search and subpoena laws, censorship laws, and government regulations apply to corporations that do not apply to you, the individual. That means information you put in corporate clouds is under the watchful gaze not just of those corporations but also of governments. If you're lucky, you live in a country that respects privacy and free speech even when your data is on a corporate server. But don't count on it.

The reason so many violations of your privacy (something most of us should be a lot more hardcore about) have come to light is that so much of our data is in the cloud now, and a lot of people in business just don't care very much about your privacy. When will Google start using zero-knowledge encryption for all your data that they store? Never. They want access to your data. They need access to your data. It's their business.

Sorry, but them's the facts.

What can we possibly do? Are we at their mercy? Should we, perhaps, trust governments—who also want access to all your data, for your safety—to monitor, regulate, and improve the situation?

But you can take back your data. Now. And if this is news to you, let me admit to you that it was news to me a few months ago when I first heard about it: you can install and manage your very own personal cloud for every single one of the cloud services listed above. And it's not expensive. And it's not that hard to do.

I know it sounds bizarre. It is bizarre, but it's true.


A NAS, or network-attached storage device, was once thought of mainly as a hard drive (or several) attached to your network. But as NAS vendors began selling devices with their own operating systems and Internet connections, the term was repurposed to mean your very own turn-key server. Turn it on, put your stuff on it, and you can access your personal data from anywhere.

NASes are easy to use, but "turn-key" is not quite right. No NAS on the market, that I know of, is as easy to start using as a regular computer is. Getting one up and running takes some time; there is, as they say, a learning curve. But "turn-key" does get the flavor of the most popular NAS brands. The NAS devices for sale by Synology and QNAP especially, and others to a lesser extent, are intended to make it easy to have your own server, or your own "cloud." In fact, Western Digital (WD) sells NASes under the brand name "My Cloud" and markets them as "personal clouds." There's a bit of a challenge, but it's not that hard to set these things up (more details below).

The reason to get a NAS, for me—or to get any personal server—is to replace all the software that has moved to the cloud. In case you're skeptical, let me give you a rundown. While I'll be talking about the NAS I just installed for myself and my family, which happens to be from Synology, there's an equally well-reviewed NAS system available from QNAP, and for those who have more technical skill, Nextcloud (perhaps on a FreeNAS machine you set up) does many of the same things.

Let's just go down the list I gave above.

  • Document storage and sync. I now have an app that can sync documents on at least eight of my family's devices. I can update the document on my desktop, and if I save it in Synology's office format, I can edit it directly in the browser, with changes showing up for other users in real time, just like Google Docs. There are documents, spreadsheets, and slides. Chat with other user accounts on your NAS (for me, my family members) is available in every document. This is available everywhere, because it's truly in the cloud. It's just that it's your cloud.
  • Email: You can host your own email on a NAS, if you want to go to heroic lengths that I don't recommend. Like web hosting, this is something you probably should leave to the professionals, for now. I have a feeling this is going to change in coming years, though.
  • Calendar: There's a rather nice app for that.
  • Contacts and address books: It's not "turnkey" yet. But something is available.
  • Online photos: Synology's Moments app automatically syncs your pictures with your camera, identifies people (without sharing data with Synology), uses (stand-alone) sophisticated algorithms to put pictures into categories, etc. Again, the pictures are available for quick and easy download from anywhere, and you don't have to worry about Dropbox or Google or whatever snooping.
  • Home video: Ditto—Moments works fine for this, but so does Video Station. Easily share your home movies with grandma, right from your own machine.
  • Movies/TV shows: Rip all your DVDs and Blu-Rays, then stream them anywhere (to your phone, tablet, computer, or TV) with an interface that looks a lot like Netflix. No need to rely on Apple or Amazon to keep digital copies of your movies for you. Wouldn't you much rather own and serve your own copies? I know I would.
  • Notes: There's an app for that, both for browsers and for your phone.
  • Password apps: Use your NAS's WebDAV server to sync your password data on your own machine; WebDAV is something that Enpass, for example, supports.
  • Bookmarks: Synology and QNAP offer no solution yet, but Nextcloud (which can be run on both) does.
  • Chat: There's a pretty awesome app for that; it closely resembles Slack. There are decent clients for browser, desktop, and mobile, again just like Slack.
  • Your blog: NASes allow you to host blogs and simple websites using your choice of platforms, such as WordPress, Drupal, and Joomla. I'm not saying I recommend this, though; your machine would have to be pretty beefy to handle the traffic you want to get. Server hosting for your blog is another thing that's best left to the professionals. But it's pretty damn cool that you could use a NAS for this.
  • Code hosting platforms: Would you rather not check in your code publicly or on an external server at all? Want to keep it to yourself but continue to be able to share it with people and use Git? There's an app for that. You can also host more advanced websites with many popular programming languages (including Ruby, which I use).
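Incidentally, the reason WebDAV works so well for the password-syncing item above is that it's just HTTP with a few extra verbs (PROPFIND, MKCOL, and so on), so nearly any client can speak it. Here's a sketch of the PUT request a password app might assemble to upload an encrypted vault to a NAS; the host, path, and credentials are placeholders, and nothing is actually transmitted:

```python
# Sketch: assemble (but do not send) a WebDAV PUT request for an
# encrypted vault file. WebDAV file upload really is a plain HTTP PUT;
# the host, path, and credentials below are made-up placeholders.

import base64

def build_webdav_put(host, path, body, user, password):
    """Return the raw bytes of a WebDAV PUT request with Basic auth."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    headers = [
        f"PUT {path} HTTP/1.1",
        f"Host: {host}",
        f"Authorization: Basic {token}",
        "Content-Type: application/octet-stream",
        f"Content-Length: {len(body)}",
        "",  # blank line ends the header block
        "",
    ]
    return "\r\n".join(headers).encode() + body
```

In practice you'd let the app or an HTTP library handle this, and you'd use HTTPS rather than sending Basic auth in the clear; the point is only that there is no proprietary magic between your password app and your own server.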

A NAS (which, again, comes in many brands, not just the one I happened to buy) can do all that for you. It's pretty awesome.

But maybe this shouldn't be surprising. After all, a NAS is a fully-functional server, and web hosts now bundle all sorts of turn-key (that word again) software solutions and make them available to their clients. So if you go to GoDaddy or Inmotion Hosting or whatever, they offer all sorts of complex software available to install at the press of a button. Why not slap similar software bundles on a server and sell it to the ordinary consumer? That's what NASes do. (And again, reasonably skilled IT professionals with time on their hands can more easily than ever create their own real servers, which are typically much more powerful and cheaper than NASes. With a proprietary NAS system like Synology, you pay a lot for integrated software, ease of use, and support.) Then just think: insofar as cloud services essentially just put formerly private data online on a server someone else manages, it makes total sense that, as soon as consumer web servers became feasible, you could move your data back to a server you manage.

What do we have to thank for this? The years of fantastic labor by programmers to build and refine all the necessary software layers and scaffolding needed to create something like a "turnkey" solution to running your own server, complete with multiple, ready-made software packages—even if you are nowhere near a professional server administrator.

Put even more simply, a NAS device gives you the power to take control of your own data in your own home. It used to be that we had to rely on the Apples, Googles, and Microsofts of the world in order to connect all the devices we own together, share data with friends, and get the use of common Internet services. With the advent of increasingly easy-to-use NASes, we don't have to. We can declare our independence from Big Tech.


But, you ask, doesn't all this rather awesome software power cost a lot of money? Well, entry-level NAS devices (like this from Synology and this from QNAP) cost less than $200, plus another $80 (say) each for a couple of hard drives. I'm not saying I recommend buying a cheap machine like this, any more than I would recommend buying a cheap laptop. But that might serve your purposes just fine. The point is that these machines are basically computers, so they cost about as much as a computer. The Synology NAS and three drives I got (with space for two more drives whenever I want), together with my fancy new router and modem, cost a little more than my new laptop. (By the way, if you have the time and technical chops to be able to set up and maintain a web server with less support, it's easier than ever to do so, and for the same amount of money, you could get a machine that would be much faster and better than my NAS.)

"OK," you say, "maybe it's possible to set up. But how good could it be? I mean, you really think I'll be able to replace my family's Slack group with Synology's chat app? It must be inadequate. Or replace Google Docs with their Office app? That seems unlikely."

Before I saw the capabilities of the systems, that's what I thought, too. Then when I got my own, and started using it (several days ago), the proverbial scales fell from my eyes, and I'm a believer. This is surprisingly solid software. It might have been "bleeding edge" a few years ago, but it's excellent today. The functionality is all accessible via the browser, but there are also a few good desktop apps. It also comes with a lot of excellent iOS apps that you can use to access your NAS's functionality. So far I've installed the photo app (replaces whatever you used to upload your pix to permanent storage and gives you access to all of your pictures, not just the ones currently on your phone), the chat app, the drive app (which is a replacement for both Google Docs and Dropbox), the video app (which allows me to stream videos my boys are ripping from our DVD collection), the notes app (replaces iOS Notes), and the calendar app. So far, I don't see any advantages Slack has over the chat app (just for example). Synology's collaborative document editing app (Office, installed when you install Drive) is excellent for basic editing, and it seems to be just as good as Google Docs.

"OK," you say, "maybe it's not that expensive, and maybe it's decent quality software. But isn't this a lot of work to install?"

Less than you might think. But it depends on what you mean by "a lot." It takes a few hours, maybe, to turn the thing on, network it with your devices, and get the first services up and running. You'll probably spend more time actually picking the thing out and upgrading your Internet speed as well as modem and router (which is something you'll need to do if you have old equipment). It takes more hours (depending on how much of the functionality of the thing you use) to get the full range of functionality set up—anywhere from ten minutes to several hours, depending on the app. Getting started with Synology's chat app is dead simple, for example, but importing all your pictures might take serious time. A lot of the time I've spent so far has been in migrating data from the Internet and my desktop and backup drives to the NAS.

So, sure, it takes a reasonable time investment. But it is so worth it.

"But," you say, "I'm not a terribly technical person. I can run all the software of the sort you mention if somebody has set it up for me in the cloud, but I can't imagine running my own server."

It's not that bad. Let's just say you need to be a "power user" if you want to do it all yourself. If you have ever set up your own WordPress website, or installed Linux, or registered and pointed a domain name (without help), or done basic programming, then you're up to the task of installing one of these devices without too much help. If you're just a regular computer user, but you have never done anything like that, then installing a NAS might be a bit beyond you. You still might be able to handle it, though.

In any case, I'll bet you know someone who could install one for you if you bought them dinner, or paid them a little. It's not a huge deal. It's not like "setting up your own web server." It's more like "setting up your own home network." It's easy enough for the local geeks to handle.

If you don't have access to a geek, you can hire one. Here's a service; Amazon does it more cheaply; Best Buy would probably do it; some of these guys could do it; etc.


In short, installing and running your own server is today approximately as difficult as computer installation was in 1985, or home networking in 1995, or home theater today. (As it happens, NASes are often purchased as a component in a home theater system.)

The low price and high value of NAS devices, together with their ease of installation, makes me think they're ready to take over the world. I for one am never going back to centralized cloud corporations. I hate them (yes, even Apple), and a growing number of people share my feelings: we absolutely despise the encroachments of those corporations on our privacy and liberty.

Many of us are looking for answers. Many are already doing the sorts of things I listed back in January in "How I'm locking down my cyber-life." In their responses to me there, a few people mentioned they were using their own cloud servers. (Those mentions are what first introduced me to NASes, so please keep up the excellent blog comments!) That struck me at first as being a little too hardcore. Having actually bought and installed a NAS, though, I don't think so. Getting your first NAS is like getting your first computer back in the 80s, or your first smartphone in the 00s. You might have had to wrap your mind around it. It causes a bit of trouble. It requires some getting used to. But probably, you'll forevermore have a computer and a smart phone.

The consumer potential of NAS devices strikes me as similar. Maybe the NAS will become the sort of device that seems indispensable in 10 or 20 years. I imagine a conversation with a future child, looking back at the cloud era of 2005-2025:

Child: "How could we ever choose to just give all our data to giant corporations? It was so insecure and allowed mass surveillance by government. Were people crazy?"

Greybeard: "Sort of, but you can't really blame us. During that time, the software for NASes wasn't developed well enough yet for ordinary people to run their own servers. But once a few companies started really nailing it, everybody started buying their own NASes, because it was easy. The people who kept using Gcal, Dropbox, Google Docs, Instagram, etc.—well, if you were as old as I am, you'd know what these are—those people started looking uncool. All the cool kids were serving their data themselves."

Child: "Like everybody does now?"

Greybeard: "Yes, like everybody does now."

That could happen. But is it realistic? Time will tell. Sure, it's possible that owning your own cloud server will forever be the domain of geeks. But an industry analysis from a year ago says we're moving in that direction:

The NAS market is witnessing an accelerated growth and is projected to register robust [20%] growth over the forecast timeline [to 2024] due to the rapidly increasing applications of Big Data analytics & data mining, increasing popularity of NAS solutions in home/consumer applications, and the growing adoption of cloud-based network attached storage solutions.
Global Market Insights, May 2018


In the struggle against privacy incursions, we have tools beyond NASes, of course. In fact, I see two other, concurrent trends that will allow us to fight back. There is the growing demand to own your own data and decentralize social media. (I was writing and speaking a lot about that in the last few months, but don't think I've dropped the issue.) And there is, of course, the massive, revolutionary impact of blockchain, the essential effect of which is to disintermediate economic relationships. Being all about encryption, the blockchain world holds out the promise of a new kind of secure, private, encrypted cloud computing.

Allow me to speculate about how the Internet might work in ten or twenty years.

Many of us (I imagine someone saying, a few decades hence) have installed a NAS or, if we're geekier, have a server rack at home. Pretty much all small businesses run their own NASes as well. From these devices, we serve most of the data that was formerly held by Google, Apple, Microsoft, etc. Many of us even run our own mail servers, both because it's more secure and because the software and industry standards have improved so much that it became feasible. Our blogs are also hosted at home; the shift came with NAS tools that made it dead simple to transfer data and settings from remote servers to our local one.

Of course, some of us hit the big time with our blogs and websites. But they are still run from home. This is not something we could possibly have imagined in 2010. At that time, no one even imagined the implications of distributed computing on the blockchain, of which EOS was an early supporter. Whenever we update our NAS, it communicates with various blockchain services using zero-knowledge encryption. This shares out our data (and, when we choose, the keys to unlock it) among many other users who participate in the same system; thus our NASes are constantly working, supporting the whole tech ecosystem. We have no way of knowing which encrypted Internet services are being worked on in this decentralized cloud, which is much more of a "cloud" than the early Dropbox ever was. In any event, if a blog of ours gets a lot more traffic than our NAS can handle, then if we have turned on blockchain integration, the traffic is assembled and served using many other machines—and we, of course, have to pay more into the system or else our users will experience bad old-fashioned server lag.

In a similar way, our social media data is served, and locked down, using our own NASes. The days of Facebook selling our private, proprietary data are long over; social media companies still have dossiers on you, but they aren't as thick, and they aren't informed by any private information.

Perhaps what really got the ball rolling was Edward Snowden in 2013 and others revealing that the NSA (and other government agencies) were listening in on pretty much everything you do online. Once Facebook repeatedly made it clear that they don't care one little bit about your privacy, and people started moving their social media data to their NASes, the usual suspects in government began to complain loudly that encryption prevented them from conducting mass surveillance. They didn't put it that way, of course, but that's what they were upset about. They really didn't like it when NAS companies made easy, turnkey drive encryption standard and started pushing and teaching two-factor authentication.

In any event, now that social media content is served from our NASes—with support from blockchain networks—your feed is constructed by pulling your data from literally all over, but incredibly fast, because requests can be fulfilled from many different machines, some of which are bound to be nearby.

There was a time when IoT (the Internet of Things) was regarded as not very viable, because people didn't want to buy objects that could be used to spy on them. NASes and the blockchain, again, changed all that. When open source NAS software emerged that proved your IoT data was stored on your NAS, no more likely to leak than any of your other data, and always routed using encryption, and when it became possible to sell this data on the blockchain without compromising your personal security, the whole ecosystem took off: that's when "secure, monetizable IoT data" became a thing. Even data from your car is routed through your NAS (not through the NSA) if everything is set up properly, so that neither the NSA nor automobile manufacturers can spy on you. Of course, in an emergency, your data is sent by the fastest (if less secure) route available, but you always get a notice in that case.

In a lot of ways, the Internet is the same as it was in the 1990s and 2000s. But most websites store your information encrypted in the blockchain, and they know they have to interact via blockchain services if they want to do work on it securely—because nobody is willing, any longer, to expose their data if they don't have to.


Well, we can dream.


Zuckerberg Is Wrong: Don't Regulate Our Content

Last Sunday, Mark Zuckerberg made another Facebook strategy post. (This is his second major policy post in as many months. I responded to his March 6 missive as well.) Unsurprisingly, it was a disaster.

I want to shake him by his lapels and say, "Mark! Mark! Wrong way! Stop going that way! We don't want more snooping and regulation by giant, superpowerful organizations like yours and the U.S. government! We want less!"

He says he has spent two years focused on "issues like harmful content, elections integrity and privacy." If these have been the focuses of someone who is making motions to regulate the Internet, it's a good idea to stop and think a bit about each one. They are a mixed bag, at best.

1. Zuckerberg's concerns

Concern #1: "Harmful content"

Zuckerberg's glib gloss on "harmful content" is "terrorist propaganda, hate speech and more." Applying the modifier "harmful" to "content" is something done mainly by media regulators, giant corporations like Facebook, and the social justice left. Those of us who still care about free speech—and I think that's most of us—find the phrase not a little chilling.

Let's be reasonable, though. Sure, on the one hand, we can agree that groups using social media to organize dangerously violent terrorism, or child pornography, or other literally harmful and illegal activity, for example, should be shut down. And few people would have an issue with Facebook removing "hate speech" in the sense of the KKK, Stormfront, and other openly and viciously racist outfits. That sort of thing was routinely ousted from more polite areas of the Internet long ago, and relegated to the backwaters. That's OK with me. Reasonable and intellectually tolerant moderation is nothing new.

On the other hand, while all of that can perhaps be called "harmful content," the problem is how vague the phrase is. How far beyond such categories of more uncontroversially "harmful" content might it extend? It does a tiny bit of harm if someone tells a small lie; is that "harmful content"? Who knows? What if someone shares a conservative meme? That's sure to seem harmful to a large minority of the population. Is that a target? Why not progressive memes, then? Tech thought leaders like Kara Swisher would ban Ben Shapiro from YouTube, if she could; no doubt she finds Shapiro deeply harmful. Is he fair game? How about "hateful" atheist criticisms of Christianity—surely that's OK? But how about similarly "hateful" atheist criticisms of Islam? Is the one, but not the other, "harmful content"?

This isn't just a throwaway rhetorical point. It's deeply important to think about and get right, if we're going to use such loaded phrases as "harmful content" seriously, unironically, and especially if there is policymaking involved.

The problem is that the sorts of people who use phrases like "harmful content" constantly dodge these important questions. We can't trust them. We don't know how far they would go, if given a chance. Indeed, anyone with much experience debating can recognize instantly that the reason someone would use this sort of squishy phraseology is precisely because it is vague. Its vagueness enables the motte-and-bailey strategy: there's an easily-defended "motte" (tower keep) of literally harmful, illegal speech, on the one hand, but the partisans using this strategy really want to do their fighting in the "bailey" (courtyard) which is riskier but offers potential gains. Calling them both "harmful content" enables them to dishonestly advance repressive policies under a false cover.

"Hate speech" functions in a similar way. Here the motte is appallingly, strongly, openly bigoted speech, which virtually everyone would agree is awful. But we've heard more and more about hate speech in recent years because of the speech in the bailey that is under attack: traditional conservative and libertarian positions and speakers that enfuriate progressives. Radicals call them "racists" and their speech "hate speech," but without any substantiation.

It immediately raises a red flag when one of the most powerful men in the world blithely uses such phraseology without so much as a nod to its vagueness. Indeed, it is unacceptably vague.

Concern #2: Elections integrity

The reason we are supposed to be concerned about "elections integrity," as one has heard ad nauseam from mainstream media sources in the last couple years, is that Russia caused Trump to be elected by manipulating social media. This always struck me as being a bizarre claim. It is a widely-accepted fact that some Russians thought it was a good use of a few million dollars to inject even more noise (not all of it in Trump's favor) into the 2016 election by starting political groups and spreading political memes. I never found this particularly alarming, because I know how the Internet works: everybody is trying to persuade everybody, and a few million dollars from cash-strapped Russians is really obviously no more than shouting in the wind. What is the serious, fair-minded case that it even could have had any effect on the election? Are they so diabolically effective at propaganda that, with a small budget, they can actually throw an election one way or the other? And if so, don't you think that people with similar magically effective knowhow would be on the payroll of the two most powerful political parties in the world?

Concern #3: Privacy

As to privacy—one of my hobby horses of late—Zuckerberg's concern is mainly one of self-preservation. After all, this is the guy who admitted that he called you and me, who trusted him with so much of our personal information, "dumb f--ks" for doing so. This is a guy who has built his business by selling your privacy to the highest bidder, without proposing any new business model. (Maybe they can make enough through kickbacks from the NSA, which must appreciate how Facebook acts as an unencrypted mass surveillance arm.)

Mark Zuckerberg has absolutely no credibility on this issue, even when describing his company's own plans.

He came out last month with what he doubtless wanted to appear to be a "come-to-Jesus moment" about privacy, saying that Facebook will develop the ultimate privacy app: secret, secured private chatting! Oh, joy! Just what I was missing (um?) and always wanted! But even that little bit (which is a very little bit) was too much to hope for: he said that maybe Facebook wouldn't allow total, strong, end-to-end encryption, because that would mean they couldn't "work with law enforcement."

The fact, as we'll see, that he wants the government to set privacy rules means that he still doesn't care about your privacy, for all his protestations.

Zuckerberg's declared motives are dodgy-to-laughable. But given his recommendation—that the government start systematically regulating the Internet—you shouldn't have expected anything different.

2. Mark Zuckerberg wants the government to censor you, so he doesn't have to.

Zuckerberg wants to regulate the Internet

In his previous missive, Zuckerberg gave some lame, half-hearted ideas about what Facebook itself would do to shore up Facebook's poor reputation for information privacy and security. Not so this time. This time, he wants government to take action: "I believe we need a more active role for governments and regulators." But remember, American law strives for fairness, so these wouldn't be special regulations just for Facebook. They would be regulations for the entire Internet.

"From what I've learned," Zuckerberg declares, "I believe we need new regulation in four areas: harmful content, election integrity, privacy and data portability."

When Zuckerberg calls for regulation of the Internet, he doesn't discuss hardware—servers and routers and fiber-optic cables, etc. He means content on the Internet. When it comes to "harmful content and election integrity," he clearly means some harmful and spurious content that has appeared on, e.g., Facebook. When he talks about "privacy and data portability," he means the privacy and portability of your content.

So let's not mince words: to regulate the Internet in these four areas is tantamount to regulating content, i.e., expression of ideas. That suggests, of course, that we should be on our guard against First Amendment violations. It is one thing for Facebook to remove (just for example) videos from conservative commentators like black female Trump supporters Diamond and Silk, which Facebook moderators called "unsafe." It's quite another thing for the federal government to do such a thing.

Zuckerberg wants actual government censorship

Now, before you accuse me of misrepresenting Zuckerberg, look at what his article says. It says, "I believe we need a more active role for governments and regulators," and in "four areas" in particular. The first-listed area is "harmful content." So Zuckerberg isn't saying, here, that it is Facebook that needs to shore up its defenses against harmful content. Rather, he is saying, here, that governments and regulators need to take action on harmful content. "That means deciding what counts as terrorist propaganda, hate speech and more." And more.

He even brags that Facebook is "working with governments, including French officials, on ensuring the effectiveness of content review systems." Oh, no doubt government officials will be only too happy to "ensure" that "content review systems" are "effective."

Now, in the United States, terrorist propaganda is already arguably against the law, although some regret that free speech concerns are keeping us from going far enough. Even there, we are right to move slowly and carefully, because a too-broad definition of "terrorist propaganda" might well put principled, honest, and nonviolent left- and right-wing opinionizing in the crosshairs of politically-motivated prosecutors.

But "deciding what counts as...hate speech" is a matter for U.S. law? Perhaps Zuckerberg should have finished his degree at Harvard, because he seems not to have learned that hate speech is unregulated under U.S. law, because of a little thing called the First Amendment to the U.S. Constitution. As recently as 2017, the Supreme Court unanimously struck down a "disparagement clause" in patent law which had said that trademarks may not "disparage...or bring...into contemp[t] or disrepute" any "persons, living or dead." This is widely regarded as demonstrating that there is no hate speech exception to the First Amendment. As the opinion says,

Speech that demeans on the basis of race, ethnicity, gender, religion, age, disability, or any other similar ground is hateful; but the proudest boast of our free speech jurisprudence is that we protect the freedom to express “the thought that we hate.” 

The trouble with the phrase "hate speech" lies in both the ambiguity and the vagueness of the word "hate" itself. "Hate speech" in its core sense (this is the motte) is speech that is motivated by the speaker's own bigoted hatred, but in an ancillary sense (this is the bailey), it means speech that we hate, because in our possibly incorrect opinion we think it is motivated by bigotry (but maybe it isn't). The phrase "hate speech" is also vague and useless because hate comes in degrees, with shifting objects. If I am irritated by Albanians and very mildly diss them, am I guilty of hate speech? Maybe. Jews? Almost certainly. What about white male southerners? Well, what's the answer there? And what if I really strongly hate a group that it is popular to hate, e.g., rapists?

There's much more to be said about this phrase, but here's the point. If government and regulators took Zuckerberg's call for hate speech legislation to heart, what rules would they use? Wouldn't they, quite naturally, shift according to political and religious sentiments? Wouldn't such regulations become a dangerous political football? Would there be any way to ensure it applies fairly across groups—bearing in mind that there is also a Fourteenth Amendment that legally requires such fairness? Surely we don't want the U.S. legal system subject to the same sort of spectacle that besets Canada and the U.K., in which people are prosecuted for criticizing some groups, while very similar criticism of other, unprotected groups goes unpunished?

But precisely that is, presumably, what Zuckerberg wants to happen. He doesn't want to be responsible for shutting down the likes of Diamond and Silk, or Ben Shapiro. That, he has discovered, is an extremely unpopular move; but he's deeply concerned about hate speech; so he would much rather the government do it.

If you want to say I'm not being fair to Zuckerberg or to those who want hate speech laws in the U.S., that of course you wouldn't dream of shutting down mainstream conservatives like this, I point you back to the motte and bailey. We, staunch defenders of free speech, can't trust you. We know about motte and bailey tactics. We know that, if not you, then plenty of your left-wing allies in government and media—who knows, maybe Kara Swisher—would advocate for government shutting down Ben Shapiro. That would be a win. The strategy is clear: find the edgiest thing he has said, label it "hate speech," and use it to argue that he poses a danger to others on the platform, so he should be deplatformed. Or just make an example of a few others like him. That might be enough for the much-desired chilling effect.

Even if you were to come out with an admirably clear and limited definition of "hate speech," which does not include mainstream conservatives and which would include some "hateful," extreme left-wing speech, that wouldn't help much. If the government adopted such "reasonable" regulations, it would be cold comfort. Once the cow has left the barn, once any hate speech law is passed, it's all too easy for someone to make subtle redefinitions of key terms to allow for viewpoint censorship. Then it's only a matter of time.

It's sad that it has come to this—that one of the most powerful Americans in the world suggests that we use the awesome power of law and government to regulate speech, to shut down "hate speech," a fundamentally obscure weasel word that can, ultimately, be used to shut down any speech we dislike—which after all is why the word is used. It's sad not only that this is what he has suggested, but that I have to point it out, and that it seems transgressive to, well, defend free speech. But very well then, I'll be transgressive; I'd say that those who agree with me now have an obligation to be transgressive in just this way.

We can only hope that, with Facebook executives heading for the exits and Facebook widely criticized, Zuckerberg's entirely wrongheaded call for (more) censorship will be ignored by federal and state governments. Don't count on it, though.

But maybe, censorship should be privatized

Facebook is also, Zuckerberg says, "creating an independent body so people can appeal our decisions." This is probably a legal ploy to avoid taking responsibility for censorship decisions, which would make it possible to regulate Facebook as a publisher, not just a platform. Of course, if the DMCA were replaced by some new regulatory framework, then Facebook might not have to give up control, because under the new framework, viewpoint censorship might not make them into publishers.

Of course, whether in the hands of a super-powerful central committee such as Zuckerberg is building, a giant corporation, or the government, we can expect censorship decisions to be highly politicized, to create an elite of censors and rank-and-file thought police to keep us plebs in line. Just imagine if all of the many conservative pages and individuals temporarily blocked or permanently banned by Facebook had to satisfy some third party tribunal.

One idea is for third-party bodies [i.e., not just one for Facebook] to set standards governing the distribution of harmful content and measure companies against those standards. Regulation could set baselines for what's prohibited and require companies to build systems for keeping harmful content to a bare minimum.

Facebook already publishes transparency reports on how effectively we're removing harmful content. I believe every major Internet service should do this quarterly, because it's just as important as financial reporting. Once we understand the prevalence of harmful content, we can see which companies are improving and where we should set the baselines.

There's a word for such "third-party bodies": censors.

The wording is stunning. He's concerned about "the distribution" of content and wants companies "measured" against some "standards." He wants content he disapproves of not just blocked, but kept to a "bare minimum." He wants to be "effective" in "removing harmful content." He really wants to "understand the prevalence of harmful content."

This is not the language that someone who genuinely cares about "the freedom for people to express themselves" would use.

3. The rest of the document

I'm going to cover the rest of the document much more briefly, because it's less important.

Zuckerberg favors regulations to create "common standards for verifying political actors," i.e., if you want to engage in political activity, you'll have to register with Facebook. This is all very vague, though. What behavior, exactly, is going to be caught in the net being woven here? Zuckerberg worries that "divisive political issues" are the target of "attempted interference." Well, yes—well spotted there, political issues sure can be divisive! But it isn't their divisiveness that Facebook or other platforms should try to regulate; it is the "interference" by foreign government actors. What that means precisely, I really wonder.

Zuckerberg's third point is that we need a "globally harmonized framework" for "effective privacy and data protection." Well, that's music to my ears. But it's certainly rich, the very notion that the world's biggest violator of privacy, indeed the guy whose violations are perhaps the single biggest cause of widespread concern about privacy, wants privacy rights protected.

He wants privacy rights protected the way he wants free speech protected. I wouldn't believe him.

Zuckerberg's final point is another that you might think would make me happy: "regulation should guarantee the principle of data portability."

Well. No. Code should guarantee data portability. Regulation shouldn't guarantee any such thing. I don't trust governments, in the pockets of "experts" in the pay of giant corporations, to settle the rules according to which data is "portable." They might, just for instance, write the rules in such a way that gives governments a back door into what should be entirely private data.

Beware social media giants bearing gifts.

And portability, while nice, is not the point. Of course Zuckerberg is OK with the portability of data, i.e., allowing people to more easily move it from one vendor to another. But that's a technical detail of convenience. What matters, rather, is whether I own my data and serve it myself to my subscribers, according to rules that I and they mutually agree on.

But that is something that Zuckerberg specifically can't agree to, because he's already told you that he wants "hate speech and more" to be regulated. By the government or by third party censors.

You can't have it both ways, Zuckerberg. Which is it going to be: data ownership that protects unfettered free speech, or censorship that ultimately forbids data ownership?


How to decentralize social media—a brief sketch

The problem with social media is that it is centralized. Centralization empowers massive corporations and governments to steal our privacy and restrict our speech and autonomy.

What should exist are neutral, technical standards and protocols, like the standards and protocols for blogs, email, and the Web. Indeed, many proposed standards already do exist, but none has emerged as a common, dominant standard. Blockchain technology—the technology of decentralization—is perfect for this, but not strictly necessary. Common protocols would enable us to follow public feeds no matter where they are published. We would eventually have our pick of many different apps to view these feeds. We would choose our own terms, not Facebook's or Twitter's, for both publishing and reading.

As things are, if you want to make short public posts to the greatest number of people, you have to go to Twitter, enriching them and letting them monetize your content (and your privacy). Similarly, if you want to make it easy for friends and family to follow your more personal text and other media, you have to go to Facebook. Similarly for various other kinds of content. It just doesn't have to be that way. We could decentralize.

This is a nice dream. But how do we make it happen?

After all, the problem with replacing the giant, abusive social media companies is that you can't replace existing technology without making something so much more awesome that everyone will rush to try it. And the social media giants have zillions of the best programmers in the world. How can we, the little guys, possibly compete?

Well, I've thought of a way the open source software and blockchain communities might actually kick the legs out from under the social media giants. My proposal (briefly sketched) has five parts. The killer feature, which will bring down the giants, is (4):

  1. The open data standards. Create open data standards and protocols, or probably just adopt the best of already-existing ones, for the feeds of posts (and threads, and other data structures) that Twitter, Facebook, etc., use. I'm not the first to have thought of this; the W3C has worked on the problem. It'd be like RSS, but for various kinds of social media post types.
  2. The publishing/storage platforms. Create reliable ways for people to publish, store, and encrypt (and keep totally secret, if they want) their posts. Such platforms would allow users to control exactly who has access to what content they want to broadcast to the world, and in what form, and they would not have to ask permission from anyone and would not be censorable. (Blockchain companies using IPFS, and in particular Everipedia, could help here and show the way; but any website could publish feeds.)
  3. The feed readers. Just as the RSS standard spawned lots of "reader" and "aggregator" software, so there should be similar feed readers for the various data standards described in (1) and the publishers described in (2). While publishers might have built-in readers (as the social media giants all do), the publishing and reading feature sets need to be kept independent, if you want a completely decentralized system.
  4. The social media browser plugins. Here's the killer feature. Create at least one (could be many competing) browser plugins that enable you to (a) select feeds and then (b) display them alongside a user's Twitter, Facebook, etc., feeds. (This could be an adaptation of Greasemonkey.) In other words, once this feature were available, you could tell your friends: "I'm not on Twitter. But if you want to see my Tweet-like posts appear in your Twitter feed, then simply install this plugin and input my feed address. You'll see my posts pop up just as if they were on Twitter. But they're not! And we can do this because you can control how any website appears to you from your own browser. It's totally legal and it's actually a really good idea." In this way, while you might never look at Twitter or Facebook, you can stay in contact with your friends who are still there—but on your own terms.
  5. The social media feed exporters/APIs. Create easy-to-use software that enables people to publish their Twitter, Facebook, Mastodon, Diaspora, Gab, Minds, etc., feeds via the open data standards. The big social media companies already have APIs, and some of the smaller companies and open projects have standards, but there is no single, common open data standard that everyone uses. That needs to change. If you could publish your Twitter data in terms of such a standard, that would be awesome. Then you could tell your friends: "I'm on Twitter, but I know you're not. You don't have to miss out on my tweets. Just use a tweet reader of your choice (you know—like an old blog/RSS feed reader, but for tweets) and subscribe to my username!"
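
To make points (1) and (5) a bit more concrete, here is a minimal sketch of what an exporter might do: map a tweet-like object onto a hypothetical open post format. Every field name here is invented for illustration; a real standard (the W3C's ActivityStreams work, for instance) would define its own vocabulary, and this is not Twitter's actual API shape.

```javascript
// Sketch of point (5): convert a tweet-like object into a hypothetical
// open post format. All field names are illustrative assumptions, not
// drawn from any real standard or from Twitter's API.
function tweetToOpenPost(tweet) {
  return {
    id: "openpost:" + tweet.id_str,                      // globally unique ID
    author: { handle: tweet.user.screen_name, source: "twitter" },
    published: new Date(tweet.created_at).toISOString(), // normalized timestamp
    type: "short-post",                                  // analogous to a tweet
    content: { text: tweet.text },
  };
}

// Example: exporting one tweet-like object.
const tweet = {
  id_str: "12345",
  user: { screen_name: "alice" },
  created_at: "2019-04-01T12:00:00Z",
  text: "Decentralize everything!",
};
const post = tweetToOpenPost(tweet);
```

Once posts from any platform are normalized into one shape like this, the feed readers of point (3) only ever have to understand a single format.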

The one-two punch here is the combination of points (1) and (4): first, we get behind decentralized, common social media standards and protocols, and then we use those standards when building plugins that let our friends, who are still using Facebook and Twitter (etc.), see posts that we put on websites like Steemit, Minds, Gab, and Bitchute (not to mention coming Everipedia Network dapps).
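
The hard part of the point (4) plugin is the DOM injection, which is necessarily specific to each site's markup. But the core idea, merging posts fetched from a friend's self-hosted feed into the posts already shown in your native timeline, can be sketched in a few lines. The field names below are assumptions carried over from no particular standard:

```javascript
// Core of the hypothetical point (4) plugin: interleave externally-hosted
// posts with the native timeline's posts, newest first. Site-specific DOM
// injection is omitted; field names ("published", "author") are invented.
function mergeTimelines(nativePosts, externalPosts) {
  return nativePosts
    .concat(externalPosts)
    .sort((a, b) => new Date(b.published) - new Date(a.published));
}

const native = [
  { author: "bob", published: "2019-04-02T10:00:00Z", text: "On Twitter" },
];
const external = [
  { author: "alice", published: "2019-04-02T11:00:00Z", text: "Self-hosted" },
];
const merged = mergeTimelines(native, external);
// Alice's newer, self-hosted post now sorts ahead of Bob's native one,
// and would be rendered as if it were part of the timeline.
```

The point of keeping the merge logic this simple is that the plugin needs no server of its own: it runs entirely in your browser, which you control, which is what makes the approach both legal and censorship-resistant.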

The exciting thing about this plan is that no critical mass seems to be needed in order to get people to install the envisioned plugin. All you need is one friend whose short posts you want to see in your Twitter feed, and you might install a plugin that lets you do that. As more and more people do this, there should be a snowball effect. Thus, even a relatively small amount of adoption should create a movement toward decentralization. And then the days of centralized social media will be numbered. We'll look back on the early days of Facebook and Twitter (and YouTube!) as we now look back on the Robber Barons.

We can look at a later iteration of Everipedia itself as an example. Right now, there is one centralized encyclopedia: Wikipedia. With the Everipedia Network, there will be a protocol that will enable people from all over the web to participate in a much broader project.

I would love to see the various competitors of the social media giants settle on a common standard and otherwise join forces on these sorts of projects. If they do, it will happen, and the days of privacy-stealing, centralized, controlling, Big Brother social media will soon be behind us. We'll return to the superior and individually empowering spirit of the original Internet.

We have to do this, people. This is the future of the Internet. Even if you've given up social media, we should build this for our friends and family who are still toiling in the digital plantations.


My Facebook #DeletionDay goodbye message

Here's what I posted as my last long message to Facebook.


Folks, as previously announced, tomorrow will be my #DeletionDay for Facebook. It'll be the last day I'll post here, and I'll begin the process for the permanent removal of my account. (Among other things, I'll make a copy of my data and my friends list.) I'm sorry to those who want me to stay, but there are too many reasons to quit.

Let me explain again, more tersely, why I'm quitting.

You probably already know that I think this kind of social media, as fun as it undoubtedly can be, undermines relationships, wastes our time, and distracts us. I also agree, as one guy can be seen saying on virally-shared videos, that social media is particularly bad for kids. All I can say is, it's just sad that all that hasn't been enough for me (and most of us) to quit.

But in 2018, it became all too clear that Big Tech—which is now most definitely a thing—is cynically and strongly committed to using social media as a potent tool of political control, which it certainly is. They like having that power. For companies like Google, Facebook, and Apple, reining in wrongthink is a moral imperative. And they're doing the bidding of the Establishment when they do so. It's very scary, I think.

The only thing that gives them this awesome power over us and our free, voluntary conversations is that we have given them that power. But notice the thing that empowers them: we give them our data to manage. It's not really ours. They take it, sell it to advertisers, repackage it, and show it back to us in ways they control. And they can silence us if they like. That's because we have sold our privacy to them for convenience and fun. We're all what Nick Carr aptly called "digital sharecroppers." I now think it's a terrible deal. It's still voluntary, thank goodness; so I'm opting out.

Another thing is that I started reading a book called Cybersecurity for Beginners (no, I'm not too proud to read a book called that) by Raef Meeuwisse, after my phone (and Google account and Coinbase) were hacked. This finally opened my eyes to the very close connection between privacy and security. Meeuwisse explains that information security has become much more complex than it was in the past, what with multiple logins, multiple (interconnected) devices, multiple (interconnected) cloud services, and in short multiple potential points of failure in multiple layers.

[Adding now: Someone recommended, and I bought and started reading, another good privacy book called The Art of Invisibility by Kevin Mitnick. Mitnick is a famous hacker. Meeuwisse is a security professional as well. The Mitnick book is much more readable for savvy Internet users, while the Meeuwisse book is a bit drier and might be more of a good introduction to the field of information security for managers.]

The root cause of the increased security risks, as I see it (as Meeuwisse helped me to see), is our tendency to trust our data to more and more centralizing organizations (like Facebook, Microsoft, and Apple). This means we trust them not only to control our data to our benefit, but also to get security right. But they can't be expected to get security right precisely because social media and cloud services depend on their ability to access our data. If you want robust security, you must demand absolute privacy. That means that only you own and control your data.
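The principle that "only you own and control your data" can be made concrete: if data is encrypted on your own device before it ever reaches a cloud service, the provider can lose its copy to hackers without exposing anything. Here is a deliberately toy illustration of that idea in Python (a one-time-pad XOR, which is NOT a practical cipher; a real implementation would use a vetted library), showing that the host only ever sees ciphertext:

```python
# Toy illustration (NOT real-world cryptography) of client-side encryption:
# encrypt locally, keep the key on your device, and the cloud provider can
# neither read nor mine what it stores for you.
import secrets

def xor_bytes(data, key):
    """One-time-pad XOR: combine each data byte with a key byte."""
    return bytes(b ^ k for b, k in zip(data, key))

note = b"my private note"
key = secrets.token_bytes(len(note))    # stays on your device, never uploaded

ciphertext = xor_bytes(note, key)       # this is all the provider ever sees
recovered = xor_bytes(ciphertext, key)  # only the key-holder can decrypt

print(recovered.decode())
```

The point of the sketch is architectural, not cryptographic: when the provider holds only ciphertext, its security failures are no longer your privacy failures.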

If we were the gatekeepers of our own data (if it were delivered out of our own clouds, via decentralized feeds we control, as open source software and blockchains support), then we wouldn't have nearly so many problems.

Maybe even more fundamental is that there are significant risks—personal, social, and political—to letting corporations (or governments) collectivize us. But precisely that is what has been going on over the last ten years or so.

It's time for us to work a new technological revolution and decentralize, or decollectivize, ourselves. One reason I love working for a blockchain company is that we're philosophically committed to the idea of decentralization, of personal autonomy. But it's still early days for both open source software and blockchain. Much remains to be done to make this technology usable to grandma.

While we're waiting for viable (usable) new solutions, I think the first step is to lock down your cyber-life and help create demand by just getting rid of things like Facebook. You don't have to completely unplug from everything, and you don't have to be hardcore or extreme about your privacy (although I think that's a good idea). You can do what you can, what you're able to do.

I won't blame or think ill of you if you stay on Facebook. I'm just trying to explain why I'm leaving. And I guess I am encouraging you to really start boning up on digital hygiene.

Below, I'm going to link to a series of relevant blog posts that you can explore if you want to follow me out, or just to start thinking more about this stuff.

Also, I hope you'll subscribe yourself to my personal mailing list, which I'll start using more regularly tomorrow. By the way, if you might be interested in some other, more specialized list that I might start based on my interests (such as Everipedia, education, libertarianism, or whatever), please join the big list.

Also note, especially if you use Gmail: you will have to check your spam folder for the confirmation email if you want to be added. Please move any emails from me and my list out of your spam (or junk) folder and into your inbox so Google learns I'm actually not a spammer. :-)


There, that's me being "terse."


How deep should one go into this privacy stuff, anyway?

Probably deeper than you thought. Here's why.

If you are convinced that privacy actually matters, and you really want to lock down your cyber-life, as I am trying to do, there are easy options, like switching to Brave (or Firefox with plugins that harden it for privacy). I've done that. Then there are more challenging but doable options, like switching your email away from Gmail. I've done that. Then there are the hardcore options, like permanently quitting Facebook. I will be doing that later this month.

And then, finally, there are some extreme, weird, bizarre, and even self-destructive options, like completely unplugging—or, less extremely, plunking down significant sums of money on privacy hardware that may or may not work—or that works, but costs a lot. As an illustrative example, we can think about the wonderfully well-meaning company Purism and its charmingly privacy-obsessed products, the Librem 13 and 15 laptops as well as the Librem 5 phone, which was originally due in April and is now slated for "Q3".

I'm going to use this as an example of the hardcore level, then I'm going to go back to the more interesting broader questions. You can skip the next section if it totally bores you.

Should I take financial risks to support the cause of privacy?

If I sound a little skeptical, it's because I am. Purism is a good example because, on the one hand, it's totally devoted to privacy and 100% open source (OSS), concepts that I love. (By the way, I have absolutely no relationship with them. I haven't even purchased one of their products yet.) Privacy and open source go together hand in glove: OSS developers tend to be privacy fiends who avoid adding privacy-violating features, not least because free software projects offer few incentives to sell your data, while having many incentives to keep it secure. But, as much as I love open source software (like Linux, Ubuntu, Apache, and LibreOffice, to take a few examples) and open content (like Wikipedia and Everipedia), not to mention the promise of open hardware, the quality of such open and free projects can be uneven.

The well-known lack of polish on OSS is mainly because whether a coding or editorial problem is fixed depends on self-directed volunteers. It often helps when a for-profit enterprise gets involved to push things forward decisively (like Everipedia redesigning wiki software and putting Wikipedia's content on the blockchain). Similarly, to be sure, we wouldn't have a prayer of seeing a mass-produced Linux phone without companies like Purism. The company behind Ubuntu, Canonical, tried and failed to make an Ubuntu phone. If they had succeeded, I might own one now.

So there is an interesting dilemma here, I think. On the one hand, I want to support companies like Purism, because they're doing really important work. The world desperately needs a choice other than Apple and Android, and not just any other choice—a choice that respects our privacy and autonomy (or, as the OSS community likes to say, our freedom). On the other hand, if you want to use a Linux phone daily for mission-critical business stuff, then the Librem 5 phone isn't quite ready for you yet.

My point here isn't about the phone (but I do hope they succeed). My point is that our world in 2019 is not made for privacy. You have to change your habits significantly, switch vendors and accounts, accept new expenses, and maybe even take some risks, if you go beyond "hardcore" levels of privacy.

Is it worth it? Maybe you think being even just "hardcore" about privacy isn't worth it. How deep should one go into this privacy stuff, anyway? In the rest of this post, I'll explore this timely issue.

The four levels

I've already written in this blog about why privacy is important. But what I haven't explored is the question of how important it is. It's very important, to be sure, but you can make changes that are more or less difficult. What level of difficulty should you accept: easy, challenging, hardcore, or extreme?

Each of these levels of difficulty, I think, naturally goes with a certain attitude toward privacy. What level are you at now? Have a look:

  1. The easy level. You want to make it a bit harder for hackers to do damage to your devices, your data, your reputation, or your credit. The idea here is that just as it would be irresponsible to leave your door unlocked if you live in a crime-ridden neighborhood, it's irresponsible to use weak passwords and other such things. You'll install a firewall (or, rather, let commercial software do this for you) and virus protection software.—If you stop there, you really don't care if corporations or the government spies on you, at the end of the day. Targeted ads might be annoying, but they're tolerable, you think, and you have nothing to hide from the government. This level is better than nothing, but it's also quite irresponsible, in my opinion. Most people are at this level (at best). The fact that this attitude is so widespread is what has allowed corporations, governments, and criminals to get their claws into us.
  2. The challenging but doable level. You understand that hackers can actually ruin your life, and, in scary, unpredictable circumstances, a rogue corporation or a government could, as well. As unlikely as this might be, we are right to take extra precautions to avoid the worst. Corporate and government intrusions into privacy royally piss you off, and you're ready to do something reasonably dramatic (such as switching away from Gmail) to send a message and make yourself feel better. But you know you'll never wholly escape the clutches of your evil corporate and government overlords. You don't like this at all, but you're "realistic"; you can't escape the system, and you're mostly resigned to it. You just want the real abusers held to account. Maybe government regulation is the solution.—This level is better than nothing. This is the level of the Establishment types who want the government to "do something" about Facebook's abuses, but are only a little bothered by the NSA. I think this level is still irresponsible. If you're ultimately OK with sending your data to Google and Facebook, and you trust the NSA, you're still one of the sheeple who are allowing them to take over the world.
  3. The hardcore level. Now things get interesting. Your eyes have been opened. You know Google and Facebook aren't going to stop. Why would they? They like being social engineers. They want to control who you vote for. They're unapologetic about inserting you and your data into a vast corporate machine. Similarly, you know that governments will collect more of your data in the future, not less, and sooner or later, some of those governments will use the data for truly scary and oppressive social control, just as China is doing. If you're at this level, it's not just because you want to protect your data from criminals. It's because you firmly believe that technology has developed, especially over the last 15 years, without sufficient privacy controls built in. You demand that those controls be built in now, because otherwise, huge corporations and the largest, most powerful governments in history can monitor us 24/7, wherever we are. This can't end well. We need to completely change the Internet and how it operates.—The hardcore level is not just political, it's fundamentally opposed to the systems that have developed. This is why you won't just complain about Facebook, you'll quit Facebook, because you know that if you don't, you're participating in what is, in the end, a simply evil system. In other ways, you're ready to lock down your cyber-life systematically. You know what a VPN is and you use one. You would laugh at the idea of using Dropbox. You know you'll have to work pretty hard at this. It's only a matter of how much you can accomplish.
  4. The extreme level. The hardcore level isn't hardcore enough. Of course corporations and governments are using your data to monitor and control you in a thousand big and small ways. This is one of the most important problems of our time. You will go out of your way, on principle and so that you can help advance the technology, to help lock down everybody's data. Of course you use Linux. Probably, you're a computer programmer or some other techie, so you can figure out how to make the bleeding edge privacy software and hardware work. Maybe you help develop it.—The extreme level is beyond merely political. It's not just one cause among many. You live with tech all the time and you demand that every bit of your tech respect your privacy and autonomy; that should be the default mode. You've tried and maybe use several VPNs. You run your own servers for privacy purposes. You use precious little proprietary software, which you find positively offensive. You're already doing everything you can to make that how you interact with technology.
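Even at the easy level, the first concrete step is replacing weak passwords with strong, unique ones. As a small sketch of what "strong" can mean in practice, here is a diceware-style passphrase generator using Python's cryptographically secure `secrets` module. The word list is a tiny illustrative stand-in; a real diceware list has about 7,776 words.

```python
# Diceware-style passphrase sketch: random words drawn with the
# cryptographically secure `secrets` module, not `random`.
import secrets

# Tiny illustrative word list; use a full diceware list in practice.
WORDS = ["correct", "horse", "battery", "staple", "orbit", "velvet",
         "quartz", "maple", "tundra", "ember", "cobalt", "lantern"]

def passphrase(n_words=5):
    """Join n randomly chosen words into one memorable passphrase."""
    return "-".join(secrets.choice(WORDS) for _ in range(n_words))

print(passphrase())
```

With a full-size word list, each added word multiplies the attacker's work by thousands, which is why a five-word passphrase beats a short "complex" password.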

In sum, privacy can be viewed primarily as a matter of personal safety with no big demands on your time, as a political side-issue that demands only a little of your time, as an important political principle that places fairly serious demands on your time, or as a political principle that is so important that it guides all of your technical choices.

What should be your level of privacy commitment?

Let's get clear, now. I, for example, have made quite a few changes that show something like hardcore commitment. I switched to Linux, replaced Gmail, Chrome, and Google Search, and am mostly quitting privacy-invasive social media. I even use a VPN. The reason I'm making these changes isn't that I feel personally threatened by Microsoft, Apple, Google, and Facebook. It's not about me and my data; I'm not paranoid. It's about a much bigger, systemic threat. It's a threat to all of us, because we have given so much power to corporations and governments in the form of easily collectible data that they control. It really is true that knowledge is power, and that is why these organizations are learning as much about us as they can.

There's more to it than that. If you're not willing to go beyond moderately challenging changes, you're probably saying, "But Larry, why should I be so passionate about...data? Isn't that kind of, you know, wonky and weird? Seems like a waste of time."

Look. The digital giants in both the private and public sectors are not just collecting our data. By collecting our data, they're collectivizing us. If you want to understand the problem, think about that. Maybe you hate how stuff you talked about on Facebook or Gmail, or that you searched for on Google or Amazon, suddenly seem to be reflected by weirdly appropriate ads everywhere. Advertisers and Big Tech are, naturally, trying to influence you; they're able to do so because you've agreed to give your data to companies that aggregate it and sell it to advertisers. Maybe you think Russia was able to influence U.S. elections. How would that have been possible, if a huge percentage of the American public were not part of one centralized system, Facebook? Maybe you think Facebook, YouTube, Twitter, and others are outrageously biased and are censoring people for their politics. That's possible only because we've let those companies manage our data, and we must use their proprietary protocols if we want to use it. Maybe you're concerned about China hacking and crippling U.S. computers. A big part of the problem is that good security practices have been undermined by lax privacy practices.

In every case, the problem ultimately is we don't care enough about privacy. We've been far too willing to place control of our data in the hands of the tech giants who are only too happy to take it off our hands, in exchange for "services."

Oh, we're serviced, all right.

In these and many, many more cases, the root problem is that we don't hold the keys—they do. Our obligation, therefore, is to take back the keys.

Fortunately, we are still able to. We can create demand for better systems that respect our privacy. We don't have to use Facebook, for example. We can leave en masse, creating a demand for a decentralized system where we each own and control how our data is distributed, and the terms on which we see other people's data. We don't have to leave these important decisions in the hands of creeps like Mark Zuckerberg. We can use email, mailing lists, and newer, more privacy-respecting platforms.

To take another example, we don't have to use Microsoft or Apple to run our computers. While Apple is probably better, it's still bad; it still places many important decisions in the hands of one giant, powerful company that will ultimately control (and pass along) our data under confusing terms we must agree to if we are to use their products. Because their software is proprietary and closed-source, when we use their hardware and services we simply have to trust that our data, once submitted, will be managed to our benefit.

Instead of these top-down, controlling systems, we could be using Linux, which is much, much better than it was 15 years ago.

By the way, here's something that ought to piss you off: smart phones are the one essential 21st-century technology where you have no free, privacy-respecting option. It's Apple or Google (or Microsoft, with its moribund Windows Phone). There still isn't a Linux phone. So wish Purism luck!

We all have different political principles and priorities, of course. I personally am not sure where privacy stacks up, precisely, against the many, many other principles there are.

One thing is very clear to me: privacy is surprisingly important, and more important than most people think it is. It isn't yet another special, narrow issue like euthanasia, gun control, or the national debt. It is broader than those. Its conceptual cousins are broad principles like freedom and justice. This is because privacy touches every aspect of information. Digital information has increasingly become, in the last 30 years, the very lifeblood of so much of our modern existence: commerce, socialization, politics, education, entertainment, and more. Whoever controls these things controls the world.

That, then, is the point. We should care about privacy a lot—we should be hardcore if not extreme about it—because we care about who controls us, and we want to retain control over ourselves. If you want to remain a democracy, if you don't want society itself to become an appendage of massive corporate and government mechanisms, by far the most powerful institutions in history, then you need to start caring about privacy. That's how important it is.

Privacy doesn't mainly have to do with hiding our dirty secrets from neighbors and the law. It mainly has to do with whether we must ask anyone's permission to communicate, publish, support, oppose, purchase, compensate, save, retrieve, and more. It also has to do with whether we control the conditions under which others can access our information, including information about us. Do we dictate the terms under which others can use all this information that makes up so much of life today, or does some central authority do that for us?

Whoever controls our information controls those parts of our lives that are touched by information. The more of our information is in their hands, the more control they have over us. It's not about secrecy; it's about autonomy.


Part of a series on how I'm locking down my cyber-life.