Why does information privacy matter, again?

It's not just because you are a criminal and the coppers might catch you. Or because you really, really hate big corporations who just want to sell you stuff more easily. Or because you're paranoid.

If that's as far as your thinking goes, when people start talking about "privacy" on the Internet, you really need to bone up on the subject.

You probably already knew that you don't have to be criminal, paranoid, or anti-capitalist to be very jealous of your Internet privacy rights. After all, plenty of law-abiding, merely sensibly cautious, capitalism-loving people are freaking out about the way FAANG (Facebook, Apple, Amazon, Netflix, Google) companies, and many more, are creepily tracking their every move. Then those same corporations are selling the information and making it available to governments (or, at least, not going out of their way to stop governments from getting it).

Are people right to freak out about these privacy violations?

Yes, they are, or so I will argue. The threats come under three heads: corporate, criminal, and government. And let's not forget that in the worst-case scenario, the three heads merge into one.

The corporate threat

Left unchecked, in ten years, some of the biggest, most influential corporations will know (or have ready access to) not just your name, email address, phone number, age, sex/gender, credit card numbers, family relationships, friends, mother's maiden name, first car, favorite food, various social media metrics, browsing history, purchase history, as well as a large collection of content authored and curated by you. That's already bad enough (for reasons I'll explain). But they might add to their dossiers on you such things as your social security number, credit score, criminal record, medical history, voting history, religion, political party, government benefits, and more.

But how? Well, you might have asked that about the first list twenty years ago. How indeed? They'll create must-have devices and services that become very popular. Everybody has to have the device, or the service. Then they'll talk a good game when it comes to your information privacy and security, but they'll get their hands on your medical history, your credit score, your government benefits--and that will be it.

Imagine, too, the possibilities that highly motivated project managers will dream up when they can mash up your growing dossier with data from facial recognition, AI/big data text analysis, and other new technologies.

In such a situation, what information isn't private?

"But I can make up my own mind about what to buy," you say.

Well. Top-flight marketing and product people are, naturally, very good at what they do. It was no accident that, once everybody and his grandma got online, some of the wretched Mark Zuckerbergs of the world would stumble on a platform that connects us by our personal relationships, care not one bit about privacy, and hire people who are, and become, very, very, very good at manipulating us in all sorts of ways. They keep us online, give us more reasons to share more information, watch ads, and, yes, buy stuff.

But corporate control of your private life is much more insidious than that.

Do you feel quite yourself when you're reading and posting on Facebook and Twitter, shopping on Amazon, watching and commenting on YouTube and Netflix, etc.? I admit it: I don't. We become more irrational when we get on these social networks. Sure, we retain our free will. We can stop ourselves (but often won't). We are the authors of what we write (as influenced by our echo chambers), which reflects our real views (maybe). We could quit (fat chance).

We have become part of a machine, run by massively powerful corporations, with their clever executives at the levers. Only part of what is so offensive about this machine is that we are influenced to buy things we don't need. What about radicalization--being influenced to believe things we haven't thought sufficiently about? What about self-censorship, because the increasingly bold and shameless social media censors (no longer mere "moderators") increasingly require ideological purity? What about the failure to consider options (for shopping, entertainment, socialization, discussion, etc.) that are outside of our preferred, addictive networks?

More importantly perhaps than any of those, what about the opportunity cost of spending our lives coordinated by these networks, with less time for offline creativity, meaningful one-on-one interaction, exercise, focused hard work, self-awareness, and self-doubt?

The machine, in short, robs us of our autonomy. As soon as we started giving up every little bit of information that makes us unique individuals, we empowered executives and technologists to collectivize us. It is not too much of a stretch to call it the beginnings of an engine of totalitarianism.

The criminal threat: privacy means security

If you've never had your credit card charged for stuff you didn't buy, your phone hacked, precious files held hostage by ransomware, your computer made inoperable by a virus, or your identity stolen, then you might not see what all the fuss about criminal hackers is about. Several of these things have happened to me, and since I started studying programming and information security, I've become increasingly aware of just how extensive the dangers are.

Here's the relevance to privacy: keeping your information private requires keeping it secure. Privacy and security go hand in hand. If your information isn't private, that means it's not secure, i.e., anybody can easily grab it. You have to think about security if you want to think about privacy.

So, even if you (wrongheadedly) trust the Internet giants not to abuse your information or rob you of your autonomy, you should still consider that you're trusting them with your information security. If a company has your credit card information, government ID number, medical history and health data, or candid opinions, you have to ask yourself: Am I really comfortable with these companies' confident guarantees that my information won't fall into the wrong hands?

If you are, you shouldn't be. Think of all the hacks of systems that, you might have thought, were surely hacker-proof: giant retailers like Target, internet giants like Facebook, major political parties, and, heck, the NSA itself (not just the Snowden leak).

No, your credit card info is not guaranteed safe just because the corporation holding it makes billions a year.

If you want to keep your information safe from malevolent forces, you shouldn't trust big companies. There are all sorts of ways bad actors can get hold of your information for nefarious purposes. They don't even always have to hack it. Sometimes, they can just legally buy it, a problem that legislation can make better--or worse.

The government threat

Remember when Edward Snowden revealed that the NSA has a (once) secret spy program that actually empowers it to monitor all telephone calls, emails, browser and search histories, and social media use? Remember when we all were shocked to learn that Bush and Obama, Democrats and Republicans had together created a monster of a domestic surveillance program?

I do. I think about it fairly often, although one doesn't hear about it that much, and the programs Edward Snowden uncovered, like NSA's PRISM, have not been canceled. That means (a) everything you do and access online can be put in government hands, whenever they demand it, and (b) it's no more secure than the NSA's security.

Remember when everybody left social media in droves and started locking down their Internet use, because otherwise the NSA would have easy access to their every move?

No, I don't remember that either, because it didn't happen. Nor, sadly, was there a popular revolt to get these programs repealed. I think many of us couldn't really believe it was happening; it just didn't seem real, it seemed to be about terrorists and spies and criminals, without any impact on us.

One thing that bothers me quite a bit is that, while pretty much the whole Democratic Party thinks Donald Trump is a crypto-Nazi, one step from instituting fascism, puzzlingly, nobody thinks to complain that he's in control of the NSA and can trump up excuses to spy on us if he wishes. In other words, if Trump were a fascist, and he did turn out to want to start the Fifth Reich here in the good ol' U.S. of A., it doesn't seem to bother many people that previous presidents and Congresses have given him handy tools to do just that.

Meanwhile, Republicans often think the Democratic Party is beholden to social justice warriors that want to institute socialism, thought policing, censorship, and general totalitarianism. You know--fascism. Both sides think the other side is just desperate to lord it over us, the innocent, good salt of the earth. But nobody seems to care that the very tools that make a police state worse than 1984 possible are already in place. And they're only too happy to keep building and rewarding a corporate system that feeds directly into the NSA.

It'll never happen here! We can keep putting our entire lives in the hands of giant corporations! So say the people whose direst fear is that the other side will consolidate more power and start executing their secret desires to institute fascist control.

What to do

But it can happen here. That's why we need to start demanding more privacy from government.

If you're really worried about fascism, then let's defang the monster. Complain more about government programs that systematically violate your privacy rights. After all, knowledge is power, so the NSA's PRISM program, and similar surveillance programs in other countries, are really just undemocratic power grabs. With enough of a public uproar, Democrats and Republicans really could get together over what should be a bipartisan concern: shutting down these enormously powerful, secretive government programs.

In the meantime, we need to wake up about our personal privacy.

Look--everything you do online has multiple points of insecurity. If you can see that now, then what's your response? Hope for the best? Throw your hands up in despair? Do nothing? Figure that decent people will eventually "do something" about the problem for you?

Don't count on it. If you aren't ready to start acting on your own behalf, why think your neighbor or your representative will?

Stop giving boatloads of information to giant corporations, especially ones who think you are the product, and contribute to the market for genuinely privacy-respecting products and services. If you don't, you're opening up that information to hackers who will exploit those points of insecurity, and making it easier for governments everywhere to control their people.

Do your personal, familial, and civic duty and start locking down your cyber-life. I am. It'll take some time. But I think it's worth it and, soon, I'll be finished getting everything set up.

What if you and all your family and friends did this? If there were a groundswell of demand for privacy, we might create tools, practices, education, and economies that support privacy properly.

Think of it as cyber-hygiene. You need to wash your data regularly. It's time to learn. Our swinish data habits are really starting to stink the place up, and it's making the executives, criminals, and tyrants think they can rule the sty.


Stop giving your information away carelessly!

27 tips for improving your cyber-hygiene

Who is most responsible for your online privacy being violated?

You are.

Privacy is one of the biggest concerns in tech news recently. The importance of personal privacy is something everybody seems to be able to agree on. But if you're concerned about privacy, then you need to stop giving your information away willy-nilly. Because you probably are.

Well, maybe you are. See how many of the following best practices you already follow.

  1. Passwords. Install and learn how to use a password manager on all your devices. There are many fine ones on the market.
  2. Let your password manager generate your passwords for you. You never even need to know what your passwords are, once you've got the password managers set up.
  3. Make sure you make a secure password for the password manager!
  4. Stop letting your browser save passwords. Your password manager handles that.
  5. If ever you have reason to send a password to another person online, break it into two or more pieces and send them via different media (texts, emails, whatever), then totally delete those messages. Also, some password managers help with this.
  6. Credit cards and other personal info. Stop letting your browser save your credit cards. Your password manager handles that.
  7. Stop letting web vendors save your credit card info on their servers, unless absolutely necessary (e.g., for subscriptions). Again, your password manager handles that. Maybe you should go delete them now. I'll wait.
  8. If you give your credit card info out online, always check that the website has the "lock" next to its address on the address bar. That means it uses the https protocol (i.e., uses encryption).
  9. Stop answering "additional security" questions with correct answers, especially correct answers that hackers might discover with research. Treat the answer fields as passwords, and record them in your password manager.
  10. Stop filling out the "optional" information on account registration forms. Give away only the required information.
  11. Americans, for chrissakes stop giving out your social security number and allowing others to use it as an ID, unless absolutely required.
  12. Stop giving your email address out when doing face-to-face purchases. Those companies don't actually need it.
  13. Stop trusting the Internet giants with your data. Consider moving away from Gmail. Google has admitted it reads your mail—all the better to market to you, my dear. Gmail isn't all that, really.
  14. Maintain your own calendar. When meeting, let others add your name, but don't let them add your email address, if you have a choice.
  15. Maintain your own contacts. No need to let one of the Internet giants take control of that for you. It's not that hard. Then have them delete their copies.
  16. If you're an Apple person, stop using iCloud to sync your devices. Use wi-fi instead.
  17. Browser and search engine hygiene. Use a privacy-respecting browser, such as Brave or Firefox. (This will stop your browsing activity from being needlessly shared with Google or Microsoft.)
  18. If you must use a browser without built-in tracking protection (like Chrome), then use a tracker-blocking extension (like Privacy Badger).
  19. Use a privacy-respecting search engine, such as DuckDuckGo or Qwant. (Ditto.)
  20. Social media, if you must. On social media, start learning and taking the privacy settings more seriously. There are many options that allow you to lock down your data to some degree.
  21. Make posts "private" on Facebook, especially if they have any personal details. If you didn't know the difference between "private" and "public" posts, learn this. And a friend says: "Stop playing Facebook quizzes."
  22. Stop digitally labeling your photos and other social posts with time and location. Make sure that data is removed before you post. (Putting it in the text description is better.) See the sketch just after this list for one way to strip that metadata.
  23. For crying out loud, stop posting totally public pictures of your vacation while you are on vacation. Those pictures are very interesting to burglars. Wait until you get home, at least.
  24. Sorry, but stop sharing pictures of your children on social. (This is just my opinion. I know you might differ. But it makes me nervous.)
  25. Consider quitting social media altogether. Their business models are extremely hostile to privacy. You (and your private info) are the product, after all.
  26. A couple of obvious(?) last items. Make sure you're using a firewall and some sort of anti-virus software.
  27. Don't be the idiot who opens email attachments from strangers.
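As promised in tip 22, here is a minimal sketch of one way to strip metadata from a photo before posting it, assuming Python with the Pillow imaging library installed (the file names are just placeholders):

```python
# Strip EXIF metadata (GPS coordinates, timestamps, camera info) from a photo
# before sharing it. A minimal sketch using the Pillow library (pip install Pillow).
# "vacation.jpg" and "vacation_clean.jpg" are placeholder file names.
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    with Image.open(src_path) as img:
        # Copy only the pixel data into a fresh image; the EXIF block and any
        # other metadata attached to the original file are simply not copied.
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(dst_path)

strip_metadata("vacation.jpg", "vacation_clean.jpg")
```

Most phones can also be told not to record location in photos in the first place, which is even simpler.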

How many did you answer "I do that!" to? I scored 22, to be totally honest, but it'll be up to 27 soon. Answer below. Well, answer only if you have a high score, or if you use a pseudonym. I don't want hackers to know who they can hit up for an easy win!


Kick the tech giants out of your life

If you're like me, you feel a need to kick the tech giants out of your life. But how? Well, nobody said it would be easy, but I'm actually doing it!

Stop using Google Chrome. Google is contemptuous of your privacy and of free speech. I recommend Brave.

Stop using Google Search. Google records what you search for, and it tracks you after you search. I recommend DuckDuckGo, with results just as good as Google's 90+% of the time, in my experience.

Stop using Gmail. Look. Gmail is way overrated. And there are many, many other options out there which do not read your mail and extract marketable data.

Stop using Google Contacts and iCloud. Start managing your own contacts and data. There are lots of great tools to do this; it's not that hard.

Shields up on all the tech giants' websites and devices. Dive into the innards of your settings (or options)—not just a few, all of them, because they like to hide things—and set your privacy settings to max.

Maybe quit social media. Facebook, Twitter, YouTube, and others have become increasingly censorious and contemptuous of your privacy. Make them less relevant by spending more time elsewhere, if you can't just quit for good.

Use a password manager. Stop letting your browser track your passwords.

And then, if you want to get serious:

Start learning Linux... Microsoft's problems with privacy and security are famous. Apple has its own too. Well, there are these things called "virtual machines" which make it easy (and free) to install and play with your very own Linux installation. Try it!

...then switch to Linux. If you know how to use Linux, why not make the switch to something more permanent? You can always dual-boot.


How I locked down my passwords

If you're one of those people who uses the same password for everything, especially if it's a simple password, you're a fool and you need to stop. But if you're going to maintain a zillion different passwords for a zillion different sites, how? Password management software. I've been using the free, open source KeePass, which is secure and works, but it doesn't integrate well with browsers, or let me save my password data securely in the cloud (or maybe better, on the blockchain). So I'm going to get a better password manager and set it up on all my devices. This is essential to locking down my cyber-life. One of the ways Facebook, LinkedIn, et al. insinuate themselves into our cyber-lives is by giving us an easy way to log in to other sites. But that makes it easier for them to track us everywhere. Well, if you install a decent password manager, then you don't have to depend on social login services. Just skip them and use the omnipresent "log in with email" option every time. Your password manager will make it even easier than social login systems did.

You need a password manager

Password management software securely holds your passwords and brings them out, also securely, when you're logging in to websites in your desktop and handheld browsers. Decent browsers (like Brave) make your passwords available for the same purposes, if you let them, but there are strong reasons you shouldn't rely on your browser to act as a password manager.

Instead, for many years I've been using KeePass, a free (open source) password manager that's been around for quite a while. The problem with KeePass, as with a lot of open source software, is that it's a bit clunky. I never did get it to play nicely with browsers, and your passwords are saved in a file on your computer and/or in the cloud. If you lose the file, you lose your passwords.

Password managers do, of course, automatically generate passwords and save them securely. Many (though not all) can also store your password database securely in the cloud, so you don't have to worry about losing it (you can export a copy if you like), and you can use your passwords on all your devices with equal ease. They'll even let you log in with a fingerprint on your phone.
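For the curious, here is a rough sketch in Python of what "generating a password" amounts to (an illustration only, not any particular product's algorithm): characters drawn uniformly at random from a large alphabet, using a cryptographically secure source of randomness.

```python
# A rough illustration of random password generation, using Python's standard
# "secrets" module for cryptographically secure randomness. This is not any
# particular password manager's actual algorithm.
import secrets
import string

def generate_password(length: int = 20) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # long, random, and unmemorable--which is the point
```

The result is unmemorable by design; the manager remembers it so you don't have to.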

A very nice feature is that they'll securely store payment information, so your browser, websites, and operating system don't have to hold that information. That means you don't have to trust them to manage this information properly. You only need to trust the password manager...

But can you trust password managers?

"Ah," you say, "but can you trust password managers?" That's not a bad or naive question at all; it's an excellent question. Consumer Reports, of all things, weighs in:

By default, LastPass, 1Password, and Dashlane store your password vault on their servers, allowing you to easily sync your data across devices. As a second benefit, if your computer crashes you won’t lose your vault.

But some people just really hate the idea of storing all their passwords on one site in the cloud—no matter what the company promises about its security measures, there's probably a bulls-eye painted on its encrypted back. If that sounds like you, it's possible to store your passwords locally.

Dashlane lets you do this by disabling the “Sync” feature in Preferences. This will delete your vault and its contents from the company’s servers. Of course, any further changes you make to your vault on your computer won’t show up on your other devices.

So what's my take? There are layers upon layers of security protecting your password repository, not least of which is the (hopefully well-chosen) master password to your password database. While you do have to trust the professionalism and honesty of a cloud-based password manager, I think that's their business, so I'm inclined to trust them. But, but!

I ask myself: what is more likely, that they become compromised (for whatever reason—let your imagination run wild) or instead that I lose my master password or all copies of my password database or somehow allow myself to be hacked? I think both are fairly unlikely, first of all. I am certainly inclined to distrust myself, especially over the long haul. And frankly, the idea that a security business is compromised seems unlikely, since security is their business. But could a password manager server be hacked? That is, again, a really good question, and you wouldn't be the first to ask it. Password manager company OneLogin was actually hacked, and the hackers could actually "decrypt encrypted data," the company said. Holy crap!

Also, which is more disastrous? Losing my password file would not be a disaster; I can easily generate new passwords; that's just a pain, not a disaster. But a hacker getting hold of my passwords in the cloud (no matter how unlikely)? That could be pretty damn bad.

After all, especially as password manager companies grow in size (as successful companies are wont to do), they naturally can be expected to become a honeypot for hackers. Another example of a hacked password management company was LastPass, which was hacked in 2015, although without exposing their users' passwords.

If you're like me, you have libertarian concerns about having to trust external entities (and especially giant corporations) with your entire digital life. And if you also don't want to trust (future?) dangerous governments with the power to force those corporations to hand over access to your entire digital life, then we're no longer talking about mere anti-crime cybersecurity. In that case, it looks like you can't sensibly put your password files in a corporate-managed cloud; that requires trusting people a little too much for comfort. So you should manage where those files live yourself.

Then there are two further problems. First, can you be sure that it is impossible for anyone at the password management software company to crack your password database, even if you host it yourself? (Do they have a copy? Can they get access to a copy? If they have access, are there any back doors?)

Second, there's the practical issue: Without the cloud, how do you sync your passwords between all your devices? That feature is the main advantage of hosting your passwords in the cloud. So how can you do it automatically, quickly, and easily?

What self-hosted password manager is really secure?

Several password managers use the cloud, but what is stored in the cloud is only the encrypted data. All the login and decryption happens on your local device. This is called zero-knowledge security, and it might be a suitable compromise for many. I have one main issue with this: Especially if the software is proprietary, we must simply trust the company that that is, in fact, how it works. But that's a lot to ask. So I'll pass on these. I'll manage the hosting of my own passwords, thanks very much.
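To make the principle concrete, here is a minimal sketch of client-side ("zero-knowledge"-style) encryption in Python, assuming the third-party cryptography package. The key is derived from the master password on your own device, and only the encrypted blob would ever be uploaded; this illustrates the idea, not how any particular product actually implements it.

```python
# Illustration of the "zero-knowledge" idea: derive a key from the master
# password locally, encrypt the vault locally, and upload only ciphertext.
# Requires the third-party "cryptography" package; not any vendor's actual code.
import base64
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_key(master_password: str, salt: bytes) -> bytes:
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt,
                     iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(master_password.encode()))

salt = os.urandom(16)            # not secret; stored alongside the ciphertext
key = derive_key("my long master passphrase", salt)
vault = Fernet(key).encrypt(b'{"example.com": "hunter2"}')

# Only `vault` (plus the salt) would leave your device. Decrypting requires the
# master password, which never does.
print(Fernet(key).decrypt(vault))
```

The whole question, of course, is whether a proprietary client really works this way; with closed source, you are taking the vendor's word for it.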

Here are my notes on various password managers:

  1. These all feature zero-knowledge security but seem not to allow the user to turn off cloud sync (maybe they do, I just couldn't find evidence that they do): 1Password, Keeper Password Manager, LastPass, LogMeOnce, Password Boss, Zoho Vault.
  2. Sticky Password Premium: Allows home wifi sync of passwords, which is just fine. Fills out forms, works on all your devices...except Linux devices. Linux does not seem to be supported. Next!
  3. RoboForm: Doesn't have a sync feature without using their cloud service, but hey! It has a Linux version! Might work on Brave, since Brave is built on Chromium and there is a Chrome extension. This was enough for me to install it (and it worked!), but it seems to be rather clunky and there were a few different things that didn't inspire confidence.
  4. Dashlane: This has zero-knowledge security, which isn't a bad thing, but in addition, it allows you to disable sync. Whenever you turn sync off, the password data is wiped from their servers (so they say). You can turn it on again and sync your devices, then turn it off again. This is within my tolerance. Also, Dashlane has a Linux version. In other respects, Dashlane seems very good. I installed it and input a password. The UX is very inviting—even the Linux version. It's expensive, though: it's a subscription, and it's $40 for the first year (if you use an affiliate link, I guess), and $60 if you buy it direct, which I'm guessing will be the yearly price going forward. That's pretty steep for a password manager.
  5. EnPass: Here's something unusual—a password manager that goes out of its way to support all platforms, including Linux and even Chromebook (not that I'd ever own one of those). Rather than charging an expensive subscription, like Dashlane, EnPass makes its desktop app free, while the mobile version costs $10 as a one-time fee. They don't store passwords in the cloud; passwords are stored locally, but EnPass has some built-in ways to sync the passwords (including by wi-fi, like Sticky Password). The autofill apparently doesn't work too well (more expensive options like Dashlane do this better), and it lacks two-factor authentication and other "luxury" features, which would be nice.

Installation and next steps

Dear reader, I went with EnPass.

So how did I get started? Well, the to do list was fairly substantial. I...

  1. Made a new master password. I read up on strategies for making a password that is at once strong, easy to remember, and easy to type. (One well-known approach is sketched just after this list.) I ended up inventing my own strategy. (Do that! Be creative!) So my master password ended up being a bit of a compromise. While it's very strong, it's a bit of a pain to type; but it's pretty easy to remember. Whatever master password you choose, just make sure you don't forget it, or you'll lose access to your password database.
  2. Installed EnPass on Windows and Linux and tested it to see if it worked well in both. It does (so far).
  3. Used EnPass to sync the two installations using a cloud service. (I'll be replacing this with Resilio Sync soon enough, so it'll be 100% cloudless.) I confirmed that if I change a password in one, it is synced in the other.
  4. Imported all my KeePass passwords, then tested a bit more on both platforms to make sure nothing surprising was happening. So far, so good. My only misgiving about EnPass so far is that there doesn't seem to be a keyboard shortcut to automatically choose the login info. I actually have to double-click on the item I want, apparently.
  5. Deleted all passwords from all browsers, and ensured that the browsers no longer offer to save new passwords. Let the password manager handle that from now on. (No need for the redundancy; that's a bit of extra and unnecessary risk.)
  6. Installed on my cell phone, synced (without issue), and tested. (Annoyingly, the Enpass iOS app doesn't do autofill, but I gather that's in the plans.)
  7. Installed app and browser plugin on my (Mac) laptop. No issues there either.
  8. Deleted KeePass data in all locations. That's now redundant and a needless risk as well.
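As promised in step 1, here is one well-known master-password strategy, sketched in Python purely as an illustration: a "diceware"-style passphrase of several random words, which tends to be strong, memorable, and easy to type. The word list below is a tiny placeholder; a real list (such as the EFF's long wordlist) has thousands of entries.

```python
# A "diceware"-style master passphrase: several words chosen with a
# cryptographically secure random generator. The word list here is only a
# stand-in; use a real list of several thousand words (e.g. the EFF long list).
import secrets

WORDS = ["copper", "lantern", "orbit", "maple", "quartz", "tundra",
         "velvet", "wharf", "saddle", "prism"]  # placeholder word list

def passphrase(n_words: int = 6) -> str:
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

print(passphrase())  # e.g. "maple wharf orbit copper tundra prism"
```

With a real 7,776-word list, a six-word phrase gives roughly 77 bits of entropy, which is plenty for a master password; whatever strategy you use, the "don't forget it" warning above still applies.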

I'm now enjoying the new, secure, and easy access to my passwords on all my devices. I'm also happy to be free of browser password managers.

This was installment four in my series on how I'm locking down my cyber-life.


How I set up private email hosting for my family

Here's how I actually set up my own private email hosting—sanger.io! I had already finished choosing a private email hosting provider. So what was the next step?

I still had to choose a plan with my chosen provider (InMotion Hosting, which didn't pay me anything for this) and pay for it. The details are uninteresting; anybody could do this.

Now the hard work (such as it was) began. I...

(1) Read over the domain host's getting-started guide for email. InMotion's is here, and if you have a different host, they're bound to have some instructions as well. If you get confused, their excellent customer service department can hold your hand a lot.

(2) Created a sanger.io email address, since that's what they said to do first. In case you want to email me, my username is 'larry'. (Noice and simple, ey?) InMotion let me create an email address right away, and I was rather confused about how this could possibly work, since I hadn't yet pointed my DNS (hosted by NameCheap) to InMotion.

(3) Chose one of the domain host's web app options. For a webmail app (InMotion gave me a choice of three), I went with Horde, which is, not surprisingly, a little bit clunky compared to Gmail, but so far not worse than ZohoMail; we'll see. Unsurprisingly, when I tried to send an email from my old gmail account to my new @sanger.io account, the latter didn't receive it. Definitely need to do some DNS work first...

(4) Pointed my domain name to the right mail server. In technical jargon, I created an MX record on my DNS host. This was surprisingly simple. I just created an MXE Record on NameCheap, my DNS host for sanger.io, and pointed it to an IP address I found on InMotion. So basically, I just found the right place to paste in the IP address, and it was done. Now I can send and receive email via sanger.io (at least via webmail).
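If you want to verify that the MX record has actually propagated and points where you expect, here's a quick check from Python; it assumes the third-party dnspython package, and the domain is just my own as an example:

```python
# Quick sanity check that a domain's MX record exists and points where you
# expect. Assumes the third-party "dnspython" package (pip install dnspython).
import dns.resolver

# Note: older dnspython versions use dns.resolver.query() instead of resolve().
for record in dns.resolver.resolve("sanger.io", "MX"):
    print(record.preference, record.exchange)
```

Plain old command-line tools like dig or nslookup will tell you the same thing.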

(5) Created email addresses for my other family members. Very easy.

(6) Installed a desktop email client. Why? I wasn't using one before because I just used Gmail in a browser and Apple's mail app on my phone. I could keep using webmail (on InMotion) but a desktop client is apt to be nicer. I'd tell you which one I used, but I'm not confident it's particularly good.

(7) Installed a new email client for my phone. As I no longer trust or want to support Apple if I can at all help it, I wanted to stop using their email client. I paid $10 for a privacy-touting mail client which is quite good so far: Canary Mail.

(8) Changed the email address registered with the big, consequential apps and services. This is the most labor-intensive step, and the step I most dreaded. Sure, it was a pain. But it turns out it was tremendously satisfying to be able to tell them to stop using my wretched Gmail address and instead to start using my slick new permanent and personalized address. Was that fun? Heck yeah it was! Anyway, such apps and services include (click on the links for useful privacy tips):

  • The massive Internet and tech services: Google, Microsoft, Apple.
  • The big social media/community accounts: Facebook, Twitter, YouTube, Quora, Medium, LinkedIn.
  • Companies I pay money to: Amazon, Netflix, PayPal, Patreon, InMotion, GoDaddy, NameCheap, Heroku, LifeLock, The Great Courses, any other bills.
  • Important stuff: my employer, the bank, medical info systems/apps, dentist, Coinbase.
  • Family, friends, and work and business people. Send them the message three times spread over a month or two, because if they're like me, they ignore such emails or don't act on them right away, and some old aunt of mine will keep sending mail to my gmail address for years and years. (I haven't actually done this one yet, but will soon. Gmail makes exporting of all your relevant contact info surprisingly difficult.)

(9) Create a Gmail forwarder! Buh-bye, Google! No need even to visit your crappy, biased, would-be totalitarian service for email any longer.

(10) Clean up and consolidation. There are a zillion little consequences when you change your email on all these big services, and I expect I'll be dealing with the consequences (nothing major!) for a few days or weeks to come. Among the things I know I'll have to do: (a) Install and configure mail clients on my laptop and iPad, and in other ways get those other devices working as expected again. (b) Update various email clients with address book information, as needed. (c) Actually collect my contacts from Google and Apple (harder than it sounds). (d) Change entries in my password manager from @gmail.com to @sanger.io. (e) Actually, get a new password manager...but that's a whole nuther thang. (f) Get Microsoft and Google and whatever else to forget my contacts...ditto.

This was installment three in my series on how I'm locking down my cyber-life.


How I'm locking down my cyber-life

Two problems of computer technology

My 2019 New Year's resolution (along with getting into shape, of course) is to lock down my cyber-life. This is for two reasons.

First, threats to Internet security of all sorts have evolved beyond the reckoning of most of us, and if you have been paying attention, you wonder what you should really be doing in response. My phone was recently hacked and my Google ID reset. The threats can come from criminals, ideological foes and people with a vendetta or a mission (of whatever sort), foreign powers, and—of special concern for some of us—the ubiquitous, massively intrusive ministrations of the tech giants.

Second, the Silicon Valley behemoths have decided to move beyond mere moderation of objectively abusive behavior and the shutting down of (really obvious) terrorist organizations, and to start engaging in viewpoint censorship of conservatives and libertarians. As a free speech libertarian who has lived online for much of my life since 1994, I find these developments deeply concerning. The culprits include the so-called FAANG companies (Facebook, Apple, Amazon, Netflix, Google), but to that list we must add YouTube, Twitter, and Microsoft. Many of us have been saying that we must take ourselves out of the hands of these networks—but exactly how to do so is evidently difficult. Still, I'm motivated to try.

At the root of both problems is simply that the fantastic efficiency and simplicity of computer technology is secured via our participation in networks and EULAs offered by massively rich and powerful corporations. Naturally, because what they offer is so valuable and because it is offered at reasonable prices (often, free), they can demand a great deal of information and control in exchange. This dynamic has led to us (most of us) shipping them boatloads of our data. That's a honeypot for criminals, authoritarians, and marketers.

There is nothing we can do about it—except to stop participating. That's why I want to kick the tech giants out of my life.

The threat to our privacy undermines some basic principles of the decentralized Internet that blossomed in the 90s and boomed in the 00s. The Establishment has taken over what was once a centerless, mostly privacy-respecting phenomenon of civil society, transforming it into something centralized, invasive, risky, and controlling. What was once the technology of personal autonomy has enabled—as never before—cybercrime, collectivization, mob rule, and censorship.

A plan

I don't propose to try to lead a political fight. I just want to know what I can do personally to mitigate my own risks.

I'm not sure of the complete list of things that I ought to do. I will examine some of these in more depth (in other blog posts, perhaps) before I take action, but others I have already implemented.

  1. Stop using Chrome. (Done.) Google collects massive amounts of information from us via their browser. The good news is that you don't have to use it, if you're one of the 62% of people who do. I've been using Firefox, but I haven't been happy about that. The Mozilla organization, which manages the browser, is evidently dominated by the Silicon Valley left; they forced out Brendan Eich, a co-founder of Mozilla and the creator of the JavaScript programming language, for his political views. Frankly, I don't trust them. I've switched to Eich's newer browser, Brave. I've had a much better experience using it lately than I had when I first tried it a year or two ago, when it was still on the bleeding edge. Brave automatically blocks ads, trackers, and third-party cookies, and encrypts your connections—and, unlike Google, they don't keep a profile on you. It's quite good and a pleasure to use. There might be a few rare issues (maybe connected with JavaScript), but when I suspect there's a problem with the browser, I try whatever I'm trying to do in Firefox, which is now my fallback. There's absolutely no need to use Chrome for anything but testing, and that's only if you're in Web development. By the way, the Brave iOS app is really nice, too.
  2. Stop using Google Search (when possible). (Done.) I understand that sometimes, getting the right answer requires that you use Google, because it does, generally, give the best search results. But I get surprisingly good results from DuckDuckGo, which I've been using for quite a while now. Like Brave and unlike Google, DuckDuckGo doesn't track you and respects your privacy. You're not the product. It is easy to go to your browser's Settings page and switch.
  3. Stop using gmail. (Done.) This was harder, and figuring out and executing the logistics of it was a chore—it involved changing all the accounts, especially the important accounts, that use my gmail address—but I'm totally committed to taking this step. I had wanted to do this for a while, but the sheer number of hours it was going to take (and did take) to make the necessary changes was daunting. Besides, I was tired of switching email addresses. I want to have one email address for the rest of my life. My new email address resides at sanger.io, a domain that my family will be able to use. Here's how I chose an email hosting service to replace Gmail. And here's how I set up private email hosting for my family.
  4. Start using (better) password management software. And never use another social login again. (Done.) If you're one of those people who uses the same password for everything, especially if it's a simple password, you're a fool and you need to stop. But if you're going to maintain a zillion different passwords for a zillion different sites, how? Password management software. I've been using the free, open source KeePass, which is secure and works, but it doesn't integrate well with browsers, or let me save my password data securely in the cloud (or maybe better, on the blockchain). So I'm going to get a better password manager and set it up on all my devices. This is essential to locking down my cyber-life. One of the ways Facebook, LinkedIn, et al. insinuate themselves into our cyber-lives is by giving us an easy way to log in to other sites. But that makes it easier for them to track us everywhere. Well, if you install a decent password manager, then you don't have to depend on social login services. Just skip them and use the omnipresent "log in with email" option every time. Your password manager will make it even easier than social login systems did. UPDATE: I switched to EnPass and told browsers to stop tracking my passwords. Read more.
  5. Stop using iCloud to sync your iPhone data with your desktop and laptop data; replace it with wi-fi sync. (Done.) If you must use a smartphone, and if (like mine) it's an iPhone, then at least stop putting all your precious data on Apple servers, i.e., on iCloud. It's very easy to do. After you do that, you can go tell iTunes to sync your contacts, calendars, and other information via wi-fi; here's how.
  6. Take control of my contact and friend lists. I've been giving Google, Apple, and Microsoft too much authority to manage my contacts for me, and I've shared my Facebook and other friends lists too much. I'm not sure I want these companies knowing my contacts and friends, period. I don't know what they're doing with the information, or who they're sharing it with, really. Besides, if my friends play fast and loose with privacy settings, my privacy can suffer—and vice-versa. So I'm going to start maintaining my own contacts, thanks very much, and delete the lists I've given to Google and Microsoft. I'm glad I've already stopped putting this information on iCloud.
  7. Stop using gcal. I just don't trust Google with this information, and frankly, gcal isn't all that. I mean, it's OK. The only inconvenience is that I'm going to have to tell my workmates I don't use it, but that they should put my name in without my email address, and I'll add the appointment to my own calendar. This will involve installing a calendar app on my phone (I don't want to keep using Apple's) and figuring out how to sync my calendar data without the cloud, so I still have up-to-date copies on all my devices.
  8. Switch to Linux. (In progress.) I've been using a Linux (Ubuntu) virtual machine for programming (and a fair bit of other stuff) for a while. Linux is stable and usable for most purposes, and while it still has issues of one sort or another, on balance those issues aren't nearly as severe as those associated with using products by Microsoft and Apple. When necessary, I can use my Mac laptop and will continue to maintain a Windows partition, e.g., for when I need to use Camtasia. But I'll soon (finally) be putting Ubuntu on a partition on my workstation and switching to that as my main work environment. Linux is generally more secure, gives the user more control, and most importantly does not have a giant multinational corporation behind it that wants to take and share your information.
  9. Nail down a backup plan. (In progress.) If you're going to avoid using so much centralized and cloud software, you've got to think not just about security but about backing up your data. I've got a monster of a backup drive, as well as backup software and knowledge of how to use it, but what I don't have are excellent habits to use this stuff regularly. I don't even have regularly-scheduled backups, which I really should. And I really need to get my old files organized, especially if I want to keep copies of my old emails instead of relying on frickin' Google to do it—and doubly so if I want to download my old gmail stuff, or even (gasp) stop using a cloud storage service at all.
  10. Stop using cloud storage. "Now," you're going to tell me, "you're getting unreasonable. This is out of hand. Not back up to iCloud, Google Drive, DropBox, Box, or OneDrive? Not have the convenience of having the same files on all my machines equally available? Are you crazy?" I'm not crazy. You might not realize what is now possible without the cloud. If you're serious about this privacy stuff and you really don't trust big tech anymore—I sure don't—then yeah. This is necessary too. One option is Resilio Sync, moving files between your devices via deeply encrypted networks (via a modified version of the BitTorrent protocol), with the files never landing anywhere but on your devices. Another option is to use a NAS (network attached storage device), which is basically your very own cloud server that only you can access, but you can access it from anywhere via an encrypted Internet connection.
  11. Nail down a social media use policy. Maybe quit some for good, really this time. (In progress.) I'm extremely ambivalent about my ongoing use of social media. I took a break for over a month (which was nice), but I decided that it is too important for my career to be plugged in to the most common networks. If I'm going to use them, I feel like I need to create a set of rules for myself to follow—so I don't get sucked back in. I also want to reconsider how I might use alternative social networks, like Gab (which has problems), and social media tools that make it easy both to post and to keep an easily-accessible archive of my posts. One of my biggest problems with all social media networks is that they make it extremely difficult to download and control your own friggin' data—how dare they. Well, there are tools to take care of that...
  12. Study and make use of website/service/device privacy options. (In progress.) Google, Apple, Facebook, Twitter, YouTube, etc., all have privacy policies and options available to the user. It is time that I studied and regularly reviewed them (as I have done only with Facebook and a bit with Google), and put shields up to maximum.
  13. Also study the privacy of other categories of data. Banking data, health data, travel data (via Google, Apple, Uber, Yelp, etc.), shopping data (Amazon, etc.)—it all has unique vulnerabilities that it is important to be aware of. I'm not sure I've done all I can to lock it down. So I want to do that.
  14. Subscribe to a VPN? Websites can still get quite a bit of info about you from your IP address and by listening in on any data that happens to be unencrypted via your web connection. VPNs solve those problems by making your connection to the Internet anonymous. The big problem with VPNs, and the reason I probably won't do this, is that they slow down your Internet connection. They also add new complexity to your life (e.g., if you get the wrong VPN, you might not be able to connect to some services, like Netflix, through the VPN). But it's a great step to take if you're serious about privacy, if you can get around or handle the slowness problem. A nice fallback is the built-in private windows in Brave that are run on the Tor network, which operates on a similar principle to VPNs.
  15. Figure out how to change my passwords regularly, maybe. I might want to make a list of all my important passwords and change them quarterly everywhere, as a sort of cyber-hygiene. Why don't we make a practice of this? Because it's a pain in the ass and most people don't know how to use password management software, that's why. Besides, security experts actually discourage regular password changing, but that's mainly because most people are bad at making and tracking secure passwords. Well, if you use password managers, that part isn't so hard. But it's also because we really don't have a realistic plan to do it. Well, I'm going to think hard about making one and, maybe, try to follow it, making use of whatever automated tools are available (such as this).
  16. Get identity theft protection. (Done.) After my phone was hacked, I finally did something I've been meaning to do for a long time—subscribe to an identity theft protection service. The one I use is LifeLock, and so far it seems to be quite good. If you don't know or care about identity theft, that's probably because you've never seen weird charges pop up on your card, or had your card frozen by your bank, or whatever. LifeLock doesn't prevent these issues by itself, but it does make it a lot easier to deal with them if they happen.

What have I left out?

Are you going to join me in this push toward greater privacy and autonomy? Let me know—or, of course, you can keep it to yourself.


How the government can monitor U.S. citizens

Just what tools do American governments—federal, state, and local—have to monitor U.S. citizens? There are other such lists online, but I couldn't find one that struck me as being quite complete. This list omits strictly criminal tracking, because while criminals are citizens, actual crime obviously needs to be tracked.

  1. First, there's what you yourself reveal: the government can use whatever information you yourself put into the public domain. For some of us (like me), that's a heck of a lot of information.
  2. Government also tries to force tech companies to reveal our personal information, ostensibly to catch terrorists and criminals. The FBI and NSA have both been in the news about this.
  3. The NSA famously tracks our email and phone calls. They might be looking for terrorism and crime, but we're caught in the net too.
  4. The IRS, obviously, tracks your income, business information, and much else. That certainly qualifies as government monitoring.
  5. State, local, and school district tax systems do the same.
  6. The FBI's NSAC (National Security Branch Analysis Center) has hundreds of millions of records about U.S. citizens, many perfectly law-abiding.
  7. The State Dept., Homeland Security, and others contribute to systems that include biometric information on some 126 million citizens—that means fingerprints, photos, and other biographical information.
  8. For a small number of citizens—740,000 to 10 million, depending on the system—there is a lot more information available, not because the people are actual terrorists or criminals, but only because they are suspected of such activity. If someone in government with the authority thinks you fall into broad categories that make you possibly dangerous, they can start collecting a heck of a lot more information about you.
  9. The Census Bureau tracks our basic demographic information every ten years.
  10. U.S. school students in at least 41 states are tracked by Statewide Longitudinal Data Systems, including demographics, enrollment status, test scores, preparedness for college, etc.
  11. Many and various public cameras, including license plate readers, are used by many local authorities, mainly for crime prevention.
  12. Monitoring by police will be easier in the near future: As an expert on the subject, law professor Bill Quigley, puts it, "Soon, police everywhere will be equipped with handheld devices to collect fingerprint, face, iris and even DNA information on the spot and have it instantly sent to national databases for comparison and storage."
  13. The internet of things will be another avenue in which government will increasingly be able to view our habits.

So...explain to me again how we have a right to privacy under the Fourth Amendment.

By the way, it is a conceptual mistake to suppose that there is any one person or group of people who have access to (and care about) the information in all of these databases. How the databases are used is carefully circumscribed by law, obviously, and just because the information is in a database, it doesn't follow that there has been a privacy violation. But it does raise concerns in the aggregate: the extent to which we are monitored might be a problem even if most programs are individually constitutionally justifiable.

In short, is there any point at which we say "enough is enough"? Or do we grudgingly give government technical access into every area of our lives and hope that the law controls how the information is being used?

In the comments, please let me know what I've missed and I can do updates.

Sources: Common Dreams, ACLU, Ed.gov, Forbes, Wired, Guardian, and my own experience working at the Census Bureau long ago.


Why Edward Snowden deserves a pardon, explained in 10 easy steps

Let me put this briefly and simply. The government should not be snooping on us. But they started anyway. That was wrong and unconstitutional. When they did, they made their snooping program secret. That was wrong twice over, a cover-up of a wrong. Then they actually lied about the existence of the program to Congress, and the bureaucrat who perjured himself doing so is getting off scot free. Trebly wrong. And now when a low-level contractor, at tremendous risk to himself, courageously blows the whistle on this operation, he is threatened with extradition and very severe prosecution, rather than being pardoned. Quadruply wrong!

1. The Fourth Amendment is clear: the government may not indiscriminately snoop our private things. In the language of the Amendment, American citizens have the right to be "secure" in their "effects" against "unreasonable searches" except "upon probable cause" and a specification of the things to be searched.

2. But indiscriminate snooping is just what PRISM does. A surveillance program that regularly searches the private telephone call metadata, as well as the private Internet data, of virtually all American citizens seems on its face to violate the Fourth Amendment.

3. So PRISM is illegal and wrong. It sure looks unconstitutional.

4. And we had a right to know about it. Why wasn't the decision to start PRISM put before an open, public Congress? It was a decision with enormous potential consequences; it seems obvious that the American people had a right to decide whether they would be surveilled to this extent.

5. So it is doubly wrong that the PRISM program was hidden from us. We should have been able to voice our concerns to our representatives and the President when this program was started. But because it was implemented in secret, we couldn't. When it comes to how the entire population of the U.S. is treated--not just terrorism suspects--we have a constitutional republican democracy, not a secret government.

6. James Clapper's perjury is outrageous. When National Director of Intelligence James Clapper was asked by Sen. Ron Wyden on March 12, 2013, "Does the NSA collect any type of data at all on millions or hundreds of millions of Americans?" and he answered, "No, sir … not wittingly," he was not merely committing perjury. He was lying about a program that Americans had a right to know about, that it was important that they know about, because it affects all Americans' constitutional rights, and they have a right to assess and object to just such a program.

7. Edward Snowden is a hero for revealing the facts about PRISM. If it hadn't been for the courageous whistleblowing of Mr. Snowden, we would still be ignorant of this massive violation of our constitutional rights. Considering the huge risks to himself, his whistleblowing was simply heroic.

8. It is shockingly and trebly wrong that Edward Snowden is being persecuted for whistleblowing. It is true that, in leaking classified documents, Edward Snowden broke the law. But he did so in order to reveal a much more dangerous sort of official lawbreaking. He arguably had a moral obligation--and, fortunately, the courage--to do so, since he observed that no one else in the government was making the program public. It is outrageous that a person who reveals a wrong perpetrated by a supposedly open and democratic government is persecuted for it by that same government.

9. Instead, those responsible for PRISM--and for making it secret--should be made to answer for their actions. Even if they are not punished, they should be made to answer publicly for their clear abuse of their public trust. They should not have made this unconstitutional program, and just as importantly, it should not have been kept secret from the American people.

10. It will be quadruply wrong if Edward Snowden is not pardoned. "Often the best source of information about waste, fraud, and abuse in government is an existing government employee committed to public integrity and willing to speak out. Such acts of courage and patriotism ... should be encouraged rather than stifled." Who said this, and where? A libertarian defending Edward Snowden in Reason, perhaps? Not exactly. It was on the Obama transition team's website in 2009, back when Obama was being lauded as a "friend" to whistleblowers.

President Obama should pardon Snowden and, probably, Clapper too--and, on the assumption that they had laudable intentions, everyone involved in the creation of the program.

And then President Obama should actually encourage a public debate, and Congressional vote, on whether PRISM should continue to exist.

Wouldn't that be something.


A Defense of Modest Real Name Requirements

Lunchtime speech at the Harvard Journal of Law & Technology 13th Annual Symposium: Altered Identities, Harvard University, Cambridge, Massachusetts, March 13, 2008.

I. Introduction

Let me say up front, for the benefit of privacy advocates, that I agree entirely that it is possible to have an interesting discussion and productive collaborative effort among anonymous contributors, and I support the right to anonymity online, as a general rule. But, as I'm going to argue, such a right need not entail a right to be anonymous in every community online. After all, surely people also have the right to participate in communities in which real-world identities are required of all participants—that is, they have a right to join voluntary organizations in which everyone knows who everyone else really is. There are actually quite a few such communities online, although they tend to be academic communities.

Before I introduce my thesis, I want to distinguish two claims regarding anonymity: first, there is the claim that personal information should be available to the administrators of a website, but not necessarily publicly; and second, there's the claim that real names should appear publicly on one's contributions. I will be arguing for the latter claim, that real names should appear publicly.

But actually, I would like to put my thesis not in terms of how real names should appear, but instead in terms of what online communities are justified in requiring. Specifically in online knowledge communities—that is, Internet groups that are working to create publicly-accessible compendia of knowledge—organizers are justified in requiring that contributors use their own names, not pseudonyms. I maintain that if you want to log in and contribute to the world’s knowledge as part of an open, community project, it’s very reasonable to require that you use your real name. I don't want, right now, to make the more dramatic claim that we should require real names in online knowledge communities—I am saying merely that it is justified or warranted to do so.

Many Internet types would not give even this modest thesis a serious hearing. Most people who spend any time in online communities regard anonymity, or pseudonymity, as a right with very few exceptions. To these people, my love of real names makes me anathema. It is extremely unhip of me to suggest that people be required to use their real names in any online community. But since I have never been or aspired to be hip, that’s no great loss to me.

What I want to do in this talk is first to introduce the notion of an Internet knowledge community, and discuss how different types handle anonymity as a matter of policy. Then I will address some of the main arguments in favor of online anonymity. Finally, I will offer two arguments that it is justified to require real names for membership in online knowledge communities.

II. Some current practices in online knowledge communities

First, let me give you a definition for a phrase I'll be using throughout this talk. By online knowledge community I mean any group of people that gets organized via the Internet to create together what at least purports to be reliable information, or knowledge. And I distinguish a community that purports to create reliable information from a community that is merely engaging in conversation or mutual entertainment. So this excludes social networking sites like MySpace and FaceBook, as well as most blogs, forums, and mailing lists. Digg.com might be a borderline case; calling that link rating website a “knowledge community” is straining the definition, because I’m not sure that many people really purport to be passing out knowledge when they vote for a Web link. They’re merely stating their opinion about what they find interesting; that’s something different from offering up knowledge, it seems to me.

I want to give you a lot of examples of online knowledge communities, because I want to make a point. The first example that comes to mind, I suppose, would be Wikipedia, but also many other online encyclopedia projects, such as the Citizendium, Scholarpedia, Conservapedia, among many others (and these are only in English, of course). Then there are many single-subject encyclopedia projects, such as, in philosophy, the Stanford Encyclopedia of Philosophy and the Internet Encyclopedia of Philosophy; in biology, there is now the Encyclopedia of Life; in mathematics, there is MathWorld; in the Earth Sciences, there is the Encyclopedia of Earth; and these are only a few examples.

But that’s just the encyclopedia projects. There are many other kinds of online knowledge communities. Another sort would be the Peer to Patent Project, started by NYU law professor Beth Noveck. Perhaps you could consider as an online knowledge community the various pre-print, or e-print, services, most notably arXiv, which has hundreds of thousands of papers in various scientific disciplines. This might be straining the definition, however. If you consider a pre-print service an online knowledge community, then perhaps you should consider any electronic journal such a community; indeed, perhaps we should, but I won’t argue the point. Anyway, I could go on multiplying examples, but I think it would get tedious, so I’ll stop there.

The examples I've given so far have been mostly academic and professional communities. And here I finally come to my point: out of all the projects named, the only ones in which real names are not required, or even strongly encouraged, are Wikipedia and Conservapedia. This, of course, proves only that when academics and professionals get online, they tend to use their real names, which shouldn’t be surprising to anyone.

But there are actually quite a few other online knowledge communities that don’t require the use of real names. I have contributed a fair bit to one that is a very useful database of Irish traditional music—it’s got information about tunes and recordings--it's called TheSession.org. There are many other hobbyist communities that don’t require real names; just think of all the communities about games and fan fiction. Of course, then there are all the communities to support open source software projects. I doubt a single one of those requires the use of real names.

I haven't had time to do (or even find) a formal study of this, but I suspect that, as a general rule, academic projects either require or strongly encourage real names, while most other online knowledge communities do not. This should be no great surprise. Academics are used to publishing under their real names, but this is mostly for professional reasons; with the advent of the Internet, many other people are contributing to the world's knowledge, in various Internet projects, but they have no professional motivation to use their own real names. For some people--for example, a lot of Wikipedians--privacy concerns far outweigh any personal benefit they might get from putting their names on their contributions.

So, how should we think about this? Is it justifiable to demand the option of anonymity in every online community, on grounds of privacy or any other grounds? I don't think so.

III. Some arguments for anonymity

Next, let's consider some arguments for anonymity as a policy, and briefly outline some replies to them. By no means, of course, do I claim to have the last word here. I know I am going very quickly over some very complex issues.

A. The argument from the right to privacy. The most important and I think most persuasive argument that anonymous or pseudonymous contribution should be permitted in online communities is that this protects our right to privacy. The use of identities different from one’s real-world identity helps protect us against the harvesting of data by governments and corporations. Especially in open Internet projects, a sufficiently sophisticated search can produce vast amounts of data about what topics people are interested in, and much other information potentially of interest to one's employers, corporate competitors, criminals, government investigators, and marketers. This is a major and I think growing concern about Google, as well as many online communities like MySpace and FaceBook. Like many people, I share those concerns, even though personally my life is an open book online--maybe too open. Still, I think privacy is an important right.

But I want to draw a crucial distinction here. There is a difference between, on the one hand, using a search engine, or sharing messages, pictures, music, and video with one's friends and family, and on the other hand, adding to a database that is specifically intended to be consulted by the world as a knowledge reference. The difference is obvious once you think about it: there is simply no need to make your name or other information publicly available in order to do any of the former activities. When you are contributing to YouTube, for example, you can achieve your aims, and others can enjoy your productions, regardless of the connection or lack thereof between your online persona and your real-world identity. So, in those contexts, the connection between your persona and your identity should be strictly up to you. For example, whether you let a certain other person, or a marketer, see your FaceBook profile also should be strictly up to you. These online services have become extensions of our real lives, the details of which have been and generally should remain private, if we want them to be.

We have a clear interest in controlling information about our private lives; we have that interest, of course, because it can be so easily abused, but also because we want to maintain our own reputations without having the harsh glare of public knowledge shone on everything we do. Lack of privacy changes how we behave; indeed, we might behave more authentically, and have more to offer our friends and family, if we could be sure that our behavior is not on display to the entire world.

I've tried to explain why I support online privacy rights in most contexts. But I say that there is a large difference between social networking communities like MySpace and FaceBook, on the one hand, and online knowledge communities like Wikipedia and the Citizendium, on the other. When you contribute to the latter communities, the public does have a strong interest in knowing your name and identity. This is something I will come back to in the next part of this talk, when I give some positive arguments for real name requirements.

B. The argument from the freedom of speech. But back to the arguments for anonymity. A second argument has it that not having to reveal who you are strengthens the freedom of speech. If you can speak out against the government, or your employer, or other powerful or potentially threatening entities, without fear of repercussions, that allows you to reveal the full truth in all its ugliness. This is, of course, the classic libertarian argument for anonymous speech.

The most effective reply to this is to observe that, in general, there is no reason that online collaborative communities should serve as a platform for people who want to publish without personal repercussions. There are and will be many other platforms available for that. Indeed, specific online services, such as WikiLeaks, have been set up for anonymous free speech. Long may they flourish. Moreover, part of the beauty of the classical right to freedom of speech is that it provides maximum transparency. Anyone can say anything—but then, anyone else can put the first person’s remarks in context by (correctly) characterizing that person. Maximum transparency is the best way to secure the benefits of free speech.

I suspect it is a little disingenuous to suggest that anonymous speech is generally conducive to the truth in online knowledge communities. The WikiScanner, and the various mini-scandals it unearthed, actually helps to make this point. It revealed something that was perfectly obvious to anyone familiar with the Wikipedia system: that persons with a vested interest in a topic can and do make anonymous edits to information about that topic on Wikipedia. They are not telling truth to power under the cover of anonymity. Rather, they are using the cover of anonymity to obscure the truth. They would behave differently, and would be held to much more rigorous standards, if their identities were known. I want to suggest, as I'll elaborate later, that full transparency--including knowledge of contributor identities--is actually more truth-conducive than a policy permitting anonymity.

IV. Two reasons for real name requirements

Now I am going to shift gears, and advance two positive arguments for requiring real names in online knowledge communities. One argument is political: it is that communities are better governed if their members are identified by name. The other argument is epistemological: it is that the content created by an "identified" community will be more reliable than content created by an "anonymous" community.

A. The argument from enforcement. The first argument is one that I think you legal theorists might be able to sink your teeth into. Let me present it in a very abstract way first, and then give an example. Consider first that if you cannot identify a person who breaks a rule, it is impossible to punish that person, or enforce the rule in that case. Forgive me for getting metaphysical on you, but the sort of entity that is punished is a person. If you can't identify a specific person to punish, you obviously can't carry out the punishment. This is the case not just if you can't capture the perpetrator, but also if you have captured him but you can't prove that he really is the perpetrator. That's all obvious. But it's also the case that you can't carry out the punishment if the perpetrator is clearly identifiable in one disguise, but then changes to another disguise.

So far so good, I hope. Next, consider a principle that I understand is sometimes advanced in jurisprudence: that there is no law, in fact, unless it is effectively enforced. A law or rule on the books that is constantly broken and never enforced is not really, in any full-blooded, important sense, a law. For example, the 55-mile-per-hour speed limit might not be a full-blooded rule, since you can drive 56 miles per hour in a 55-mile-per-hour zone and never get a ticket. Obviously I am not denying that the rule is on the books; obviously it is. I am merely saying that the words on the books lack the force of law.

Now suppose, if you will, that in your community, your worst offenders can only rarely be effectively identified. You have to go to superhuman lengths to be able to identify them. In that case, you've got no way to enforce your rules: your hands are tied by your failure to identify your perpetrators effectively. But then, if you cannot enforce your rules, your rules lack the force of law. In a real sense, your community lacks rules.

I want to suggest that the situation I've just described abstractly is pretty close to the situation that Wikipedia and some other online communities are in. On Wikipedia, you don't have to sign in to make any edits. Or, if you want to sign in, you can make up whatever sort of nonsense name you like; you don't have to supply a working e-mail address, and you can make as many Wikipedia usernames as your twisted heart desires. Of course, no one ever asks what your real name is. In fact, Wikipedia has a rule according to which you can be punished for revealing the real identity behind a pseudonym.

This all means that there is no effective way to identify many rulebreakers. Now, there is, of course, a way to identify what IP address a rulebreaker uses, but as anyone who knows about IP addresses knows, you can't match an IP address uniquely to a person. Sometimes, many people are using the same address; sometimes, one person is constantly bouncing around a range of addresses, and sharing that range with other people. So there is often collateral damage when you block the IP address, or a range of addresses, of a perpetrator. Besides, anyone with the slightest bit of Internet sophistication can quickly find out how to get around this problem, by using an anonymizer or proxy.
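
To make the collateral-damage point concrete, here is a minimal sketch in Python (the addresses, the blocked range, and the whole scenario are invented purely for illustration) of why blocking by IP address or range is such a blunt instrument: a single range block catches every innocent user who happens to share that range, while a determined rule-breaker can simply return from a different network.

    # Toy illustration of IP-range blocking; all addresses are made up.
    import ipaddress

    # A moderator blocks the whole range a vandal has been bouncing around in.
    blocked_range = ipaddress.ip_network("203.0.113.0/24")  # 256 addresses

    def is_blocked(ip: str) -> bool:
        """Return True if the address falls inside the blocked range."""
        return ipaddress.ip_address(ip) in blocked_range

    print(is_blocked("203.0.113.7"))    # True: the vandal's last known address
    print(is_blocked("203.0.113.200"))  # True: an innocent user sharing the
                                        # same ISP or proxy -- collateral damage
    print(is_blocked("198.51.100.4"))   # False: the vandal, back again via a
                                        # proxy on a different network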

That there is no effective way to identify some rulebreakers is a significant practical problem on Wikipedia, in fact. Wikipedians complain often and bitterly about anonymous, long-term, motivated trouble-makers who use what are called "sockpuppets"--that is, several accounts controlled by the same person. Indeed, this is Wikipedia's most serious problem, from the point of view of true-believer Wikipedians.

In this way, Wikipedia lacks enforceable rules because it permits anonymity. I think it's a serious problem that it lacks enforceable rules. Here's one way to explain why. Suppose that we say that polities are defined by their rules. If that is the case, then Wikipedia is not a true polity. In fact, no online community can be a polity if it permits anonymous participation. But why care about being a polity? For one thing, Wikipedia and other online communities, which typically permit anonymity, are sometimes characterized as a sort of democratic revolution. On my view, this is an abuse of the term "democratic." How can something be democratic if it isn't even a polity?

There is another, shorter argument that anonymous communities cannot be democratic. First, observe that if it is not necessary to confirm a person’s identity, the person may vote multiple times in a system in which voting takes place. Moreover, if the identities of persons engaged in community deliberation need not be known, one person may create the appearance of a groundswell of support for a view simply by posting a lot of comments using different identities. But, for voting and deliberation to be fair and democratic, each person’s vote, and voice, must count for just one. Therefore, a system that does not take cognizance of identities is inherently unfair and undemocratic. I think anonymous communities cannot be fair and democratic.
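
To illustrate the multiple-voting worry with a toy example (the accounts, people, and votes below are entirely invented), compare a tally that counts accounts with a tally that counts verified persons when one person controls several accounts:

    # Toy illustration: counting votes by account vs. by verified person.
    from collections import Counter

    # Each ballot is (account, vote); two accounts are sockpuppets of "Alice".
    ballots = [
        ("alice",       "yes"),
        ("bob",         "no"),
        ("carol",       "no"),
        ("night_owl",   "yes"),  # sockpuppet of Alice
        ("truthseeker", "yes"),  # sockpuppet of Alice
    ]

    # The mapping an anonymous community lacks: account -> real person.
    account_owner = {
        "alice": "Alice", "night_owl": "Alice", "truthseeker": "Alice",
        "bob": "Bob", "carol": "Carol",
    }

    by_account = Counter(vote for _, vote in ballots)

    # With identities known, each person's ballot counts only once.
    latest_vote_per_person = {}
    for account, vote in ballots:
        latest_vote_per_person[account_owner[account]] = vote
    by_person = Counter(latest_vote_per_person.values())

    print(by_account)  # Counter({'yes': 3, 'no': 2}) -- apparent groundswell
    print(by_person)   # Counter({'no': 2, 'yes': 1}) -- one person, one vote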

But why should we care about our online communities being fair, democratic polities? Perhaps their governance is relatively unimportant. When it comes to whether a link is placed on the front page of Digg.com, or what videos are highly rated on YouTube, does it really matter if it's not all quite on the up-and-up?

Maybe not. I am not going to argue about that now. But matters are very different, I want to maintain, with online knowledge communities, which is the subject of this paper. Knowledge communities, I think, must be operated as fair, democratic, and mature polities, if they are open to all sorts of contributors and they purport to contain reliable information that can be used as reference material for the world. It makes a difference, I claim, if an online community purports to collect knowledge, and not just talk and share media among friends and family.

Why does it matter if a community collects knowledge? First, it's because knowledge is important; we use information to make important decisions, so it is important that our information be reliable. If you are not convinced, consider that many people now believe that false information caused the United States to go to war in Iraq. Consider how many innocent people are in prison because of bad information. These days, two top issues for scientists are also political issues: global warming and teaching evolution in the schools. Scientists are very concerned that persons in politically-powerful positions do not have sufficient regard for well-established knowledge. Whatever you think of these specific cases, all of which are politically charged, it seems clear enough that there is no shortage of examples that demonstrate that we do, as a society, care very much that our information be reliable--that we do not merely have random unjustified beliefs, but that we know.

The trouble, of course, is that as a society--especially as a global Internet society--we do not all agree on what we know. Therefore, when we come together online from across the globe to create collections of what we call knowledge, we need fair, sensible ways to settle our disputes. That means we must have rules; so we must have a mature polity that can successfully enforce rules. And, to come back to the point, that means we must identify the members of these polities; we are well justified in disallowing anonymous membership.

B. The epistemological argument. Finally, I want to introduce briefly an epistemological argument for real name requirements, which is distinguishable from the argument I just gave, even though that argument had epistemological elements too. Now I want to argue that using our real identities not only makes a polity possible but also improves the reliability of the information that the community outputs.

Perhaps this is not obvious. As I said earlier, some people maintain that knowledge is improved when people are free to "speak truth to power" from a position of anonymity. But, as I said, I suspect that in online communities like Wikipedia, a position of anonymity is used to obscure the truth more than to reveal it. Now, in all honesty, I have to admit that this might be rather too glib. After all, most anonymous contributors to Wikipedia aren't trying to reveal controversial truths, or cover them up; they are simply adding information, which is more or less correct. Their anonymity isn't shielding wrongdoing; it is merely protecting their privacy. So why not say that the vast quantity of information found in Wikipedia--which is very useful to a lot of people--is directly the result of Wikipedia's policy of anonymity? In that case, anonymity actually increases our knowledge--at least the sheer quantity of our knowledge.

Can I refute that argument? I'm not sure I can, nor would I want to if it is correct. The point being made is empirical, and I don't know what the facts are. If anonymity does in fact have that effect, hooray for anonymity. I merely want to make a few relevant points.

I think that in the next five to ten years, we will see whether huge numbers of people are also willing to come together to work under their own real names. I don't pretend to be unbiased on this point, but I think they will be. I don't think that anonymity is badly wanted or needed by the majority of the potential contributors to online knowledge communities in general. Having observed these communities for about fifteen years, my impression is that people get involved because they love the sense of excitement they get from being part of a growing, productive community. My guess is that anonymity is pretty much irrelevant to that excitement.

Regardless of the role of anonymity in the growth of online resources, a real names policy has a whole list of specific epistemological benefits that a policy of anonymity cannot secure. Consider a few such benefits.

First, an author who puts her real name on a piece of work will be more careful than one who does not: her real-world reputation is on the line. And being more careful will, presumably, lead to more reliable information. This is quickly stated, and very plausible, but it is a very important benefit.

Second, a community all of whose members use their real names will, as a whole, have a better reputation than one that is dominated by pseudonymous people. We naturally trust those who are willing to tell us who they are. As a result, the community naturally has a reputation to live up to. There are no similar expectations of good quality from an anonymous community, and hence no high expectations to live up to.

Third, it is much harder for partisans, PR people, and others to use the system to cover up unpleasant facts, or to present a one-sided view of a complex situation. When real names are used, the community can require the subjects of biographies and the principals of organizations to act as informants. The Citizendium does this. Wikipedia can't, because this would require that people identify themselves.

V. Conclusion

I'm going to wrap up now. I've covered a lot of ground and I went over some things rather fast, so here is a summary.

I began by defining "online knowledge community," and showing with a number of examples that online academic communities tend to use (or strongly emphasize the use of) real names. Other sorts of online communities generally permit or encourage anonymity, because there is no career benefit to being identified, while there is a definite interest in privacy. I considered two main arguments (though I know there are others) for permitting anonymity as a matter of policy. One argument starts from the premise that we have an interest in keeping our personal lives private; I admit that premise, but I say that, when it comes to knowledge communities in particular, society has an overriding interest in knowing your identity. Another argument is a version of the classical libertarian argument for anonymous speech. I grant that society needs venues in which anonymous speech can take place; I simply deny that all online knowledge communities need play that role. Besides, anonymity is probably used more as a way to burnish public images than it is to "speak truth to power."

In the second half of the paper, I considered two main arguments (though again, there are others) for requiring real names as a matter of policy in online knowledge communities. In the first, I argued that rules cannot be effectively enforced when rule-breakers cannot be identified. This is a problem, because we would like online knowledge communities to be fair and democratic polities; but when community members cannot be uniquely identified, this violates the principle of one person, one voice, one vote. Then I argued that the requirement of real names actually increases the reliability of a community's output. Since we want the output of knowledge communities, in particular, to be maximally reliable, we are well justified in requiring real names in such communities.


A compromise position that I favor would involve requiring real users’ names to be visible to other contributors; allowing them to mask their real names to non-contributors; and legally forbidding the use of our database to mine personal information. This compromise does not settle the theoretical issues discussed in the arguments above, of course.