Stop giving your information away carelessly!

27 tips for improving your cyber-hygiene

Who is most responsible for your online privacy being violated?

You are.

Privacy is one of the biggest concerns in tech news recently. The importance of personal privacy is something everybody seems to be able to agree on. But if you're concerned about privacy, then you need to stop giving your information away willy-nilly. Because you probably are.

Well, maybe you are. See how many of the following best practices you already follow.

  1. Passwords. Install and learn how to use a password manager on all your devices. There are many fine ones on the market.
  2. Let your password manager generate your passwords for you. You never even need to know what your passwords are, once you've got the password manager set up. (Curious what generated secrets look like? See the short sketch after this list.)
  3. Make sure you choose a strong, memorable master password for the password manager itself!
  4. Stop letting your browser save passwords. Your password manager handles that.
  5. If ever you have reason to send a password to another person online, split it into two or more pieces and send them through different channels (texts, emails, whatever), then delete those messages once the password has been received. Some password managers also have features that help with this.
  6. Credit cards and other personal info. Stop letting your browser save your credit cards. Your password manager handles that.
  7. Stop letting web vendors save your credit card info on their servers, unless absolutely necessary (e.g., for subscriptions). Again, your password manager handles that. Maybe you should go delete them now. I'll wait.
  8. If you give your credit card info out online, always check that the website shows the "lock" icon next to its address in the address bar. That means it uses the https protocol (i.e., the connection is encrypted).
  9. Stop answering "additional security" questions with correct answers, especially correct answers that hackers might discover with research. Treat the answer fields as passwords, and record them in your password manager.
  10. Stop filling out the "optional" information on account registration forms. Give away only the required information.
  11. Americans, for chrissakes stop giving out your social security number and allowing others to use it as an ID, unless absolutely required.
  12. Stop giving your email address out when doing face-to-face purchases. Those companies don't actually need it.
  13. Stop trusting the Internet giants with your data. Consider moving away from Gmail. Google has admitted it reads your mail—all the better to market to you, my dear. Gmail isn't all that, really.
  14. Maintain your own calendar. When scheduling meetings, let others add your name, but don't let them add your email address, if you have a choice.
  15. Maintain your own contacts. No need to let one of the Internet giants take control of that for you. It's not that hard. Then have them delete their copies.
  16. If you're an Apple person, stop using iCloud to sync your devices. Use wi-fi instead.
  17. Browser and search engine hygiene. Use a privacy-respecting browser, such as Brave or Firefox. (This will stop your browsing activity from being needlessly shared with Google or Microsoft.)
  18. If you must use a browser without built-in tracking protection (like Chrome), then use a tracker-blocking extension (like Privacy Badger).
  19. Use a privacy-respecting search engine, such as DuckDuckGo or Qwant. (Ditto.)
  20. Social media, if you must. On social media, start learning the privacy settings and taking them more seriously. There are many options that allow you to lock down your data to some degree.
  21. Make posts "private" on Facebook, especially if they have any personal details. If you didn't know the difference between "private" and "public" posts, learn this. And a friend says: "Stop playing Facebook quizzes."
  22. Stop digitally labeling your photos and other social posts with time and location. Make sure that metadata is removed before you post. (If you must mention the location, putting it in the text description is better.)
  23. For crying out loud, stop posting totally public pictures of your vacation while you are on vacation. Those pictures are very interesting to burglars. Wait until you get home, at least.
  24. Sorry, but stop sharing pictures of your children on social. (This is just my opinion. I know you might differ. But it makes me nervous.)
  25. Consider quitting social media altogether. Their business models are extremely hostile to privacy. You (and your private info) are the product, after all.
  26. A couple of obvious(?) last items. Make sure you're using a firewall and some sort of anti-virus software.
  27. Don't be the idiot who opens email attachments from strangers.
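
(About tips 2 and 9: if you're curious what "letting software invent your secrets" looks like under the hood, here's a minimal sketch in Python. It isn't any particular password manager's code, just the standard-library way of generating a random password and a random answer for a "security question.")

    import secrets
    import string

    # A random password drawn from a mixed alphabet; 20 characters is plenty.
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    password = "".join(secrets.choice(alphabet) for _ in range(20))

    # A random string to use as a "security question" answer (tip 9):
    # treat it like a password and record it in your password manager.
    fake_maiden_name = secrets.token_urlsafe(16)

    print(password)
    print(fake_maiden_name)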

How many did you answer "I do that!" to? I scored 22, to be totally honest, but it'll be up to 27 soon. Answer below. Well, answer only if you have a high score, or if you use a pseudonym. I don't want hackers to know who they can hit up for an easy win!


Kick the tech giants out of your life

If you're like me, you feel a need to kick the tech giants out of your life. But how? Well, nobody said it would be easy, but I'm actually doing it!

Stop using Google Chrome. Google is contemptuous of your privacy and of free speech. I recommend Brave.

Stop using Google Search. Google logs what you search for, and it tracks you after you search. I recommend DuckDuckGo, with results just as good as Google's 90+% of the time, in my experience.

Stop using Gmail. Look. Gmail is way overrated. And there are many, many other options out there which do not read your mail and extract marketable data.

Stop using Google Contacts and iCloud. Start managing your own contacts and data. There are lots of great tools to do this; it's not that hard.

Shields up on all the tech giants' websites and devices. Dive into the innards of your settings (or options)—not just a few, all of them, because they like to hide things—and set your privacy settings to max.

Maybe quit social media. Facebook, Twitter, YouTube, and others have become increasingly censorious and contemptuous of your privacy. Make them less relevant by spending more time elsewhere, if you can't just quit for good.

Use a password manager. Stop letting your browser track your passwords.

And then, if you want to get serious:

Start learning Linux... Microsoft's problems with privacy and security are famous. Apple has its own too. Well, there are these things called "virtual machines" which make it easy (and free) to install and play with your very own Linux installation. Try it!

...then switch to Linux. If you know how to use Linux, why not make the switch to something more permanent? You can always dual-boot.


How I locked down my passwords

If you’re one of those people who uses the same password for everything, especially if it’s a simple password, you’re a fool and you need to stop. But if you’re going to maintain a zillion different passwords for a zillion different sites, how?

Password management software.

I've been using the free, open source KeePass, which is secure and works, but it doesn't integrate well with browsers, or let me save my password data securely in the cloud (or maybe better, on the blockchain). So I'm going to get a better password manager and set it up on all my devices. This is essential to locking down my cyber-life.

One of the ways Facebook, LinkedIn, et al. insinuate themselves into our cyber-lives is by giving us an easy way to log in to other sites. But that makes it easier for them to track us everywhere. Well, if you install a decent password manager, then you don’t have to depend on social login services. Just skip them and use the omnipresent “log in with email” option every time. Your password manager will make it about as easy as social login systems did, but much more securely and privately.

You need a password manager

Password management software securely holds your passwords and brings them out, also securely, when you're logging in to websites in your desktop and handheld browsers. Decent browsers (like Brave) make your passwords available for the same purposes, if you let them, but there are strong reasons you shouldn't rely on your browser to act as a password manager.

Instead, for many years I've been using KeePass, a free (open source) password manager that's been around for quite a while. The problem with KeePass, as with a lot of open source software, is that it's a bit clunky. I never did get it to play nicely with browsers.

Password managers can, of course, automatically generate passwords and save them securely. They can also (though not all do) store your password database reasonably securely in the cloud (assuming you trust public clouds, which maybe you shouldn't), so you don't have to worry about losing it; you can export a copy if you like. You can use your passwords on all your devices with equal ease. The software will even let you grab your passwords with a fingerprint (or whatever) on your phone.

A very nice feature is that they'll securely store payment information, so your browser, websites, and operating system don't have to hold that information. That means you don't have to trust browsers, websites, and operating systems to manage this information securely. You only need to trust the password manager...

But can you trust password managers?

"Ah," you say, "but can you trust password managers?" That's not a bad or naive question at all; it's an excellent question. Consumer Reports, of all things, weighs in:

By default, LastPass, 1Password, and Dashlane store your password vault on their servers, allowing you to easily sync your data across devices. As a second benefit, if your computer crashes you won’t lose your vault.

But some people just really hate the idea of storing all their passwords on one site in the cloud—no matter what the company promises about its security measures, there's probably a bulls-eye painted on its encrypted back. If that sounds like you, it's possible to store your passwords locally.

Dashlane lets you do this by disabling the “Sync” feature in Preferences. This will delete your vault and its contents from the company’s servers. Of course, any further changes you make to your vault on your computer won’t show up on your other devices.

So what's my take? Hopefully there are layers of security protecting your password repository, not least of which is the (hopefully well-chosen) master password to your password database. While you do have to trust in the professionalism and honesty of a cloud-based password manager company, security is their business, so I want to trust them. But, but!

I ask myself: what is more likely, that they become compromised (for whatever reason—let your imagination run wild) or instead that I lose my master password or all copies of my password database or somehow allow myself to be hacked? I think both are fairly unlikely, first of all. I am certainly inclined to distrust myself, especially over the long haul. And frankly, the idea that a security business is compromised seems unlikely, since security is their business. But could a password manager server be hacked? That is, again, a really good question, and you wouldn't be the first to ask it. Password manager company OneLogin was actually hacked, and the hackers could actually "decrypt encrypted data," the company said. Holy crap!

Also, which outcome would be more disastrous? Losing my password file would not be a disaster; I can easily generate new passwords; that's just a pain, not a disaster. But a hacker getting hold of my passwords in the cloud (no matter how unlikely)? That could be pretty damn bad.

After all, especially as password manager companies grow in size (as successful companies are wont to do), they naturally can be expected to become a honeypot for hackers. Another example of a hacked password management company was LastPass, which was hacked in 2015, although without exposing their users' passwords.

If you're like me, you have libertarian concerns about having to trust external entities (and especially giant corporations) with your entire digital life. You might also worry about trusting (future?) dangerous governments with the power to force those corporations to give access to your entire digital life; at that point we're no longer talking about anti-crime cybersecurity. If so, then it looks like you shouldn't (sensibly) put your password files in a corporate-managed cloud; that requires trusting people a little too much for comfort. You should manage the location of your password data yourself.

Then there are two further problems. First, can you be sure that it is impossible for anyone at the password management software company to crack your password database, even if you host it yourself? (Do they have a copy? Can they get access to a copy? If they have access, are there any back doors?)

Second, there's the practical issue: Without the cloud, how do you sync your passwords between all your devices? That feature is the main advantage of hosting your passwords in the cloud. So how can you do it automatically, quickly, and easily?

What self-hosted password manager is really secure?

Several password managers use the cloud, but what is stored in the cloud is only the encrypted data. All the login and decryption happens on your local device. This is called zero-knowledge security, and it might be a suitable compromise for many. I have one main issue with this: Especially if the software is proprietary, we must simply trust the company that that is, in fact, how it works. But that's a lot to ask. So I'll pass on these. I'll manage the hosting of my own passwords, thanks very much.
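
To make the zero-knowledge idea concrete, here's a minimal sketch in Python (using the third-party cryptography package). It is purely illustrative; I'm not claiming any particular product works exactly this way. But it shows the basic shape: the key is derived from your master password on your own device, and only ciphertext would ever be uploaded.

    # Illustrative only: client-side ("zero-knowledge"-style) vault encryption.
    # Requires the third-party "cryptography" package (pip install cryptography).
    import base64
    import os

    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def derive_key(master_password: str, salt: bytes) -> bytes:
        # The master password never leaves this device; only a derived key is used.
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
        return base64.urlsafe_b64encode(kdf.derive(master_password.encode()))

    salt = os.urandom(16)  # stored next to the vault; it need not be secret
    key = derive_key("correct horse battery staple", salt)

    vault = b'{"example.com": "some-long-generated-password"}'
    ciphertext = Fernet(key).encrypt(vault)  # only this would ever be synced to a server

    # Decryption also happens locally; the server only ever sees ciphertext.
    assert Fernet(key).decrypt(ciphertext) == vault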

Here are my notes on various password managers:

  1. These all feature zero-knowledge security but seem not to allow the user to turn off cloud sync (maybe they do, I just couldn't find evidence that they do): 1Password, Keeper Password Manager, LastPass, LogMeOnce, Password Boss, Zoho Vault.
  2. Sticky Password Premium: Allows home wifi sync of passwords, which is just fine. Fills out forms, works on all your devices...except Linux devices. Linux does not seem to be supported. Next!
  3. RoboForm: Doesn't have a sync feature without using their cloud service, but hey! It has a Linux version! Might work on Brave, since Brave is built on Chromium and there is a Chrome extension. This was enough for me to install it (and it worked!), but it seems to be rather clunky and there were a few different things that didn't inspire confidence.
  4. Dashlane: This has zero-knowledge security, which isn't a bad thing, but in addition, it allows you to disable sync. When you turn sync off, the password data is wiped from their servers (so they say). You can turn it on again and sync your devices, then turn it off again. This is within my tolerance. Also, Dashlane has a Linux version. In other respects, Dashlane seems very good. I installed it and input a password. The UX is very inviting—even the Linux version. It's expensive, though: it's a subscription, and it's $40 for the first year (if you use an affiliate link, I guess), and $60 if you buy it direct, which I'm guessing will be the yearly price going forward. That's pretty steep for a password manager.
  5. EnPass: Here's something unusual—a password manager that goes out of its way to support all platforms, including Linux and even Chromebook (not that I'd ever own one of those). Rather than an expensive subscription, like Dashlane's, EnPass's desktop app is free, while the mobile version costs $10, a one-time fee. They don't store passwords in the cloud; passwords are stored locally, but EnPass has some built-in ways to sync the passwords (including by wi-fi, like Sticky Password). The autofill apparently doesn't work as well as in more expensive options like Dashlane, and EnPass lacks two-factor authentication and other "luxury" features, which would be nice.

Installation and next steps

Dear reader, I went with EnPass.

So how did I get started? Well, the to-do list was fairly substantial. I...

  1. Made a new master password. I read up on strategies for making a password that is strong, easy to remember, and easy to type. I ended up inventing my own strategy. (Do that! Be creative! A sketch of one common approach appears after this list.) So my master password ended up being a bit of a compromise. While it's very strong, it's a bit of a pain to type; but it's pretty easy to remember. Whatever master password you choose, just make sure you don't forget it, or you'll lose access to your password database.
  2. Installed EnPass on Windows and Linux and tested it to see if it worked well in both. It does (so far).
  3. Used EnPass to sync the two installations using a cloud service. (I'll be replacing this with Resilio Sync soon enough, so it'll be 100% cloudless.) I confirmed that if I change a password in one, it is synced in the other.
  4. Imported all my KeePass passwords, then tested a bit more on both platforms to make sure nothing surprising was happening. So far, so good. My only misgiving about EnPass so far is that there doesn't seem to be a keyboard shortcut to automatically choose the login info. I actually have to double-click on the item I want, apparently.
  5. Deleted all passwords from all browsers, and made sure the browsers no longer offer to save new passwords. Let the password manager handle that from now on. (No need for the redundancy; that's a bit of extra and unnecessary risk.)
  6. Installed on my cell phone, synced (without issue), and tested. (Annoyingly, the EnPass iOS app doesn't do autofill, but I gather that's in the plans.)
  7. Installed app and browser plugin on my (Mac) laptop. No issues there either.
  8. Deleted KeePass data in all locations. That's now redundant and a needless risk as well.
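
As promised in step 1, here is a minimal sketch of one common master-password strategy, a diceware-style passphrase: several words chosen at random from a big list. The word-list path below is just an assumption for illustration; invent your own twist rather than copying anyone's recipe exactly.

    import secrets

    # Hypothetical word-list location; many Unix systems ship one here.
    with open("/usr/share/dict/words") as f:
        words = [w.strip().lower() for w in f
                 if w.strip().isalpha() and 3 <= len(w.strip()) <= 8]

    # Six random words: long and strong, but far easier to remember and type
    # than twenty characters of line noise.
    passphrase = "-".join(secrets.choice(words) for _ in range(6))
    print(passphrase)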

I'm now enjoying the new, secure, and easy access to my passwords on all my devices. I'm also happy to be free of browser password managers.

This was installment four in my series on how I'm locking down my cyber-life.


How I set up private email hosting for my family

Here's how I actually set up my own private email hosting—sanger.io! I had already chosen a private email hosting provider. So what was the next step?

I still had to choose a plan with my chosen provider (InMotion Hosting, which didn't pay me anything for this) and make it official. The details are uninteresting; anybody could do that part.

Now the hard work (such as it was) began. I...

(1) Read over the domain host's getting-started guide for email. InMotion's is here, and if you have a different host, they're bound to have some instructions as well. If you get confused, their excellent customer service department can hold your hand a lot.

(2) Created a sanger.io email address, since that's what they said to do first. In case you want to email me, my username is 'larry'. (Noice and simple, ey?) InMotion let me create an email address, though I was rather confused about how this could possibly work, since I hadn't yet pointed my DNS (hosted by NameCheap) to InMotion.

(3) Chose one of the domain host's webmail options. For a webmail app (InMotion gave me a choice of three), I went with Horde, which is, not surprisingly, a little bit clunky compared to Gmail, but so far not worse than ZohoMail; we'll see. Unsurprisingly, when I tried to send an email from my old Gmail account to my new @sanger.io account, the latter didn't receive it. Definitely need to do some DNS work first...

(4) Pointed my domain name to the right mail server. In technical jargon, I created an MX record on my DNS host. This was surprisingly simple. I just created an MXE Record on NameCheap, my DNS host for sanger.io, and pointed it to an IP address I found on InMotion. So basically, I just found the right place to paste in the IP address, and it was done. Now I can send and receive email via sanger.io (at least via webmail).
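
If you want to double-check that the new MX record is actually visible to the rest of the world (and not just saved in NameCheap's control panel), a quick lookup will tell you. Here's a small sketch using the third-party dnspython package; substitute your own domain, of course.

    # pip install dnspython
    import dns.resolver

    # Ask the public DNS system where mail for the domain should be delivered.
    for record in dns.resolver.resolve("sanger.io", "MX"):
        print(record.preference, record.exchange)
    # If this prints the mail server you configured, other mail servers can
    # now find out where to deliver your @sanger.io mail.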

(5) Created email addresses for my other family members. Very easy.

(6) Installed a desktop email client. Why? I wasn't using one before because I just used Gmail in a browser and Apple's mail app on my phone. I could keep using webmail (on InMotion) but a desktop client is apt to be nicer. I'd tell you which one I used, but I'm not confident it's particularly good.

(7) Installed a new email client for my phone. As I no longer trust or want to support Apple if I can at all help it, I wanted to stop using their email client. I paid $10 for a privacy-touting mail client which is quite good so far: Canary Mail.

(8) Changed the email address registered with the big, consequential apps and services. This was the most labor-intensive step, and the step I most dreaded. Sure, it was a pain. But it turns out it was tremendously satisfying to be able to tell them to stop using my wretched Gmail address and instead to start using my slick new permanent and personalized address. Was that fun? Heck yeah it was! Anyway, such apps and services include

  • The massive Internet and tech services: Google, Microsoft, Apple.
  • The big social media/community accounts: Facebook, Twitter, YouTube, Quora, Medium, LinkedIn.
  • Companies I pay money to: Amazon, Netflix, PayPal, Patreon, InMotion, GoDaddy, NameCheap, Heroku, LifeLock, The Great Courses, any other bills.
  • Important stuff: my employer, the bank, medical info systems/apps, dentist, Coinbase.
  • Family, friends, and work and business people. Send them the message three times spread over a month or two, because if they're like me, they ignore such emails or don't act on them right away, and some old aunt of mine will keep sending mail to my Gmail address for years and years. (I haven't actually done this one yet, but will soon. Gmail makes exporting all your relevant contact info surprisingly difficult.)

(9) Created a Gmail forwarder! Buh-bye, Google! No need even to visit your crappy, biased, would-be totalitarian service for email any longer.

(10) Clean up and consolidation. There are a zillion little consequences when you change your email on all these big services, and I expect I'll be dealing with the consequences (nothing major!) for a few days or weeks to come. Among the things I know I'll have to do: (a) Install and configure mail clients on my laptop and iPad, and in other ways get those other devices working as expected again. (b) Update various email clients with address book information, as needed. (c) Actually collect my contacts from Google and Apple (harder than it sounds). (d) Change entries in my password manager from @gmail.com to @sanger.io. (e) Actually, get a new password manager...but that's a whole nuther thang. (f) Get Microsoft and Google and whatever else to forget my contacts...ditto.

This was installment three in my series on how I'm locking down my cyber-life.


How I'm locking down my cyber-life

Drafted Jan. 4, 2019; updated occasionally since then; most recently updated May 11, 2019

Three problems of computer technology

My 2019 New Year's resolution (along with getting into shape, of course) is to lock down my cyber-life. This is for three reasons.

First, threats to Internet security of all sorts have evolved beyond the reckoning of most of us, and if you have been paying attention, you wonder what you should really be doing in response. My phone was recently hacked and my Google password reset. The threats can come from criminals, ideological foes and people with a vendetta or a mission (of whatever sort), foreign powers, and—of special concern for some of us—the ubiquitous, massively intrusive ministrations of the tech giants.

Second, the Silicon Valley behemoths have decided to move beyond mere moderation for objectively abusive behavior and shutting down (really obvious) terrorist organizations, to start engaging in viewpoint censorship of conservatives and libertarians. As a free speech libertarian who has lived online for much of my life since 1994, I find these developments deeply concerning. The culprits include the so-called FAANG companies (Facebook, Apple, Amazon, Netflix, Google), but to that list we must add YouTube, Twitter, and Microsoft. Many of us have been saying that we must take ourselves out of the hands of these networks—but exactly how to do so is evidently difficult. Still, I'm motivated to try.

A third reason is that the same Big Tech corporations, with perhaps Facebook and Google being the worst offenders, have been selling our privacy. This is not only deeply offensive and something I refuse to participate in; it again puts my and my family's safety at risk, creating new "attack surfaces" (to use the information security jargon) that corporations must protect on our behalf. They may not do a good job of that. Similarly, governments have taken it upon themselves to monitor us systematically—for our safety, of course. But if you're like me, this again will make you feel less safe, not more, because we don't know what bad actors are at work in otherwise decent governments, we don't know what more corrupt governments might do with the information when we travel abroad, and we don't know the future shape of our own governments.

At the root of all these problems is simply that the fantastic efficiency and simplicity of computer technology has been enabled by our participation in networks (especially cloud networks) and our agreement to user agreements offered by massively rich and powerful corporations. Naturally, because what they offer is so valuable and because it is offered at reasonable prices (often, free), they can demand a great deal of information and control in exchange. This dynamic has led to most of us shipping them boatloads of our data. That's a honeypot for criminals, authoritarians, and marketers, as I've explained in more depth.

The only thing we can do about this systematic monitoring and control is to stop letting the tech giants do it to us. That's why I want to kick them out of my life.

The threats to our information security and privacy undermine some basic principles of the decentralized Internet that blossomed in the 90s and boomed in the 00s. The Establishment has taken over what was once a centerless, mostly privacy-respecting phenomenon of civil society, transforming it into something centralized, invasive, risky, and controlling. What was once the technology of personal autonomy has enabled—as never before—cybercrime, collectivization, mob rule, and censorship.

A plan

Perhaps some regulation is in order. But I don't propose to try to lead a political fight. I just want to know what I can do personally to mitigate my own risks. I don't want to take the easy or even the slightly-difficult route to securing my privacy; I want to be hardcore, if not extreme.

I'm not sure of the complete list of things that I ought to do (I want to re-read Kevin Mitnick's excellent book The Art of Invisibility for more ideas), but since I started working on this privacy-protection project in January of 2019, I have collected many ideas and acted on almost all of them as of the current edition. I will examine some of these in more depth (in other blog posts, perhaps) before I take action, but others I have already implemented.

  1. Stop using Chrome. (Done.) Google collects massive amounts of information from us via their browser. The good news is that you don't have to use it, if you're among the 62% of people who do. I've been using Firefox; but I haven't been happy about that. The Mozilla organization, which manages the browser, is evidently dominated by the Silicon Valley left; they forced out Brendan Eich, one of the creators of Firefox and the JavaScript programming language, for his political views. Frankly, I don't trust them. I've switched to Eich's newer, privacy-focused browser, Brave. I've had a much better experience using it lately than I had when I first tried it a year or two ago, when it was still on the bleeding edge. Brave automatically blocks ads, trackers, and third-party cookies, and encrypts your connections where it can—and, unlike Google, Brave doesn't have a profile about you (or rather, the profile never leaves your machine; the Brave company doesn't have access to it). As a browser, it's quite good and a pleasure to use. It also pays you in crypto for using it. There might be a few rare issues (maybe connected with JavaScript), but when I suspect there's a problem with the browser, I try whatever I'm trying to do in a locked-down version of Firefox, which is now my fallback. There's absolutely no need to use Chrome for anything but testing, and that's only if you're in Web development. By the way, the Brave iOS app is really nice, too.
  2. Stop using Google Search. (Done; needs more research though.) I understand that sometimes, getting the right answer requires that you use Google, because it does, generally, give the best search results. But I get surprisingly good results from DuckDuckGo (DDG), which I've been using for quite a while now. Like Brave and unlike Google, DDG doesn't track you and respects your privacy. You're not the product. It is easy to go to your browser's Settings page and switch. Here's a trick I've learned, for when DDG's results are disappointing (maybe 10% of the time for me): I use another private search engine, StartPage (formerly Ixquick), which reportedly is based on Google search results, but I see differences on some searches, so it's not just a private front end for Google. You might prefer StartPage over DDG, but on balance I still prefer DDG. Still, I should research the differences some more, perhaps.
  3. Start using (better) password management software. Don't let your browser store your passwords. And never use another social login again. (Done.) You need to practice good "password hygiene." If you're one of those people who uses the same password for everything, especially if it's a simple password, you're a fool and you need to stop. But if you're going to maintain a zillion different strong passwords for a zillion different sites, how? Password management software. For many years I used the free, open source KeePass, which is secure and it works, but it doesn't integrate well with browsers, or let me save my password data securely in the cloud (or maybe better, on the blockchain). So I got a better password manager and set it up on all my devices. I switched to EnPass. This is essential to locking down my cyber-life. Along these lines, there are a couple of other things you should do, and which I did: set my browsers to stop tracking my passwords, and never let them save another one of my passwords. (But be aware that automatic logins to a site are handled more securely by a cookie, called a token, which the site sets; that doesn't involve a stored plain-text password. When a website asks me if I want to log in automatically, via a checkbox in the login form, I say yes; but when a browser asks if I want it to remember my password, the answer is always no.) Finally, one of the ways Facebook, LinkedIn, et al. insinuate themselves into our cyber-lives is by giving us an easy way to log in to other sites. But that makes it easier for them to track us everywhere. Well, if you install a decent password manager, then you don't have to depend on social login services (based on the OAuth standard). Just skip them and use the omnipresent "log in with email" option every time. (I haven't encountered a website that absolutely requires social media logins yet.) Your password manager will make it about as easy to log in as social media services did.
  4. Stop using gmail. (Done.) This was harder, and figuring out and executing the logistics of it was a chore—it involved changing all the accounts, especially the important accounts, that use my gmail address. I had wanted to do this for a while, but the sheer number of hours it was going to take to make the necessary changes was daunting (and I was right: it did take quite a few hours altogether). But I was totally committed to taking this step, so I did. Another reason is that I figured that I could get a single email address for the rest of my life. So my new email address resides at sanger.io, a domain (with personalized email addresses) that my family will be able to use potentially for generations to come. Here's how I chose an email hosting service to replace Gmail. And here's how I set up private email hosting for my family.
  5. Stop using iCloud to sync your iPhone data with your desktop and laptop data; replace it with wi-fi sync. (Done.) If you must use a smartphone, and if (like mine) it's an iPhone, then at least stop putting all your precious data on Apple servers, i.e., on iCloud. It's very easy to turn iCloud syncing off. After you do that, you can go tell iTunes to sync your contacts, calendars, and other information via wi-fi; here's how. And I'm sorry to break it to you, but Apple really ain't all that. By the way, a few months after writing the above, I looked more carefully at the settings area of my iPhone for data stored in iCloud; it turns out I had to delete each category of data one at a time, and I hadn't done that yet. They don't make it easy to turn off completely, but I think I have now.
  6. Subscribe to a VPN. (Done.) This sounds highly difficult and technical at first glance, maybe, but in fact it's one of the easiest things you can do. I set mine up in minutes; the thing that took a few hours was researching which one to get. But why a VPN? Well, websites can still get quite a bit of info about you from your IP address, and your ISP (or governments that request the data) can listen in on any data that happens to be unencrypted on your web connection. VPNs solve those problems by making your connection to the Internet anonymous. One problem with VPNs is that they slightly slow down your Internet connection; in my experience so far, it's rarely enough to make a difference. They also add a little new complexity to your life, and it is possible that the VPN companies are misrepresenting what they do with your data (some of the claims of some VPNs have been tested, though). But it's a great step to take if you're serious about privacy, if you don't mind the slight hit to your connection speed. A nice fallback is the built-in private windows in Brave that run on the Tor network, which operates on a somewhat similar principle to VPNs.
  7. Get identity theft protection. (Done.) After my phone was hacked, I finally did something I've been meaning to do for a long time—subscribe to an identity theft protection service. If you don't know or care about identity theft, that's probably because you've never seen weird charges pop up on your card, or had your card frozen by your bank, or whatever. BTW, LifeLock's customer service isn't very good, in my experience, and also according to the FTC. There are others.
  8. Switch to Linux. (Done.) I used a Linux (Ubuntu) virtual machine for programming for a while. Linux is stable and usable for most purposes. It still has very minor usability issues for beginners, but if you're up to speed, it's simply better than Windows or Mac, period, in almost every way. On balance the "beginner" issues aren't nearly as severe as those associated with using products by Microsoft and Apple. I've put Ubuntu on a partition on my workstation, and switched to that as my main work environment. I also gave away my Mac laptop and got a new laptop, on which I did a clean install, also of Ubuntu. Linux is generally more secure, gives the user more control, and most importantly does not have a giant multinational corporation behind it that wants to take and sell your information. Read more about how I switched to Ubuntu on my desktop and also my laptop.
  9. Quit social media, or at least nail down a sensible social media use policy. (Done.) I'm extremely ambivalent about my ongoing use of social media. I took a break for over a month (which was nice), but I decided that it is too important for my career to be plugged in to the most common networks. If I'm going to use them, I feel like I need to create a set of rules for myself to follow—so I don't get sucked back in. I also want to reconsider how I might use alternative social networks, like Gab (which has problems), and social media tools that make it easy both to post and to keep an easily-accessible archive of my posts. One of my biggest problems with all social media networks is that they make it extremely difficult to download and control your own friggin' data—how dare they. Well, there are tools to take care of that... Anyway, you can read more about how I settled on a social media use policy.
  10. Stop using public cloud storage. (Done.) "Now," you're going to tell me, "you're getting unreasonable. This is out of hand. Not back up to Dropbox, iCloud, Google Drive, Box, or OneDrive? Not have the convenience of having the same files on all my machines equally available? Are you crazy?" I'm not crazy. You might not realize what is now possible without the big "public cloud" services. If you're serious about this privacy stuff and you really don't trust Big Tech anymore—I sure don't—then yeah. This is necessary too. One option is Resilio Sync, which moves files between your devices via deeply encrypted connections (using a modified version of the BitTorrent protocol), with the files never landing anywhere but on your devices. Another option is to use a NAS (network attached storage device), which is basically your very own always-on cloud server that only you can access, but which you can access from anywhere via an encrypted Internet connection. There are also open source Dropbox competitors that do use the cloud (the term to search for is "zero-knowledge encryption"), but which are arguably more secure; at any rate, you're in control of them. Yet another option is to run a cloud server from your desktop (if it's always on), using something like NextCloud. At first, I decided to go with Resilio Sync. Then I changed my mind, because it was a pain to be able to sync only when both devices are on, so I took the plunge and got a NAS after all. It took quite a while both to deliberate on what type of solution to go with (after Resilio) and to choose a specific NAS, quite a few hours altogether, but it turns out to be so useful. If you want to consider this more, check out my explanation of why they're such a good idea.
  11. Nail down a backup plan. (Done.) If you're going to avoid using so much centralized and cloud software, you've got to think not just about security but about backing up your data. I used to use a monster of a backup drive, but I wasn't even doing regularly-scheduled backups. In the end what I did was, again, to install a NAS. This provides storage space, making a complete backup of everything on my desktop (and a subset of files I put on my laptop) and on the other computers in the house (that need backing up; perhaps not all of them do). It also keeps files instantly backed up a la Dropbox (see the previous item). But even this isn't good enough. If you really want protection against fire and theft, you must have an off-site backup. For that, I decided to bite the bullet and go with a relatively simple zero-knowledge encryption service, iDrive, that works nicely with my NAS system. It simply backs up the whole NAS. It bothers me that their software isn't open source (so I have to trust them that the code really does use zero-knowledge encryption), but I'm not sure what other reasonable solution I have, if I want off-site backup.
  12. Take control of my contact and friend lists. (Partly done.) I've been giving Google, Apple, and Microsoft too much authority to manage my contacts for me, and I've shared my Facebook and other friends lists too much. I'm not sure I want these companies knowing my contacts and friends, period; the convenience I got out of sharing those lists was very limited, but the lists were evidently of great value to Big Tech. I don't know what they're doing with the information, or who they're sharing it with, really. Besides, if my friends play fast and loose with privacy settings, my privacy can suffer—and vice-versa. So I'm going to start maintaining my own contacts, thanks very much, and delete the lists I've given to Google and Microsoft. I'm glad I've already stopped putting this information on iCloud. The next step, at present writing, is to start using my NAS's built-in contacts server, which makes it possible to sync contact info across your devices using your own personal server. Then I'll permanently delete contact data from all corporate servers (as much as they generously let me do so).
  13. Stop using Google Calendar. (Done.) I just don't trust Google with this information, and frankly, Gcal isn't all that. I mean, it's OK. But they are clearly reading your calendar (using software, that is; that means the calendar data isn't encrypted on their servers, as it should be). So after I got my own NAS server, I was able to install a calendar server that could be accessed and synced from all of my devices. I had to transfer my data from Gcal to the server, which wasn't very hard. The hardest part was that I had to teach a colleague how to make appointments for me using the new system. Here are my notes on how I made the change.
  14. Study and make use of website/service/device privacy options. (In progress.) Google, Apple, Facebook, Twitter, YouTube, etc., all have privacy policies and options available to the user. It is time to study and regularly review them, and put shields up to maximum. Of course, it's better if I can switch to services that don't pose privacy threats; that's generally been my solution, but I have looked at quite a few privacy options and read privacy policies in order to do my due diligence about how my information is being used.
  15. Also study the privacy of other categories of data. Banking data, health data, travel data (via Google, Apple, Uber, Yelp, etc.), shopping data (Amazon, etc.)—it all has unique vulnerabilities that are important to be aware of. I'm not sure I've done all I can to lock it down. So I want to do that, even if (as seems very probable) I can't lock it all down satisfactorily, yet.
  16. Figure out how to change my passwords regularly, maybe. (Not started.) I might want to make a list of all my important passwords and change them quarterly everywhere, as a sort of cyber-hygiene. Why don't we make a practice of this? Because it's a pain in the ass and most people don't know how to use password management software, that's why. Besides, security experts actually discourage regular password changing, but that's mainly because most people are bad at making and tracking secure passwords. Well, if you use password managers, that part isn't so hard. But it's also because we really don't have a realistic plan to do it; maybe the main thing to do is to regularly change a few important passwords every so often, not all of them. I'll figure that out.
  17. Consider using PGP, the old encryption protocol (or a modern implementation of it, like GNU Privacy Guard) with work colleagues and family who are into it. (Not started.) Think about this: when your email makes the transit from your device to its recipient's device, it passes through quite a few other machines. Hackers have ways of viewing your mail at different points on its journey. Theoretically, they could even change it, and you (and its recipient) would be none the wiser. Now, don't freak out, and don't get me wrong; I'm not saying email (assuming the servers in between you and your recipients use the standard TLS, or Transport Layer Security, protocol) isn't perfectly useful for everyday purposes. But if you're doing anything really important and sensitive, either don't use email or use a higher encryption standard, because basic email is insecure. Now, I'm aware that some think PGP is outmoded or too complex (that's why I never got into it, to be honest), but the general idea of encrypting your email more strongly isn't going out of style, and improvements on the PGP approach are still actively maintained. (There's a toy sketch of the underlying public-key idea just after this list.) Still, when information security might matter quite a bit, it might be easier to do what I'm doing now with my boys: using a chat tool with end-to-end encryption built in.
  18. Moar privacy thangs. Look into various other things one can do to lock down privacy. Consider the new Purism Librem 5 phone. Look into a physical security key for laptop and desktop. Encrypt my hard drives. Encrypt the drives on the NAS. Etc., etc.
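
As mentioned in item 17, here is a toy sketch of the public-key idea behind PGP, in Python with the third-party cryptography package. This is emphatically not GPG; real email encryption should go through GnuPG or a mail client with end-to-end encryption built in. It just shows why a message encrypted to someone's public key stays unreadable on every server it passes through.

    # pip install cryptography
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # The recipient generates a key pair once and publishes only the public key.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Anyone can encrypt a (short) message using the public key...
    ciphertext = public_key.encrypt(b"meet at noon", oaep)

    # ...but only the holder of the private key can read it, no matter how many
    # mail servers the ciphertext passes through on the way.
    assert private_key.decrypt(ciphertext, oaep) == b"meet at noon"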

What have I left out?

Are you going to join me in this push toward greater privacy and autonomy? Let me know—or, of course, you can keep it to yourself.


How the government can monitor U.S. citizens

Just what tools do American governments—federal, state, and local—have to monitor U.S. citizens? There are other such lists online, but I couldn't find one that struck me as being quite complete. This list omits strictly criminal tracking, because while criminals are citizens, actual crime obviously needs to be tracked.

  1. First, there's what you yourself reveal: the government can use whatever information you yourself put into the public domain. For some of us (like me), that's a heck of a lot of information.
  2. Government also tries to force tech companies to reveal our personal information, ostensibly to catch terrorists and criminals. The FBI and NSA have both been in the news about this.
  3. The NSA famously tracks our email and phone calls. They might be looking for terrorism and crime, but we're caught in the net too.
  4. The IRS, obviously, tracks your income, business information, and much else. That certainly qualifies as government monitoring.
  5. State, local, and school district tax systems do the same.
  6. The FBI's NSAC (National Security Branch Analysis Center) has hundreds of millions of records about U.S. citizens, many perfectly law-abiding.
  7. The State Dept., Homeland Security, and others contribute to systems that include biometric information on some 126 million citizens—that means fingerprints, photos, and other biographical information.
  8. For a small number of citizens—740,000 to 10 million, depending on the system—there is a lot more information available, not because the people are actual terrorists or criminals, but only because they are suspected of such activity. If someone in government with the authority thinks you fall into broad categories that make you possibly dangerous, they can start collecting a heck of a lot more information about you.
  9. The Census Bureau tracks our basic demographic information every ten years.
  10. U.S. school students in at least 41 states are tracked by Statewide Longitudinal Data Systems, including demographics, enrollment status, test scores, preparedness for college, etc.
  11. Many and various public cameras, including license plate readers, are used by many local authorities, mainly for crime prevention.
  12. Monitoring by police will be easier in the near future: As law professor Bill Quigley, an expert on the subject, puts it, "Soon, police everywhere will be equipped with handheld devices to collect fingerprint, face, iris and even DNA information on the spot and have it instantly sent to national databases for comparison and storage."
  13. The internet of things will be another avenue by which government will increasingly be able to view our habits.

So...explain to me again how we have a right to privacy under the Fourth Amendment.

By the way, it is a conceptual mistake to suppose that there is any one person or group of people who have access to (and care about) the information in all of these databases. How the databases are used is carefully circumscribed by law, obviously, and just because the information is in a database, it doesn't follow that there has been a privacy violation. But it does raise concerns in the aggregate: the extent to which we are monitored might be a problem even if most programs are individually constitutionally justifiable.

In short, is there any point at which we say "enough is enough"? Or do we grudgingly give government technical access into every area of our lives and hope that the law controls how the information is being used?

In the comments, please let me know what I've missed and I can do updates.

Sources: Common Dreams, ACLU, Ed.gov, Forbes, Wired, Guardian, and my own experience working at the Census Bureau long ago.


Why Edward Snowden deserves a pardon, explained in 10 easy steps

Let me put this briefly and simply. The government should not be snooping on us. But they started anyway. That was wrong and unconstitutional. When they did, they made their snooping program secret. That was wrong twice over, a cover-up of a wrong. Then they actually lied about the existence of the program to Congress, and the bureaucrat who perjured himself doing so is getting off scot free. Trebly wrong. And now when a low-level contractor, at tremendous risk to himself, courageously blows the whistle on this operation, he is threatened with extradition and very severe prosecution, rather than being pardoned. Quadruply wrong!

1. The Fourth Amendment is clear: the government may not indiscriminately snoop our private things. In the language of the Amendment, American citizens have the right to be "secure" in their "effects" against "unreasonable searches" except "upon probable cause" and a specification of the things to be searched.

2. But indiscriminate snooping is just what PRISM does. A surveillance program that regularly searches private telephone call metadata, as well as private Internet data, of virtually all American citizens seems on its face to violate the Fourth Amendment.

3. So PRISM is illegal and wrong. It sure looks unconstitutional.

4. And we had a right to know about it. Why wasn't the decision to start PRISM put before an open, public Congress? It was a decision with enormous potential consequences; it seems obvious that the American people had a right to decide whether they would be surveilled to this extent.

5. So it is doubly wrong that the PRISM program was hidden from us. We should have been able to voice our concerns to our representatives and the President when this program was started. But because it was implemented in secret, we couldn't. When it comes to how the entire population of the U.S. is treated--not just terrorism suspects--we have a constitutional republican democracy, not a secret government.

6. James Clapper's perjury is outrageous. When National Director of Intelligence James Clapper was asked by Sen. Ron Wyden on March 12, 2013, "Does the NSA collect any type of data at all on millions or hundreds of millions of Americans?" and he answered, "No, sir … not wittingly," he was not merely committing perjury. He was lying about a program that Americans had a right to know about, that it was important that they know about, because it affects all Americans' constitutional rights, and they have a right to assess and object to just such a program.

7. Edward Snowden is a hero for revealing the facts about PRISM. If it hadn't been for the courageous whistleblowing of Mr. Snowden, we would still be ignorant of this massive violation of our constitutional rights. Considering the huge risks to himself, his whistleblowing was simply heroic.

8. It is shockingly and trebly wrong that Edward Snowden is being persecuted for whistleblowing. It is true that, in leaking classified documents, Edward Snowden broke the law. But he did so in order to reveal a much more dangerous sort of official lawbreaking. He arguably had a moral obligation--and, fortunately, the courage--to do so, since he observed that no one else in the government was making the program public. It is outrageous that a person who reveals a wrong perpetrated by a supposedly open and democratic government is persecuted for it by that same government.

9. Instead, those responsible for PRISM--and for making it secret--should be made to answer for their actions. Even if they are not punished, they should be made to answer publicly for their clear abuse of the public trust. They should not have created this unconstitutional program, and just as importantly, it should not have been kept secret from the American people.

10. It will be quadruply wrong if Edward Snowden is not pardoned. "Often the best source of information about waste, fraud, and abuse in government is an existing government employee committed to public integrity and willing to speak out. Such acts of courage and patriotism ... should be encouraged rather than stifled." Who said this, and where? A libertarian defending Edward Snowden in Reason, perhaps? Not exactly. It was on the Obama transition team's website in 2009, back when Obama was being lauded as a "friend" to whistleblowers.

President Obama should pardon Snowden and, probably, Clapper too--and, on the assumption that they had laudable intentions, everyone involved in the creation of the program.

And then President Obama should actually encourage a public debate, and Congressional vote, on whether PRISM should continue to exist.

Wouldn't that be something.


A Defense of Modest Real Name Requirements

Lunchtime speech at the Harvard Journal of Law & Technology 13th Annual Symposium: Altered Identities, Harvard University, Cambridge, Massachusetts, March 13, 2008.

I. Introduction

Let me say up front, for the benefit of privacy advocates, that I agree entirely that it is possible to have an interesting discussion and productive collaborative effort among anonymous contributors, and I support the right to anonymity online, as a general rule. But, as I'm going to argue, such a right need not entail a right to be anonymous in every community online. After all, surely people also have the right to participate in communities in which real-world identities are required of all participants—that is, they have a right to join voluntary organizations in which everyone knows who everyone else really is. There are actually quite a few such communities online, although they tend to be academic communities.

Before I introduce my thesis, I want to distinguish two claims regarding anonymity: first, there is the claim that personal information should be available to the administrators of a website, but not necessarily publicly; and second, there's the claim that real names should appear publicly on one's contributions. I will be arguing for the latter claim, that real names should appear publicly.

But actually, I would like to put my thesis not in terms of how real names should appear, but instead in terms of what online communities are justified in requiring. Specifically in online knowledge communities—that is, Internet groups that are working to create publicly-accessible compendia of knowledge—organizers are justified in requiring that contributors use their own names, not pseudonyms. I maintain that if you want to log in and contribute to the world’s knowledge as part of an open, community project, it’s very reasonable to require that you use your real name. I don't want, right now, to make the more dramatic claim that we should require real names in online knowledge communities—I am saying merely that it is justified or warranted to do so.

Many Internet types would not give even this modest thesis a serious hearing. Most people who spend any time in online communities regard anonymity, or pseudonymity, as a right with very few exceptions. To these people, my love of real names makes me anathema. It is extremely unhip of me to suggest that people be required to use their real names in any online community. But since I have never been or aspired to be hip, that’s no great loss to me.

What I want to do in this talk is first to introduce the notion of an Internet knowledge community, and discuss how different types handle anonymity as a matter of policy. Then I will address some of the main arguments in favor of online anonymity. Finally, I will offer two arguments that it is justified to require real names for membership in online knowledge communities.

II. Some current practices in online knowledge communities

First, let me give you a definition for a phrase I'll be using throughout this talk. By online knowledge community I mean any group of people that gets organized via the Internet to create together what at least purports to be reliable information, or knowledge. And I distinguish a community that purports to create reliable information from a community that is merely engaging in conversation or mutual entertainment. So this excludes social networking sites like MySpace and Facebook, as well as most blogs, forums, and mailing lists. Digg.com might be a borderline case; calling that link rating website a "knowledge community" is again straining the definition, because I'm not sure that many people really purport to be passing out knowledge when they vote for a Web link. They're merely stating their opinion about what they find interesting; that's something different from offering up knowledge, it seems to me.

I want to give you a lot of examples of online knowledge communities, because I want to make a point. The first example that comes to mind, I suppose, would be Wikipedia, but there are also many other online encyclopedia projects, such as the Citizendium, Scholarpedia, and Conservapedia, among many others (and those are only the English-language projects, of course). Then there are many single-subject encyclopedia projects: in philosophy, the Stanford Encyclopedia of Philosophy and the Internet Encyclopedia of Philosophy; in biology, there is now the Encyclopedia of Life; in mathematics, there is MathWorld; in the Earth sciences, there is the Encyclopedia of Earth; and these are only a few examples.

But that’s just the encyclopedia projects. There are many other kinds of online knowledge communities. Another sort would be the Peer to Patent Project, started by NYU law professor Beth Noveck. Perhaps you could consider as an online knowledge community the various pre-print, or e-print, services, most notably arXiv, which has hundreds of thousands of papers in various scientific disciplines. This might be straining the definition, however. If you consider a pre-print service an online knowledge community, then perhaps you should consider any electronic journal such a community; indeed, perhaps we should, but I won’t argue the point. Anyway, I could go on multiplying examples, but I think it would get tedious, so I’ll stop there.

The examples I've given so far have been mostly academic and professional communities. And here I finally come to my point: out of all the projects named, the only ones in which real names are neither required nor strongly encouraged are Wikipedia and Conservapedia. This, of course, proves only that when academics and professionals get online, they tend to use their real names, which shouldn't be surprising to anyone.

But there are actually quite a few other online knowledge communities that don't require the use of real names. I have contributed a fair bit to one of them, TheSession.org, a very useful database of Irish traditional music, with information about tunes and recordings. There are many other hobbyist communities that don't require real names; just think of all the communities about games and fan fiction. Of course, then there are all the communities that support open source software projects. I doubt a single one of those requires the use of real names.

I haven't had time to do (or even find) a formal study of this, but I suspect that, as a general rule, academic projects either require or strongly encourage real names, while most other online knowledge communities do not. This should be no great surprise. Academics are used to publishing under their real names, mostly for professional reasons; with the advent of the Internet, many other people are contributing to the world's knowledge in various Internet projects, but they have no professional motivation to use their real names. For some people--for example, a lot of Wikipedians--privacy concerns far outweigh any personal benefit they might get from putting their names on their contributions.

So, how should we think about this? Is it justifiable to demand that anonymity be permitted in every online community, on grounds of privacy or any other grounds? I don't think so.

III. Some arguments for anonymity

Next, let's consider some arguments for anonymity as a policy, and briefly outline some replies to them. By no means, of course, do I claim to have the last word here. I know I am going very quickly over some very complex issues.

A. The argument from the right to privacy. The most important, and I think most persuasive, argument that anonymous or pseudonymous contribution should be permitted in online communities is that doing so protects our right to privacy. The use of identities different from one's real-world identity helps protect us against the harvesting of data by governments and corporations. Especially in open Internet projects, a sufficiently sophisticated search can produce vast amounts of data about what topics people are interested in, and much other information potentially of interest to one's employers, corporate competitors, criminals, government investigators, and marketers. This is a major, and I think growing, concern about Google, as well as many online communities like MySpace and Facebook. Like many people, I share those concerns, even though personally my life is an open book online--maybe too open. Still, I think privacy is an important right.

But I want to draw a crucial distinction here. There is a difference between, on the one hand, using a search engine, or sharing messages, pictures, music, and video with one's friends and family, and, on the other hand, adding to a database that is specifically intended to be consulted by the world as a knowledge reference. The difference is obvious if you think about it: there is simply no need to make your name or other information publicly available in order to do the former activities. When you are contributing to YouTube, for example, you can achieve your aims, and others can enjoy your productions, regardless of the connection, or lack thereof, between your online persona and your real-world identity. So, in those contexts, the connection between your persona and your identity should be strictly up to you. Likewise, whether you let a certain other person, or a marketer, see your Facebook profile should be strictly up to you. These online services have become extensions of our real lives, the details of which have been, and generally should remain, private, if we want them to be.

We have a clear interest in controlling information about our private lives: we have that interest, of course, because such information can be so easily abused, but also because we want to maintain our own reputations without having the harsh glare of public scrutiny shone on everything we do. Lack of privacy changes how we behave; indeed, we might behave more authentically, and have more to offer our friends and family, if we can be sure that our behavior is not on display to the entire world.

I've tried to explain why I support online privacy rights in most contexts. But I maintain that there is a large difference between social networking communities like MySpace and Facebook, on the one hand, and online knowledge communities like Wikipedia and the Citizendium, on the other. When you contribute to the latter sort of community, the public does have a strong interest in knowing your name and identity. This is something I will come back to in the next part of this talk, when I give some positive arguments for real-name requirements.

B. The argument from the freedom of speech. But back to the arguments for anonymity. A second argument has it that not having to reveal who you are strengthens the freedom of speech. If you can speak out against the government, or your employer, or other powerful or potentially threatening entities, without fear of repercussions, that allows you to reveal the full truth in all its ugliness. This is, of course, the classic libertarian argument for anonymous speech.

The most effective reply to this is to observe that, in general, there is no reason that online collaborative communities should serve as a platform for people who want to publish without personal repercussions. There are and will be many other platforms available for that. Indeed, specific online services, such as WikiLeaks, have been set up for anonymous free speech. Long may they flourish. Moreover, part of the beauty of the classical right to freedom of speech is that it provides maximum transparency. Anyone can say anything—but then, anyone else can put the first person’s remarks in context by (correctly) characterizing that person. Maximum transparency is the best way to secure the benefits of free speech.

I suspect it is a little disingenuous to suggest that anonymous speech is generally conducive to the truth in online knowledge communities. The WikiScanner, together with the various mini-scandals it unearthed, actually helps to illustrate this point. It demonstrated something that was perfectly obvious to anyone familiar with the Wikipedia system: that persons with a vested interest in a topic can and do make anonymous edits to information about that topic on Wikipedia. They are not telling truth to power under the cover of anonymity. Rather, they are using the cover of anonymity to obscure the truth. They would behave differently, and would be held to much more rigorous standards, if their identities were known. I want to suggest, as I'll elaborate later, that full transparency--including knowledge of contributor identities--is actually more truth-conducive than a policy permitting anonymity.

IV. Two reasons for real-name requirements

Now I am going to shift gears, and advance two positive arguments for requiring real names in online knowledge communities. One argument is political: it is that communities are better governed if their members are identified by name. The other argument is epistemological: it is that the content created by an "identified" community will be more reliable than content created by an "anonymous" community.

A. The argument from enforcement. The first argument is one that I think you legal theorists might be able to sink your teeth into. Let me present it in a very abstract way first, and then give an example. Consider first that if you cannot identify a person who breaks a rule, it is impossible to punish that person, or enforce the rule in that case. Forgive me for getting metaphysical on you, but the sort of entity that is punished is a person. If you can't identify a specific person to punish, you obviously can't carry out the punishment. This is the case not just if you can't capture the perpetrator, but also if you have captured him but you can't prove that he really is the perpetrator. That's all obvious. But it's also the case that you can't carry out the punishment if the perpetrator is clearly identifiable in one disguise, but then changes to another disguise.

So far so good, I hope. Next, consider a principle that I understand is sometimes advanced in jurisprudence: that there is no law, in fact, unless it is effectively enforced. A law or rule on the books that is constantly broken and never enforced is not really a law, in any full-blooded sense. For example, the 55-mile-per-hour speed limit might not be a full-blooded rule, since you can drive 56 miles per hour in a 55-mile-per-hour zone and never get a ticket. Obviously I am not denying that the rule is on the books; obviously it is. I am merely saying that the words on the books lack the force of law.

Now suppose, if you will, that in your community, your worst offenders can only rarely be effectively identified. You have to go to superhuman lengths to be able to identify them. In that case, you've got no way to enforce your rules: your hands are tied by your failure to identify your perpetrators effectively. But then, if you cannot enforce your rules, your rules lack the force of law. In a real sense, your community lacks rules.

I want to suggest that the situation I've just described abstractly is pretty close to the situation that Wikipedia and some other online communities are in. On Wikipedia, you don't have to sign in to make any edits. Or, if you want to sign in, you can make up whatever sort of nonsense name you like; you don't have to supply a working e-mail address, and you can make as many Wikipedia usernames as your twisted heart desires. Of course, no one ever asks what your real name is. In fact, Wikipedia has a rule according to which you can be punished for revealing the real identity behind a pseudonym.

This all means that there is no effective way to identify many rulebreakers. There is, of course, a way to identify what IP address a rulebreaker uses, but as anyone familiar with IP addresses knows, you can't match an IP address uniquely to a person. Sometimes many people are using the same address; sometimes one person is constantly bouncing around a range of addresses, and sharing that range with other people. So there is often collateral damage when you block the IP address, or a range of addresses, of a perpetrator. Besides, anyone with the slightest bit of Internet sophistication can quickly find out how to get around this problem by using an anonymizer or proxy.
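
To make the point about collateral damage concrete, here is a minimal, hypothetical sketch; it is not any real wiki's enforcement code, and the account names and addresses are invented. It shows why banning by IP address punishes the wrong people while the determined offender slips through.

    # Hypothetical sketch: enforcement by IP address versus by identified person.
    # The accounts, addresses, and proxy behavior below are invented for illustration.

    edits = [
        {"account": "SockMaster", "ip": "203.0.113.7"},    # offender editing from an office network
        {"account": "InnocentEd", "ip": "203.0.113.7"},    # unrelated colleague behind the same address
        {"account": "SockMaster2", "ip": "198.51.100.23"}, # the same offender, back via a proxy
    ]

    banned_ips = {"203.0.113.7"}  # administrators ban the address named in the abuse report

    def allowed(edit):
        # IP-based enforcement: an edit is refused only if its source address is banned.
        return edit["ip"] not in banned_ips

    for e in edits:
        print(e["account"], e["ip"], "allowed" if allowed(e) else "blocked")

    # Result: the innocent contributor sharing the banned address is blocked,
    # while the same offender, returning from a new address, is not. The rule is
    # enforced against addresses, not against the person who broke it.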

That there is no effective way to identify some rulebreakers is a significant practical problem on Wikipedia, in fact. Wikipedians complain often and bitterly about anonymous, long-term, motivated trouble-makers who use what are called "sockpuppets"--that is, several accounts controlled by the same person. Indeed, this is Wikipedia's most serious problem, from the point of view of true-believer Wikipedians.

In this way, Wikipedia lacks enforceable rules because it permits anonymity. I think it's a serious problem that it lacks enforceable rules. Here's one way to explain why. Suppose we say that polities are defined by their rules--rules that actually have the force of law. If that is the case, then Wikipedia is not a true polity. In fact, no online community can be a polity if it permits anonymous participation. But why care about being a polity? For one thing, Wikipedia and other online communities, which typically permit anonymity, are sometimes characterized as a sort of democratic revolution. On my view, this is an abuse of the term "democratic." How can something be democratic if it isn't even a polity?

There is another, shorter argument that anonymous communities cannot be democratic. First, observe that if it is not necessary to confirm a person's identity, the person may vote multiple times in any system in which voting takes place. Moreover, if the identities of persons engaged in community deliberation need not be known, one person may create the appearance of a groundswell of support for a view simply by posting a lot of comments using different identities. But for voting and deliberation to be fair and democratic, each person's vote, and voice, must count for just one. Therefore, a system that does not take cognizance of identities is inherently unfair and undemocratic: anonymous communities cannot be fair and democratic.
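
Here is a toy illustration of that last point, with invented account names and votes; it is only a sketch of the arithmetic, not a description of any real community's voting system. Tallying per account makes a minority view look like a majority; tallying per identified person does not.

    # Hypothetical sketch: tallying votes per account versus per verified person.
    # The accounts, people, and choices are invented for illustration.
    from collections import Counter

    votes = [
        ("alice", "yes"), ("sock1", "yes"), ("sock2", "yes"), ("sock3", "yes"),
        ("bob", "no"), ("carol", "no"),
    ]

    # Which real person controls each account (sock1-sock3 all belong to Alice).
    controller = {"alice": "Alice", "sock1": "Alice", "sock2": "Alice",
                  "sock3": "Alice", "bob": "Bob", "carol": "Carol"}

    per_account = Counter(choice for _, choice in votes)

    per_person = Counter()
    counted = set()
    for account, choice in votes:
        person = controller[account]
        if person not in counted:   # count only one vote per identified person
            counted.add(person)
            per_person[choice] += 1

    print("per account:", dict(per_account))  # {'yes': 4, 'no': 2} -- 'yes' appears to win
    print("per person: ", dict(per_person))   # {'yes': 1, 'no': 2} -- 'no' actually wins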

But why should we care about our online communities being fair, democratic polities? Perhaps their governance is relatively unimportant. When it comes to whether a link is placed on the front page of Digg.com, or what videos are highly rated on YouTube, does it really matter if it's not all quite on the up-and-up?

Maybe not. I am not going to argue about that now. But matters are very different, I want to maintain, with online knowledge communities, which is the subject of this paper. Knowledge communities, I think, must be operated as fair, democratic, and mature polities, if they are open to all sorts of contributors and they purport to contain reliable information that can be used as reference material for the world. It makes a difference, I claim, if an online community purports to collect knowledge, and not just talk and share media among friends and family.

Why does it matter if a community collects knowledge? First, because knowledge is important: we use information to make consequential decisions, so that information had better be reliable. If you are not convinced, consider that many people now believe that false information caused the United States to go to war in Iraq. Consider how many innocent people are in prison because of bad information. These days, two top issues for scientists are also political issues: global warming and teaching evolution in the schools. Scientists are very concerned that persons in politically powerful positions do not have sufficient regard for well-established knowledge. Whatever you think of these specific cases, all of which are politically charged, it seems clear enough that there is no shortage of examples demonstrating that we do, as a society, care very much that our information be reliable--that we do not merely have random unjustified beliefs, but that we know.

The trouble, of course, is that as a society--especially as a global Internet society--we do not all agree on what we know. Therefore, when we come together online from across the globe to create collections of what we call knowledge, we need fair, sensible ways to settle our disputes. That means we must have rules; so we must have a mature polity that can successfully enforce rules. And, to come back to the point, that means we must identify the members of these polities; we are well justified in disallowing anonymous membership.

B. The epistemological argument. Finally, I want to introduce briefly an epistemological argument for real-name requirements, which is distinct from the argument I just gave, even though that argument had epistemological elements too. Now I want to argue that using our real identities not only makes a polity possible, it also improves the reliability of the information that the community produces.

Perhaps this is not obvious. As I said earlier, some people maintain that knowledge is improved when people are free to "speak truth to power" from a position of anonymity. But, as I said, I suspect that in online communities like Wikipedia, a position of anonymity is used to obscure the truth more than to reveal it. Now, in all honesty, I have to admit that this might be rather too glib. After all, most anonymous contributors to Wikipedia aren't trying to reveal controversial truths, or cover them up; they are simply adding information, which is more or less correct. Their anonymity doesn't shield wrongdoing; it merely shields their privacy. So why not say that the vast quantity of information found in Wikipedia--which is very useful to a lot of people--is directly the result of Wikipedia's policy of anonymity? In that case, anonymity actually increases our knowledge--at least the sheer quantity of our knowledge.

Can I refute that argument? I'm not sure I can, nor would I want to if it is correct. The point being made is empirical, and I don't know what the facts are. If anonymity does in fact have that effect, hooray for anonymity. I merely want to make a few relevant points.

I think that in the next five to ten years, we will see whether huge numbers of people are also willing to come together to work under their own real names. I don't pretend to be unbiased on this point, but I think they will be. I don't think that anonymity is badly wanted or needed by the majority of the potential contributors to online knowledge communities in general. Having observed these communities for about fifteen years, my impression is that people get involved because they love the sense of excitement they get from being part of a growing, productive community. My guess is that anonymity is pretty much irrelevant to that excitement.

Regardless of the role of anonymity in the growth of online resources, a real names policy has a whole list of specific epistemological benefits that a policy of anonymity cannot secure. Consider a few such benefits.

First, the author of a piece of work will be more careful if she puts her real name on it than if she does not: her real-world reputation is on the line. And, I suppose, being more careful will lead to more reliable information. This is quickly stated, and very plausible, but it is a very important benefit.

Second, a community all of whose members use their real names will, as a whole, have a better reputation than one that is dominated by pseudonymous people. We naturally trust those who are willing to tell us who they are. As a result, the community naturally has a reputation to live up to. There are no similar expectations of good quality from an anonymous community, and hence no high expectations to live up to.

Third, it is much harder for partisans, PR people, and others to use the system to cover up unpleasant facts, or to present a one-sided view of a complex situation. When real names are used, the community can require the subjects of biographies and the principals of organizations to act as informants. The Citizendium does this. Wikipedia can't, because this would require that people identify themselves.

V. Conclusion

I'm going to wrap up now. I've covered a lot of ground and gone over some things rather fast, so here is a summary.

I began by defining "online knowledge community," and showing with a number of examples that online academic communities tend to use (or strongly emphasize the use of) real names. Other sorts of online communities generally permit or encourage anonymity, because there is no career benefit to being identified, while there is a definite interest in privacy. I considered two main arguments (though I know there are others) for permitting anonymity as a matter of policy. One argument starts from the premise that we have an interest in keeping our personal lives private; I admit that premise, but I say that, when it comes to knowledge communities in particular, society has an overriding interest in knowing your identity. Another argument is a version of the classical libertarian argument for anonymous speech. I grant that society needs venues in which anonymous speech can take place; I simply deny that all online knowledge communities need play that role. Besides, anonymity is probably used more as a way to burnish public images than it is to "speak truth to power."

In the second half of the paper, I considered two main arguments (though again, there are others) for requiring real names as a matter of policy in online knowledge communities. In the first, I argued that rules cannot be effectively enforced when rule-breakers cannot be identified. This is a problem, because we would like online knowledge communities to be fair and democratic polities; but when community members cannot be uniquely identified, the principle of one person, one voice, one vote is violated. Then I argued that a requirement of real names actually increases the reliability of a community's output. Since we want the output of knowledge communities, in particular, to be maximally reliable, we are well justified in requiring real names in such communities.


A compromise position that I favor would involve requiring users' real names to be visible to other contributors; allowing them to mask their real names from non-contributors; and legally forbidding the use of our database to mine personal information. This compromise does not settle the theoretical issue discussed in the arguments above, of course.