How I'm locking down my cyber-life

Drafted Jan. 4, 2019; updated occasionally since then; most recently updated May 11, 2019

Three problems of computer technology

My 2019 New Year's resolution (along with getting into shape, of course) is to lock down my cyber-life. This is for three reasons.

First, threats to Internet security of all sorts have evolved beyond the reckoning of most of us, and if you have been paying attention, you wonder what you should really be doing in response. My phone was recently hacked and my Google password reset. The threats can come from criminals, ideological foes and people with a vendetta or a mission (of whatever sort), foreign powers, and—of special concern for some of us—the ubiquitous, massively intrusive ministrations of the tech giants.

Second, the Silicon Valley behemoths have decided to move beyond merely moderating objectively abusive behavior and shutting down (really obvious) terrorist organizations, and have started engaging in viewpoint censorship of conservatives and libertarians. To a free speech libertarian like me, who has lived much of his life online since 1994, these developments are deeply concerning. The culprits include the so-called FAANG companies (Facebook, Apple, Amazon, Netflix, Google), but to that list we must add YouTube, Twitter, and Microsoft. Many of us have been saying that we must take ourselves out of the hands of these networks—but exactly how to do so is evidently difficult. Still, I'm motivated to try.

A third reason is that the same Big Tech corporations, with perhaps Facebook and Google being the worst offenders, have been selling our privacy. This is not only deeply offensive and something I refuse to participate in, it again puts my and my family's safety at risk, creating new "attack surfaces" (to use the information security jargon) that corporations must protect on our behalf. They may not do a good job of that. Similarly, governments have taken it upon themselves to monitor us systematically—for our safety, of course. But if you're like me, this again will make you feel less safe, not more, because we don't know what bad actors are at work in otherwise decent governments, we don't know what more corrupt governments might do with the information when we travel abroad, and we don't know the future shape of our own governments.

At the root of all these problems is the simple fact that the fantastic efficiency and simplicity of computer technology have been enabled by our participation in networks (especially cloud networks) and our acceptance of user agreements offered by massively rich and powerful corporations. Naturally, because what they offer is so valuable and because it is offered at reasonable prices (often, free), they can demand a great deal of information and control in exchange. This dynamic has led to most of us shipping them boatloads of our data. That's a honeypot for criminals, authoritarians, and marketers, as I've explained in more depth elsewhere.

The only thing we can do about this systematic monitoring and control is to stop letting the tech giants do it to us. That's why I want to kick them out of my life.

The threats to our information security and privacy undermine some basic principles of the decentralized Internet that blossomed in the 90s and boomed in the 00s. The Establishment has taken over what was once a centerless, mostly privacy-respecting phenomenon of civil society, transforming it into something centralized, invasive, risky, and controlling. What was once the technology of personal autonomy has enabled—as never before—cybercrime, collectivization, mob rule, and censorship.

A plan

Perhaps some regulation is in order. But I don't propose to try to lead a political fight. I just want to know what I can do personally to mitigate my own risks. I don't want to take the easy or even the slightly-difficult route to securing my privacy; I want to be hardcore, if not extreme.

I'm not sure of the complete list of things that I ought to do (I want to re-read Kevin Mitnick's excellent book The Art of Invisibility for more ideas), but since I started working on this privacy-protection project in January of 2019, I have collected many ideas and acted on almost all of them as of the current edition. I will examine some of these in more depth (in other blog posts, perhaps) before I take action, but others I have already implemented.

  1. Stop using Chrome. (Done.) Google collects massive amounts of information from us via their browser. The good news is that you don't have to use it, if you're among the 62% of people who do. I've been using Firefox, but I haven't been happy about that. The Mozilla organization, which manages the browser, is evidently dominated by the Silicon Valley left; they forced out Brendan Eich, a co-founder of Mozilla and the creator of the JavaScript programming language, for his political views. Frankly, I don't trust them. I've switched to Eich's newer, privacy-focused browser, Brave. I've had a much better experience using it lately than I had when I first tried it a year or two ago, when it was still on the bleeding edge. Brave automatically blocks ads, trackers, and third-party cookies, and upgrades your connections to HTTPS where it can—and, unlike Google, they don't keep a profile on you (well, a profile exists, but it never leaves your machine; the Brave company doesn't have access to it). As a browser, it's quite good and a pleasure to use. It also (optionally) pays you in crypto for using it. There might be a few rare issues (maybe connected with JavaScript), but when I suspect there's a problem with the browser, I try whatever I'm trying to do in a locked-down version of Firefox, which is now my fallback. There's absolutely no need to use Chrome for anything but testing, and that's only if you're in Web development. By the way, the Brave iOS app is really nice, too.
  2. Stop using Google Search. (Done; needs more research though.) I understand that sometimes, getting the right answer requires that you use Google, because it does, generally, give the best search results. But I get surprisingly good results from DuckDuckGo (DDG), which I've been using for quite a while now. Like Brave and unlike Google, DDG doesn't track you and respects your privacy. You're not the product. It is easy to go to your browser's Settings page and switch. Here's a trick I've learned, for when DDG's results are disappointing (maybe 10% of the time for me): I use another private search engine, StartPage (formerly Ixquick), which reportedly is based on Google search results, but I see differences on some searches, so it's not just a private front end for Google. You might prefer StartPage over DDG, but on balance I still prefer DDG. Still, I should research the differences some more, perhaps.
  3. Start using (better) password management software. Don't let your browser store your passwords. And never use another social login again. (Done.) You need to practice good "password hygiene." If you're one of those people who uses the same password for everything, especially if it's a simple password, you're a fool and you need to stop. But if you're going to maintain a zillion different strong passwords for a zillion different sites, how? Password management software. (For a sense of what a "strong" password actually looks like, see the little password-generation sketch after this list.) For many years I used the free, open source KeePass, which is secure and works, but it doesn't integrate well with browsers, or let me save my password data securely in the cloud (or maybe better, on the blockchain). So I got a better password manager and set it up on all my devices: I switched to EnPass. This is essential to locking down my cyber-life. Along these lines, there are a couple of other things you should do, and which I did: set my browsers to stop tracking my passwords, and never let them save another one of my passwords. (Be aware, though, that logging in to a site automatically is more secure when the site uses a cookie, called a token, to do so; the token is not a stored plain-text password. So when a website asks me, with a checkbox in the login form, if I want to log in automatically, I say yes; but when a browser asks if I want it to remember my password, the answer is always no.) Finally, one of the ways Facebook, LinkedIn, et al. insinuate themselves into our cyber-lives is by giving us an easy way to log in to other sites. But that makes it easier for them to track us everywhere. Well, if you install a decent password manager, then you don't have to depend on social login services (based on the OAuth standard). Just skip them and use the omnipresent "log in with email" option every time. (I haven't encountered a website that absolutely requires social media logins yet.) Your password manager will make it about as easy to log in as social media services did.
  4. Stop using gmail. (Done.) This was harder, and figuring out and executing the logistics of it was a chore—it involved changing all the accounts, especially the important accounts, that use my gmail address. I had wanted to do this for a while, but the sheer number of hours it was going to take to make the necessary changes was daunting (and I was right: it did take quite a few hours altogether). But I was totally committed to taking this step, so I did. Another motivation is that I figured I could get a single email address for the rest of my life. So my new email address resides at sanger.io, a domain (with personalized email addresses) that my family will be able to use potentially for generations to come. Here's how I chose an email hosting service to replace Gmail. And here's how I set up private email hosting for my family.
  5. Stop using iCloud to sync your iPhone data with your desktop and laptop data; replace it with wi-fi sync. (Done.) If you must use a smartphone, and if (like mine) it's an iPhone, then at least stop putting all your precious data on Apple servers, i.e., on iCloud. It's very easy to get started. After you do that, you can go tell iTunes to sync your contacts, calendars, and other information via wi-fi; here's how. And I'm sorry to break it to you, but Apple really ain't all that. By the way, a few months after writing the above, I looked more carefully at the settings area of my iPhone for data stored in iCloud; it turns out I had to delete each category of data one at a time, and I hadn't done that yet. They don't make it easy to turn off completely, but I think I have now.
  6. Subscribe to a VPN. (Done.) This sounds highly difficult and technical at first glance, maybe, but in fact it's one of the easiest things you can do. I set mine up in minutes; the thing that took a few hours was researching which one to get. But why a VPN? Well, websites can still get quite a bit of info about you from your IP address, and your ISP (or governments that request the data) can listen in on any data that happens to be unencrypted on your web connection. VPNs address those problems by routing your traffic through an encrypted tunnel to the VPN provider's servers, so the sites you visit see the provider's IP address rather than yours, and your ISP sees only encrypted traffic. (If you want to confirm that a VPN is really masking your address, see the little IP-check sketch after this list.) One problem with VPNs is that they slightly slow down your Internet connection; in my experience so far, it's rarely enough to make a difference. They also add a little new complexity to your life, and it is possible that the VPN companies are misrepresenting what they do with your data (some of the claims of some VPNs have been tested, though). But it's a great step to take if you're serious about privacy, if you don't mind the slight hit to your connection speed. A nice fallback is the built-in private windows in Brave that are run on the Tor network, which operates on a somewhat similar principle to VPNs.
  7. Get identity theft protection. (Done.) After my phone was hacked, I finally did something I've been meaning to do for a long time—subscribe to an identity theft protection service. The one I use is LifeLock, and so far it seems to be quite good. If you don't know or care about identity theft, that's probably because you've never seen weird charges pop up on your card, or had your card frozen by your bank, or whatever. LifeLock doesn't prevent these issues by itself, but it does make it a lot easier to deal with them if they happen.
  8. Switch to Linux. (Done.) I used a Linux (Ubuntu) virtual machine for programming for a while. Linux is stable and usable for most purposes. It still has very minor usability issues for beginners, but if you're up to speed, it's simply better than Windows or Mac, period, in almost every way. On balance the "beginner" issues aren't nearly as severe as those associated with using products by Microsoft and Apple. I've put Ubuntu on a partition on my workstation, and switched to that as my main work environment. I also gave away my Mac laptop and got a new laptop, on which I did a clean install, also of Ubuntu. Linux is generally more secure, gives the user more control, and most importantly does not have a giant multinational corporation behind it that wants to take and sell your information. Read more about how I switched to Ubuntu on my desktop and also my laptop.
  9. Quit social media, or at least nail down a sensible social media use policy. (Done.) I'm extremely ambivalent about my ongoing use of social media. I took a break for over a month (which was nice), but I decided that it is too important for my career to be plugged in to the most common networks. If I'm going to use them, I feel like I need to create a set of rules for myself to follow—so I don't get sucked back in. I also want to reconsider how I might use alternative social networks, like Gab (which has problems), and social media tools that make it easy both to post and to keep an easily-accessible archive of my posts. One of my biggest problems with all social media networks is that they make it extremely difficult to download and control your own friggin' data—how dare they. Well, there are tools to take care of that... Anyway, you can read more about how I settled on a social media use policy.
  10. Stop using public cloud storage. (Done.) "Now," you're going to tell me, "you're getting unreasonable. This is out of hand. Not back up to Dropbox, iCloud, Google Drive, Box, or OneDrive? Not have the convenience of having the same files on all my machines equally available? Are you crazy?" I'm not crazy. You might not realize what is now possible without the big "public cloud" services. If you're serious about this privacy stuff and you really don't trust Big Tech anymore—I sure don't—then yeah. This is necessary too. One option is Resilio Sync, which moves files between your devices over encrypted connections (via a modified version of the BitTorrent protocol), with the files never landing anywhere but on your devices. Another option is to use a NAS (network attached storage device), which is basically your very own always-on cloud server that only you can access, but which you can access from anywhere via an encrypted Internet connection. There are also open source Dropbox competitors that do use the cloud (the term to search for is "zero-knowledge encryption"), but which are arguably more secure; at any rate, you're in control of them. Yet another option is to run a cloud server from your desktop (if it's always on), using something like NextCloud. At first, I decided to go with Resilio Sync. Then I changed my mind, because it was a pain that syncing happens only when both devices are on, so I took the plunge and got a NAS after all. It took quite a while both to deliberate on what type of solution to go with (after Resilio) and to choose a specific NAS; quite a few hours altogether, but it turns out to be so useful. (However you sync, it's worth spot-checking that the copies really match; see the checksum sketch after this list.) If you want to consider this more, check out my explanation of why they're such a good idea.
  11. Nail down a backup plan. (Done.) If you're going to avoid using so much centralized and cloud software, you've got to think not just about security but about backing up your data. I used to use a monster of a backup drive, but I wasn't even doing regularly-scheduled backups. In the end what I did was, again, to install a NAS. This provides storage space, making a complete backup of everything on my desktop (and a subset of files I put on my laptop) and on the other computers in the house (that need backing up; perhaps not all of them do). It also keeps files instantly backed up a la Dropbox (see the previous item). But even this isn't good enough. If you really want protection against fire and theft, you must have an off-site backup. For that, I decided to bite the bullet and go with a relatively simple zero-knowledge encryption service, iDrive, that works nicely with my NAS system. It simply backs up the whole NAS. It bothers me that their software isn't open source (so I have to trust them that the code really does use zero-knowledge encryption), but I'm not sure what other reasonable solution I have, if I want off-site backup. (If you're curious what the most basic building block of a backup scheme looks like, there's a bare-bones snapshot sketch after this list.)
  12. Take control of my contact and friend lists. (Partly done.) I've been giving Google, Apple, and Microsoft too much authority to manage my contacts for me, and I've shared my Facebook and other friends lists too much. I'm not sure I want these companies knowing my contacts and friends, period; the convenience I got out of sharing those lists was of very limited value to me, but evidently of great value to Big Tech. I don't know what they're doing with the information, or who they're sharing it with, really. Besides, if my friends play fast and loose with privacy settings, my privacy can suffer—and vice-versa. So I'm going to start maintaining my own contacts, thanks very much, and delete the lists I've given to Google and Microsoft. I'm glad I've already stopped putting this information on iCloud. The next step, as of this writing, is to start using my NAS's built-in contacts server, which makes it possible to sync contact info across your devices using your own personal server. Then I'll permanently delete contact data from all corporate servers (as much as they generously let me do so).
  13. Stop using Google Calendar. (Done.) I just don't trust Google with this information, and frankly, Gcal isn't all that. I mean, it's OK. But they are clearly reading your calendar (with software, that is; which means the calendar data isn't encrypted on their servers, as it should be). So after I got my own NAS server, I was able to install a calendar server that could be accessed and synced from all of my devices (most NAS calendar servers speak the standard CalDAV protocol; see the rough sketch after this list). I had to transfer my data from Gcal to the server, which wasn't very hard. The hardest part was that I had to teach a colleague how to make appointments for me using the new system. Here are my notes on how I made the change.
  14. Study and make use of website/service/device privacy options. (In progress.) Google, Apple, Facebook, Twitter, YouTube, etc., all have privacy policies and options available to the user. It is time to study and regularly review them, and put shields up to maximum. Of course, it's better if I can switch to services that don't pose privacy threats; that's generally been my solution, but I have looked at quite a few privacy options and read privacy policies in order to do my due diligence about how my information is being used.
  15. Also study the privacy of other categories of data. Banking data, health data, travel data (via Google, Apple, Uber, Yelp, etc.), shopping data (Amazon, etc.)—it all has unique vulnerabilities that it is important to be aware of. I'm not sure I've done all I can to lock it down. So I want to do that, even if (as seems very probable) I can't lock it all down satisfactorily, yet.
  16. Figure out how to change my passwords regularly, maybe. (Not started.) I might want to make a list of all my important passwords and change them quarterly everywhere, as a sort of cyber-hygiene. Why don't we make a practice of this? Because it's a pain in the ass and most people don't know how to use password management software, that's why. Besides, security experts actually discourage regular password changing, but that's mainly because most people are bad at making and tracking secure passwords. Well, if you use a password manager, that part isn't so hard. But it's also because we really don't have a realistic plan for doing it; maybe the main thing is to rotate a few important passwords every so often, not all of them. (A first step would be simply knowing which important passwords have gone stale; see the password-age sketch after this list.) I'll figure that out.
  17. Consider using PGP, the old encryption protocol (or a modern implementation of the OpenPGP standard, like GNU Privacy Guard), with work colleagues and family who are into it. (Not started.) Think about this: when your email makes the transit from your device to its recipient's device, it passes through quite a few other machines. Hackers have ways of viewing your mail at different points on its journey. Theoretically, they could even change it, and you (and its recipient) would be none the wiser. Now, don't freak out, and don't get me wrong; I'm not saying email (assuming the servers in between you and your recipients use the standard TLS, or Transport Layer Security, protocol) isn't perfectly useful for everyday purposes. But if you're doing anything really important and sensitive, either don't use email or use stronger encryption, because basic email is insecure. Now, I'm aware that some think PGP is outmoded or too complex (that's why I never got into it, to be honest), but the general idea of encrypting your email more strongly isn't going out of style, and implementations of the OpenPGP standard are still actively maintained. (For the flavor of the thing, see the GnuPG sketch after this list.) Still, when information security might matter quite a bit, then it might be easier to do what I'm doing now with my boys: using a chat tool with end-to-end encryption built in.
  18. Moar privacy thangs. Look into various other things one can do to lock down privacy. Consider the new Purism Librem 5 phone. Look into a physical security key for laptop and desktop. Encrypt my hard drives. Encrypt the drives on the NAS. Etc., etc.
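
A few illustrative sketches follow, for the more technically inclined; feel free to skip them. First, on password hygiene (item 3): if you're curious what a "strong" password actually looks like, here is a minimal sketch in Python, using only the standard library's secrets module, of the sort of random password a good manager generates for you. The 24-character length and the character set are just illustrative choices, not anything EnPass in particular does.

    # A minimal sketch of what a "strong" random password looks like, using only
    # Python's standard-library secrets module. The 24-character length and the
    # character set are arbitrary illustrative choices; a password manager's
    # built-in generator does the same job.
    import secrets
    import string

    def generate_password(length: int = 24) -> str:
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    if __name__ == "__main__":
        print(generate_password())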
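
Second, on VPNs (item 6): an easy sanity check is to ask a public IP-echo service what address it sees, once with the VPN off and once with it on. A rough sketch, assuming the third-party requests package and the api.ipify.org service (any similar service would do):

    # Rough sketch: print the public IP address the outside world sees. Run it
    # once with the VPN off and once with it on; the addresses should differ.
    # Assumes the third-party requests package and that api.ipify.org is reachable.
    import requests

    def public_ip() -> str:
        return requests.get("https://api.ipify.org", timeout=10).text.strip()

    if __name__ == "__main__":
        print("Public IP as seen by websites:", public_ip())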
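
Third, on syncing files yourself (item 10): whatever tool moves the files, it's reassuring to spot-check that the copy on the NAS really matches the original. A standard-library sketch; the two paths are hypothetical placeholders for a local file and its synced copy:

    # Sketch: confirm that a local file and its synced copy are identical by
    # comparing SHA-256 checksums. Standard library only; both paths are
    # hypothetical placeholders.
    import hashlib
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):  # read 1 MB at a time
                digest.update(chunk)
        return digest.hexdigest()

    if __name__ == "__main__":
        local = Path.home() / "Documents" / "notes.txt"    # hypothetical original
        synced = Path("/mnt/nas/Documents/notes.txt")      # hypothetical NAS copy
        print("Copies match" if sha256_of(local) == sha256_of(synced) else "Copies differ!")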
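
Fourth, on backups (item 11): the NAS and iDrive do the real work for me, but the basic unit of any backup scheme, a dated and compressed snapshot of a directory, is simple enough to sketch with the standard library. The source and destination paths are placeholders:

    # Sketch: write a dated .tar.gz snapshot of a directory, the basic building
    # block of a backup scheme. Standard library only; the paths are placeholders.
    import datetime
    import tarfile
    from pathlib import Path

    def snapshot(source: Path, dest_dir: Path) -> Path:
        dest_dir.mkdir(parents=True, exist_ok=True)
        stamp = datetime.date.today().isoformat()
        archive = dest_dir / f"{source.name}-{stamp}.tar.gz"
        with tarfile.open(str(archive), "w:gz") as tar:
            tar.add(str(source), arcname=source.name)
        return archive

    if __name__ == "__main__":
        print(snapshot(Path.home() / "Documents", Path("/mnt/nas/backups")))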
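
Fifth, on the self-hosted calendar (item 13): NAS calendar servers generally speak the standard CalDAV protocol, so any CalDAV client can talk to them. A rough sketch using the third-party caldav package; the server URL and credentials are hypothetical, and your NAS's endpoint will differ:

    # Rough sketch: list the calendars on a self-hosted CalDAV server and how
    # many events each holds. Assumes the third-party caldav package; the URL,
    # username, and password are hypothetical placeholders.
    import caldav

    client = caldav.DAVClient(
        url="https://nas.example.com/caldav/",   # hypothetical NAS CalDAV endpoint
        username="me",
        password="correct horse battery staple",
    )
    for calendar in client.principal().calendars():
        print(calendar.url, "-", len(calendar.events()), "events")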
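
Sixth, on rotating passwords (item 16): a first step is simply knowing which important passwords have gone stale. Most managers can export entries; given a hypothetical CSV export with "site" and "last_changed" columns, a few lines of standard-library Python will flag the old ones:

    # Sketch: flag passwords older than 90 days, reading a hypothetical CSV
    # export with the columns "site" and "last_changed" (dates as YYYY-MM-DD).
    # Standard library only.
    import csv
    import datetime

    THRESHOLD = datetime.timedelta(days=90)
    today = datetime.date.today()

    with open("password-audit.csv", newline="") as f:   # hypothetical export file
        for row in csv.DictReader(f):
            changed = datetime.date.fromisoformat(row["last_changed"])
            if today - changed > THRESHOLD:
                print(f"{row['site']}: last changed {changed}, due for rotation")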
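
Finally, on encrypted email (item 17): the heart of the OpenPGP idea is encrypting a message to the recipient's public key, so that only the holder of the matching private key can read it. A rough sketch, assuming the gpg binary is installed and the third-party python-gnupg package is available; the key file and address are placeholders, and in real use you would verify the key rather than blindly trusting it:

    # Rough sketch: encrypt a short message to a recipient's public key with
    # GnuPG. Assumes the gpg binary is installed and the third-party
    # python-gnupg package is available; the key file and address are
    # hypothetical placeholders.
    import gnupg

    gpg = gnupg.GPG()                           # uses the default ~/.gnupg keyring
    with open("friend_pubkey.asc") as f:        # hypothetical exported public key
        gpg.import_keys(f.read())

    encrypted = gpg.encrypt(
        "Meet me at noon.",
        recipients=["friend@example.com"],
        always_trust=True,                      # sketch only; really, verify the key first
    )
    if encrypted.ok:
        print(str(encrypted))                   # ASCII-armored ciphertext, safe to paste into email
    else:
        print("Encryption failed:", encrypted.status)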

What have I left out?

Are you going to join me in this push toward greater privacy and autonomy? Let me know—or, of course, you can keep it to yourself.


Social media stupidifies and radicalizes us

Back when the buzzword switched from "Web 2.0" to "social media," I started to get quite suspicious. When I was participating in online communities, I wasn't propagating "media." That is something that boring corporate media types did.

What would those boring corporate media types, or rather their Silicon Valley equivalents, do with once-unconstrained, lively, frequently long-form debate communities? Make the conversations shorter, more vapid, more appealing to the masses, and more addictive. In short, more of a really dumb waste of time.

The Zucks and Dorseys of the world did this in order to hook people more and more. What they probably didn't realize at first is that they had built tools for stupidification and radicalization. I don't think "dumb down" is quite the right phrase: dumbing down means making something complex simpler, easier to understand, but also less accurate. To "stupidify" focuses on the effects on us; in social media mobs, we are truly stupid herd animals, and when enraged, rather frighteningly stupid mobs. What we are fed and say is dumbed down; consequently, we are stupidified.

That degraded quality of social relationship--that is these fools' legacy. I have no respect for what Mark Zuckerberg and Jack Dorsey achieved. (This isn't a personal slam; I don't have that much respect for Wikipedia, either, which is something I built.)

If you had set out to reduce human Internet interactions to a subhuman, irrational, emotional level, an excellent strategy would be to replace long mailing list and Usenet newsgroup posts and rambling blog posts like this one with tweets (whether 140 or 280 characters--at that tiny length, it doesn't matter), propaganda memes, and emotion-driven comments that are cut short and sent by default if you try to write more than one paragraph.

To make the medium of social interaction briefer and more visual is to convey that intelligence, which is almost always long-form, is not valued. We live in a tl;dr world, the world that Zuck and Jack built. They must be very proud. If Marshall McLuhan was right that the medium is the message, social media's message is that your intelligence and individuality are worth little; your emotions and loyalty to your tribe are everything.

I will go farther than that. I lay the ongoing destruction of democratic institutions squarely at their feet. That's a dramatic and indeed emotional-sounding claim, but just look at what has happened and what is going on right now. It's a disaster. We increasingly distrust our institutions insofar as they are co-governed by our ideological opponents. That didn't used to be the case; what changed? That we are constantly presented with idiotic and easily-refuted versions of our opponents' social and political views. Consequently, we have lost all respect for each other. Staggering percentages of the American people want to split up the country and predict civil war. Long-term friendships and even family relationships have been broken up by relentlessly stupid arguments on social media.

It isn't just that increased familiarity with, or constant exposure to, our opponents' points of view has led to mutual contempt. Sure, familiarity might breed contempt; but through social media we do not project our most genuine, nuanced, intelligent, sensitive, and human selves. Social media makes us, rather, into partisan, tribal drones. We are not really more familiar with each other. We are familiar with stupidified versions of each other. And that is making society insane.

It certainly looks as if the combination of short, visual messages and simplified reactions to them--"hearting," upvoting and downvoting, or choosing from an extremely limited menu of emotional reactions--is enough to dumb down, to stupidify, the versions of ourselves we portray to each other. And that is, again, wreaking havoc on our society. With social media absolutely dominant as the locus of modern socialization, how could this fail to have a profound impact on our broader societal and political mood?

It is Zuck's and Dorsey's fault. They built the medium. The medium stupidifies us. Stupid people are particularly bad at democracy, as our Founding Fathers knew. The leadership of republican institutions must be wisely chosen by a sober citizenry using good sense improved by education. What we have now, thanks to social media, is a citizenry made punch-drunk by meaningless but addictive endorphins awarded them by reinforcing their tribal alliances, stupidly incapable of trusting "the Other" and, therefore, of reaching anything like a reasonable, democratic consensus.

This is one of the main reasons why I quit social media cold turkey over a month ago. I don't miss or regret it. I will continue to use it only for work purposes, i.e., essentially for advertising, which I hope is a reasonable use for it.

I sincerely, fervently hope that in five or ten years' time this is the conventional wisdom about social media. What comes next, I don't know. But we can't survive as a democratic society under these conditions.


I'm quitting social media cold turkey

"Yet another public resolution to leave Facebook or Twitter," you say with a laugh. "Only soon to be given up like so many others, no doubt." That's a reasonable reaction. But go ahead, check up on me: here are my Twitter account and my Facebook account. My last posts were Sept. 11 and Sept. 12. I promise to leave this blog post up forever--that'll shame me if I get back to it.

I've critiqued social media philosophically and even threatened to abandon it before, and I've advised people not to use it during work time (I admit I've later completely ignored this advice myself). But I've never really quit social media for any length of time.

Until now. As of earlier today, I've quit cold turkey. I've made my last posts on Twitter and Facebook, period. I'm not even going to say goodbye or explain or link to this blog post on social media, which I'll let others link to (or not). Friends and family will have to either call or email me or make their way here to get an explanation. I'll be happy to explain further and maybe engage in some debate in the comment section below.

I thought I'd explain what has led to this decision. You'll probably think it's my sniffy political stance against social media's threats to free speech and privacy, but you'd be wrong--although I'm glad I'll no longer be supporting these arrogant, vicious companies.

This resolution didn't really start as a reaction to social media at all. It began as a realization about my failings and about some important principles of ethics and psychology.

1. Socrates was right: we're not weak, we just undervalue rationality.

We are a remarkably irrational species.

Recently I began giving thought to the fact that we so rarely think long-term. If we were driven by the balance of long-term consequences, there are so many things we would do differently. If you think about this long enough, you can get quite depressed about your life and society. Perhaps I should only speak for myself--this is true of me, for sure--but I think it is a common human failing. Not exercising, overeating, wasting time in various ways, indulging in harmful addictions, allowing ourselves to believe all sorts of absurd things without thinking, following an obviously irrational crowd--man might be the rational animal, as Aristotle thought, but that doesn't stop him from also being a profoundly irrational animal.

I'm not going to share my admittedly half-baked thoughts on rationality in too much detail. You might expect me to, since I'm a Ph.D. philosopher who was once a specialist in epistemology, who has spent a great deal of time thinking about the ethical requirements of practical rationality, and who has done some training and reading in psychology. I'm not going to pretend that my thoughts on these things are more sophisticated than yours; I know they're probably not. I'm not an expert.

I will say this, just to explain where my head is at these days. I have always taken Socrates' theory of weakness of will (akrasia) very seriously. He thought that if we do something that we believe we shouldn't--have an extra cookie or a third glass of wine, say--then the problem is not precisely that our will is weak. No, he said, the problem is that we are actually ignorant of what is good, at least in this situation.

This sounds ridiculously wrong to most philosophers and students who encounter this view for the first time (and, for most of us, on repeated encounters). Of course there is such a thing as weakness of will. Of course we sometimes do things that we know are wrong. That's the human condition, after all.

But I can think of a sense in which Socrates was right. Let's suppose you have a rule that says, "No more than one cookie after dinner," and you end up eating two. Even as you bite into the second, you think, "I really shouldn't be eating this. I'm so weak!" How, we ask Socrates, do you lack knowledge that you shouldn't eat the second cookie? But there is a straightforward answer: you don't believe you shouldn't, and belief is necessary for knowledge. We can concede that you have some information or insight--but it is quite questionable whether, on a certain level, you actually believe that you shouldn't eat the cookie. I maintain that you don't believe it. You might say you believe it; but you're not being honest with yourself. You're not being sincere. The fact is that your rule just isn't important to you, not as important as that tasty second cookie. You don't really believe you shouldn't have it. In a certain sense, you actually think you should have it. You value the taste more than your principle.

From long experience--see if you agree with me here--I have believed that our desires carry with them certain assumptions, certain premises. New information can make our desires turn on a dime. I think there are a number of false premises that generally underpin weakness of will. I'm not saying that, if we persuade ourselves that these premises are false, we will thereafter be wonderfully self-disciplined. I am saying, however, that certain false beliefs do make it much easier for us to discount sober, rational principles, naturally tuned to our long-term advantage, in favor of irrational indulgence that will hurt us in the long run.

Here, then, are two very general premises that underpin weakness of will.

(a) Sometimes, it's too strict and unreasonable to be guided by what are only apparently rational, long-term considerations.

There are many variations on this: being too persnickety about your principles means you're being a hard-ass, or uncool, or abnormal, or unsociable, or positively neurotic (surely the opposite of rational!). And that might be true--depending on your principles. But it is not true when it comes to eating healthy and exercising daily, for example: in the moment, it might seem too strict to stick by a reasonable diet, so it might seem unreasonable. But it really isn't unreasonable. It is merely difficult. It is absolutely reasonable because you'll benefit and be happier in the long run if you stick to your guns. It will get easier to do so with time, besides.

(b) Avoiding pain and seeking pleasure are, sometimes, simply better than being guided by rational, long-term considerations.

This is reflected, at least somewhat, in the enduring popularity of hedonism, ethical and otherwise. The aesthete who takes the third glass of wine doesn't want narrow principles to stand in the way of pleasure (it's such good wine! I don't want to be a buzzkill to my awesome friends!); instead, he will also congratulate himself on his nuance and openness to experience. The same sort of thinking is used to justify infidelity.

Such considerations are why I think it is plausible to say that, no, indeed, in our moments of weakness, we have actually abandoned our decent principles for cynical ones. You might object, "But surely not. I'm merely rationalizing. I don't really take such stuff seriously; I take my principles seriously. I know I'm doing wrong. I'm just being weak."

Well, maybe that's right. But it's also quite reasonable to think that, at least in that moment, you actually are quite deliberately and sincerely choosing the path of the cool, of the sociable friend, of the aesthete; you are shrugging with a self-deprecating smile as you admit to yourself that, yes, your more decent principles are not all that. You might even congratulate yourself on being a complex, subtle mensch, and not an unyielding, unemotional robot. This is why, frankly, it strikes me as more plausible that you're not merely rationalizing: you are, at least temporarily, embracing different (less rational, more cynical) principles.

But as it turns out, there are good reasons to reject (a) and (b). Recently, I was talking myself out of them, or trying to, anyway. I told myself this:

Consider (a) again, that sometimes, rationality is too strict. When we avoid strict rationality, the things we allow ourselves are frequently insipid and spoiled by the fact that they are, after all, the wrong things to do. Take staying up late: it's so greatly overrated. Overindulgence in general is a great example. Playing a game and watching another episode of a television program are simply not very rewarding; just think of the more gainful ways you could be spending your time instead. Having one cookie too many is hardly an orgasmic experience, and it is absolutely foolish, considering that the consequences of breaking a necessary diet can be so unpleasant.

Indeed, most Americans need to be on a diet (or to exercise a lot more), and that is an excellent example of our inability to think long term. It is hard to imagine the advantages of being healthy and thin. But those advantages are very real. They can spell the difference of years of a longer life, and considerably greater activity and, indeed, comfort in life. That is only one example of the advantages of rationality. The simple but profoundly beneficial activity of going to bed early enough and getting up early enough can make you much more alert, active, happy, and healthy. Why do so many people not do that every night? I think the reason is, at least in part, that we literally cannot imagine—not without help or creative effort—what that better life would be like. We are stuck in our own moment, and it seems all right to us.

In short, the requirements of a rational human life seem unreasonably "strict" only because we lack the imagination to consider a better sort of life.

Consider (b) now. Pain, and especially discomfort, are not all that awful. They are an important part of life, and if you attempt to avoid all pain, you ultimately invite even more. There is nothing particularly degrading about discomfort. Especially if it is unavoidable, and if working or fighting or playing through it results in some great achievement, then doing so can even be heroic. I’m not meaning to suggest that pain for its own sake is somehow desirable. It isn’t, of course. But being able to put up with discomfort in order to achieve something worthwhile is part of the virtue of courage.

2. It is irrational to use social media.

I want to be fair. So if I'm going to examine whether indulgence in social media is rational or not, I'll begin with some purported advantages and see how solid they are.

Social media seems to benefit the careers of a few people. This seems true of people with a lot of followers; but my guess is that most people with a lot of followers already have successful careers, which is why they have a lot of followers. (Models on Instagram and popular video makers on YouTube might be an exception, in that they can make their career via the platform itself.) People with fewer than, say, 10,000 Twitter followers don't really reach enough people to have a very interesting platform. I have about 3,000 Twitter followers, and I've deliberately kept my Facebook numbers smaller just because I use Facebook in a more personal way. Frankly, my career doesn't seem to be helped all that much by my presence on social media. Besides, that's not why I do it.

My Everipedia colleagues might be a little upset with me that I won't be sharing Everipedia stuff on Twitter and Facebook anymore (which I won't--because I know that even that little bit would pull me back in). But I can assure them that I'll get more substantive and impactful work done as a result of all the time freed up from social media. I will continue to use communication platforms like Telegram and Messenger, by the way, and Reddit, in the Everipedia group, will also be OK. I'll also keep using LinkedIn to connect to people for work purposes. But Quora and Medium are out. Those are too much like blogging anyway. My time is better spent writing here on this blog, or for publication, if I'm going to do long-form writing.

Social media also seems to be a way for us to make a political impact. We can talk back against our political opponents. We can share propaganda for our side. Now this, I was surprised to learn, does seem to have some effect in my case. I've heard from one person that she actually became a libertarian mostly because of my posts on Facebook. (I could hardly believe it.) Others say they love my posts, and I think I do probably move the needle some minuscule distance in the direction of Truth and Goodness. But I'm only writing to a few hundred people on Facebook, at most. My reach on Twitter is larger, but I almost certainly do not persuade anyone 280 characters at a time.

This isn't to say that, in the aggregate, social media doesn't have a great deal of impact on society. It clearly does. But I think its total impact is negative, not positive. Perhaps the way I use it is positive, although I doubt it. I am more given to long-form comments than most people on Facebook and Twitter. I like to think that my comments model good reasoning and other intellectual virtues. But are they my best? Hell no. Does my influence matter, on the whole? Of course not. I am participating in a system that does, on my account and on most people's, lower the level of discourse.

On balance, I'm not proud of the political impact of my social media participation. I don't think many of us, if any, have the right to be proud of theirs.

Social media is kind of fun. Sure, it's fun to butt heads with clueless adversaries and get an endorphin boost from likes and other evidence of public visibility. But political debate is more frustrating than interesting, and the endorphin boosts are meaningless artifacts of how the system is designed. Nobody really thinks otherwise, and yet we do it anyway. It's pathetically, absurdly irrational.

Facebook keeps me in touch with my friends and family. Admittedly, there is very little downside to this one. I frankly love hearing from old high school friends that otherwise I might not hear from for years. Facebook keeps me a little closer to my extended family. That's a great thing. A common response to this is that the quality of our interactions is much worse than it would have been otherwise. But if I'm going to be honest with myself, I just don't see this. I mean, Facebook lets me see remarks from my funny and nice old friends from high school, and I probably wouldn't talk to them at all if it weren't for Facebook (sorry, friends, but I think you understand! There isn't enough time in the day to keep up with all the friends I've ever made in my life!). There's no downside there. And no, I don't think it makes my relationship with my family any worse. I think it makes it a little better.

So what about the disadvantages of social media?

We are driven by algorithms. Facebook, Twitter, and the rest carefully design algorithms that highlight the posts our friends make to fit their purposes, which are not ours. The whole system has been designed by psychologists to hook us into participating as much as possible, and it frequently succeeds.

Social media companies spy on us. And they make it easier for other companies, organizations, and (most concerning to me) potentially repressive governments to do so. And by participating, we endorse that behavior. That seems extremely irrational.

Social media companies have started to openly censor their political opponents. And again, if you participate, you're endorsing that behavior. Continuing to participate under those circumstances is irrational for conservatives and libertarians.

I sometimes get kind of addicted. I go through phases where I use social media a lot, and that can be a pretty awful waste of time, at least when I have many other things I should be doing. This is the main reason I think the right strategies are "cold turkey" and "you won't see me again"--like it or not. In short, I want to minimize temptation.

We indulge in petty debates that are beneath us. This bothers me. I don't like dignifying disgusting propaganda with a response, but I seem not to be able to restrain myself when I come across it in my feeds. Often, a proper response would require an essay; but I'd be writing an essay in response to an idiotic meme (say), which is kind of pathetic. I'd much rather have long-form debates on my blog (or between blogs that reply to each other, as we used to do).

It takes time away from more serious writing. I can write for publication. So why should I waste my time writing long Facebook posts that only a few people see? For things not quite worthy of publication, at least if I focus on my blog, I can write at a longer length and develop an argument more completely. Did you use to have a blog on which you had longer, better things to say?

So it's a waste of time, on balance. The opportunity cost is too high. I can and should be spending my time in better ways--work, programming study, helping to homeschool my boys, and doing more serious writing. That's the bottom line. Apart from keeping me in touch with family and friends on Facebook, the advantages of social media are pretty minimal, while the disadvantages are huge and growing.

Why don't I just limit my social media use to personal interactions with family and friends on Facebook, you ask? Because I don't want to take the risk of falling back into bad old habits. My friends can visit my blog and interact with me here, if they want. My family I'll call and visit every so often.

So I'm turning the page. I don't expect this to be big news for anybody. But it's going to change the way I interact online. If you want to keep seeing me online, start following my blog.

3. Can I really do this?

I suppose I've given a reasonably good analysis of why using social media is irrational. I've said similar things before, and many others have as well. And yet we keep using social media. Obviously, human beings are often not guided by rationality; much would be different in our crazy old world if we always were.

It is remarkable, though, just how much we acknowledge all the irrationalities about social media, and yet we indulge in it anyway. There's something deeply cynical about this. It can't be good for the soul.

The big question in my own mind is whether I will really be able to stay away from social media as I say I will. My use of social media is irrational, sure. But I don't pretend that the mere fact that it is irrational is, all by itself, enough to motivate me; indeed, I'm not sure who it is rational for, apart from the very few people who make a career out of it.

But I want to try. And as I said at the start of this post, it's not just about social media. It's about making my life more rational. So at the same time, I want to start eating more healthily and exercising more regularly, going to bed earlier, etc. Doing all that at once seems very ambitious. It might even seem silly and naive for me to say all this. But the insights I've reported on in part 1 above have really stuck in my mind, and they don't seem to be going away. So we'll see.


So I tried out Gab.ai

After the recent purges of Alex Jones and assorted conservatives and libertarians by Facebook, YouTube, Twitter, and others, I decided it really is time for me to learn more about other social networks that are more committed to free speech. I decided to try Gab.ai, hoping against hope that it wouldn't prove to be quite as racist as it is reputed to be.

See, while I love freedom of speech and will strongly defend the right of free speech—sure, even of racists and Nazis, even of Antifa and Communists—I don't want to hang out in a community dominated by actual open racists and Nazis. How boring.

So I went to the website, and, well, Gab.ai certainly does have a lot of people who are at least pretending to be Nazis. I never would have guessed there were that many Nazis online.

To support my impression, I posted a poll:

 Are you OK with all the open racism and anti-Semitism on Gab.ai?
   Yes: 57%
   I tolerate it: 37%
   Makes me want to leave: 6%

Wow! 1,368 votes! I sure hit a nerve with Gab.ai. But the results, well, they were disappointing: 57% of self-selected poll answerers on the web poll said they were OK with open racism on Gab.ai, 37% tolerated it, and it made 6% of them want to leave. But I was told by several people that I should have added another option: "That's what the Mute button is for."

There's another reason I've spent this much time exploring the site. It's that I really doubt there are that many actual Nazis on the site. Consider for a moment:

  1. The Establishment is increasingly desperate to silence dissenting voices.
  2. Gab.ai and some other alternative media sites have been getting more popular.
  3. Silicon Valley executives know the fate of MySpace and Yahoo: it's possible for giants to be replaced. Users are fickle.
  4. Like progressives, most conservatives aren't actually racist, and they will be put off by communities dominated by open, in-your-face racists.
  5. There's a midterm election coming up and people are spending untold millions to influence social media, since that, we are now told, is where it's at.

Considering all that, it stands to reason that lots of left-wing trolls are being paid (or happily volunteer; but no doubt many are paid) to flood Gab.ai and make appallingly racist, fascist, anti-Semitic accounts. Of course they are; it's an obvious strategy. The only question is how many—i.e., what percentage of the Gab.ai users—consist of such faux racists.

Such trolls aside, there are at least two broad categories of people on Gab.ai. In one category there are the bona fide racists, Nazis, anti-Semites, and other such miscreants, and in the other category there is everyone else—mostly conservatives, libertarians, and Trump voters who do things like share videos of (black conservative) Candace Owens and shill for Trump (I voted for Gary Johnson, and I've always been bored by political hackery). The latter category of user mutes those of the former category, apparently.

So, feeling desperate for an alternative to Twitter, I spent a few hours today on the site, mostly muting racists, and spending a bit of time getting introduced to some people who assured me that most of the people on the site were decent and non-racist, and that what you had to do—especially in the beginning—was spend a lot of time doing just what I was doing: muting racists.

Boy, are there a lot of racists (or maybe faux racists) there to mute. I still haven't gotten to the end of them.

But I'm not giving up on Gab.ai, not yet. Maybe it'll change, or my experience will get better. A lot of people there assured me that it would. I love that it's as committed to free speech as it is, and I wouldn't want to censor all those racists and Nazis just as I wouldn't want to censor Antifa and Communists. Keep America weird, I say!

If it's not Gab.ai, I do think some other network will rise. Two others I need to spend more time on are Steemit.com, a blockchain blogging website, similar to Medium and closely associated with EOS and Block.one, and Mastodon.social, which is sort of a cross between Twitter and Facebook. Steemit has become pretty popular (more so than Gab.ai), while Mastodon has unfortunately been struggling. I also want to spend more time on BitChute, a growing and reasonably popular YouTube competitor.


I am informed; you are misinformed; and the government should do something about this problem

Poynter, the famous journalism thinktank, has published "A guide to anti-misinformation actions around the world." This sort of thing is interesting not just for the particular facts it gathers but also for the assumptions and categories it takes for granted. The word "misinformation" is thrown around, as are "hate speech" and "fake news." The European Commission, it seems, published a report on "misinformation" (the report itself says "disinformation") in order to "help the European Union figure out what to do about fake news." Not only does this trade on a ridiculously broad definition of "disinformation," it assumes that disinformation is somehow a newly pronounced or important problem, that it is the role of a supranational body (the E.U.) to figure out what to do about this problem, and that it is also the role of that body to "do something." Mind you, there might be some government "actions" that strike me as possibly defensible; but the majority of those I reviewed looked awful.

For example, look at what Italy has done:

A little more than a month before the general election, the Italian government announced Jan. 18 that it had set up an online portal where citizens could report fake news to the police.

The service, which prompts users for their email addresses, a link to the fake news and any social media networks they saw it on, ferries reports to the Polizia Postale, a unit of the state police that investigates cyber crime. The department will fact-check them and — if laws were broken — pursue legal action. At the very least, the service will draw upon official sources to deny false or misleading information.

That plan came amid a national frenzy over fake news leading into the March 4 election and suffered from the same vagueness as the ones in Brazil, Croatia and France: a lacking definition of what constitutes "fake news."

Poynter, which I think it's safe to say is an Establishment thinktank, mostly just dutifully reports on these developments. In their introduction, they do eventually (in the fourth paragraph) get around to pointing out some minor problems with these government efforts: the difficulty of defining "fake news" and, of course, that pesky free speech thing.

That different countries are suddenly engaging in press censorship is only part of the news. The other part is that Poynter, representing the journalistic Establishment, apparently does not find it greatly alarming that governments are "taking action." Well, I do. Just consider the EU report's definition of "disinformation":

Disinformation as defined in this Report includes all forms of false, inaccurate, or misleading information designed, presented and promoted to intentionally cause public harm or for profit.

This implies that if, in the opinion of some government authority, some claim is merely false and, like most professional publishing operations, it is published for profit, then it counts as disinformation. This means that (with an exception made for non-profit publishers, apparently) the E.U. considers anything false to be an item of disinformation, and thus presumably ripe for some sort of regulation or sanction.

Well, of course this sounds ridiculous, but I am just reading. It's not my fault if that's what the report says. I mean, I'm sorry, but it certainly does look as if the E.U. wants to determine what's false and to then to ban it (or something). Of course, the definition does first say that disinformation is designed to intentionally cause public harm, but anybody who reads legalistic texts needs to bear in mind that, as far as the law is concerned, the parts that come after "or" and "and" are just as important as the parts that come before. The text does say "or for profit." Is that because in the E.U., seeking profit is as suspect as intentionally causing public harm?

The difficulty about texts like this, aside from the fact that they are insufferably dull, is that they are so completely chock-full of bad writing, bad reasoning, false assumptions, and so forth, that it would take several volumes to say everything that needs saying about the E.U. report and Poynter's run-down of government actions. What about all the important issues associated with what looks like a worldwide crackdown on free speech? They have been solved, apparently.

Poynter at least has the good sense to acknowledge difficulties, as they do at the end of the discussion of Italy's regulatory scheme. The government positions are appalling, as if they were saying: "We know what fake news and disinformation and misinformation are, more or less. Sure, there's a small intellectual matter of defining them, but no big deal there. It's just a matter of deciding what needs to be done. Free speech, well, that's just another factor to be weighed."

Just imagine reading this page twenty years ago. It would have been regarded as an implausible horrorshow. I imagine how someone might have responded to a glimpse 20 years into the future:

What are you saying--in 2018, countries all around the world will decide that it's time to start seriously cracking down on "misinformation" because it's too easy to publish false stuff online, never mind free speech and freedom of the press? That's ridiculous. It's one thing to get upset about "political incorrectness," but it's another thing altogether for the freedom-loving West, and especially for journalists (for crying out loud!) to so bemoan "hate speech" and "fake news" (really?) that they'll give up free speech and start calling on their governments to exert control. That's just...ridiculous. Do you think we'll forget everything we know about free speech and press freedom in 20 years?

Well, it would have been ridiculous in 1998. Twenty years later, it still should be, but apparently it isn't for so many sophisticated, morally enlightened leaders who can identify what is true and what is misinformation.

It's time to push back.


Is it time to move from social media to blogs?

This began as a Twitter thread.

I've finally put my finger on a thing that annoys me—probably, all of us—about social media. When we check in on our friends and colleagues and what they're sharing, we are constantly bombarded with simplistic attacks on our core beliefs, especially political beliefs. "This cannot stand," we say. So we respond. But it's impossible to respond in the brief and fast-paced media of Twitter and Facebook without being simplistic or glib. So the cycle of simplistic glibness never stops.

There are propagandists (and social media people...but I repeat myself) who love and thrive on this simplicity. Their messages are more plausible and easier to get upset about when stated simply and briefly. They love that. That's a feature, not a bug (they think)!

I feel like telling Tweeps and FB friends "Be more reasonable!" and "Use your brain!" and "Chill!" But again—everything seems sooooo important, because our core beliefs are under attack. How can most people be expected to be calm and reasonable? People who take high standards of politeness and methodology seriously naturally feel like quitting. But social media has become important for socializing, PR, career advancement, and (let's face it) the joy of partisanship. "I can't quit you!" we moan. But, to quote a different movie, this aggression will not stand, man. Our betters at Twitter and Facebook agree, and so they have decided to force the worst actors to play nice. But they can't be trusted to identify "the worst actors" fairly. They're choosing the winners.

What's the solution for those of us who care about truth, nuance, and decency—and free speech? I don't know, but I have an idea. Rather than letting Facebook and Twitter (and their creeping censorship) control things, I'm going to put content updates on my blog. I'll still use Twitter and Facebook for Everipedia announcements and talk, and I'll link to blog updates from both places. But you'll have to visit my blog to actually read my more personal content. Anyway, I'm going to give that a try.


The Well-Ordered Life

The well-ordered life may be defined as that set of sound beliefs and good practices which are most conducive to productivity and therefore happiness, at least insofar as happiness depends on productivity.

The well-ordered life has several types of components: goals; projects, which naturally flow from goals and which are essentially long-term plans; habits, or actions aimed at the goals which one performs regularly; plans for the day or week; assessments, or stock-taking, which means evaluating the whole; and, different from all of these, a set of beliefs and states of attention that support the whole.

Let me explain the general theory behind the claim that the beliefs and practices I have in mind do, in fact, conduce to productivity and thus happiness. There is a way to live, which many of us have practiced at least from time to time and which some people practice quite a lot, which has been variously described as “peak performance,” “getting things done,” “self-discipline,” or as I will put it, a “well-ordered life.”

This generally involves really accepting, really believing in, certain of what might be called life goals. If you do not believe in these goals, the whole thing breaks down. Next, flowing from these goals, you must embrace certain projects; the projects must be broad and long-term, meaning they incorporate many different activities but have a definite end point. These must be tractable and perfectly realistic, and again, you must be fully “on board” with the wisdom of these projects. Projects can include things like writing a paper, working through a tutorial, writing a large computer program, and much more, depending on your career.

This background—your global goals and your significant, big projects—is the backdrop for your daily life. If this backdrop is not well-ordered, then your daily life will fall apart. If you lose faith in your goals, little everyday activities will be hard to do. Similarly, if you decide that a certain project does not serve your goals, you will not be able to motivate yourself to take action. So you must guard your commitment to your goals and projects jealously, and if it starts to get shaky, you need to reassess as soon as possible.

Your daily life is structured by three main things: habits, plans, and assessments. Your habits are like the structure of your day. They can, and probably should, include a schedule; they are regular activities that move you toward the completion of a project. Plans are like the content of your day. The habits and schedule might give you an outline, but you still need to think through how to flesh out that outline. Finally, there are assessments, which can be done at the same time as planning, but which involve evaluating your past performance, introspecting about how you feel about everything, and frankly squashing irrational thoughts that are getting in the way.

Such a life is well-ordered because projects flow from goals, while habits, plans, and assessments are all in service of the projects and, through them, the goals. It is a system with different parts; but the parts all take place in your life, meaning they at bottom take the form of beliefs and actions that you strongly identify with and that actually make up who you are.

This then leads to the last element of the well-ordered life: beliefs that support the whole. As we move through life, we are not in direct control of our beliefs or even of most of our actions. We find ourselves holding beliefs or attitudes that we do not wish to hold, or that even surprise or dismay us. Such beliefs can greatly support a well-ordered life, but they can also undermine it entirely. If you believe a goal is entirely unattainable, or a project undoable, you will probably lack the motivation needed to pursue it.

This is why assessment, or stock-taking, is so important if you are to maintain a well-ordered life, especially if you tend to be depressed or nervous or your self-confidence is low. You need to explore and, as it were, tidy up your mind.

You should expel any notion that self-discipline is a matter of luck, as if some people have it and others don’t. It is also an error to think self-discipline is a matter of remembering some brilliant insight you or someone else had, or staying in the right frame of mind. Indeed, self-discipline is not any one thing at all. It is, as I said, a system, with various working parts.

It is true that some people just rather naturally fall into the good habits and beliefs that constitute the well-ordered life. But the vast majority of us do not. The better you understand these parts, bear them in mind, and work on them until the whole thing is a finely-tuned machine, the more control you’ll have over your life. This is not easy and, like any complex system, a lot can go wrong. That’s why it’s so necessary to take stock and plan.


How to crowdsource videos via a shared video channel

I got to talking to one of my colleagues here at Everipedia, the encyclopedia of everything, where I am now CIO, about future plans. I had the following idea.

We could create an Everipedia channel--basically, just a YouTube account, but owned by Everipedia and devoted to regularly posting new videos.

We could invite people to submit videos to us; if they're approved, we put branding elements on them and post them. We share some significant amount of the monetization (most of it) with the creator.

We also feature the videos at the top of the Everipedia article about the topic.
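To make the moving parts concrete, here is a minimal sketch, in Python, of the workflow I have in mind: submit, approve, brand, post, split the revenue, and feature the video on the matching article. Every name and number in it (the VideoSubmission and ChannelLedger classes, the 80 percent creator share) is a hypothetical illustration for thinking out loud, not any actual Everipedia or YouTube interface.

# A sketch of the crowdsourced-channel workflow, with assumed names and numbers.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class VideoSubmission:
    creator: str
    topic: str         # the Everipedia article the video belongs to
    url: str           # where the creator's raw upload lives
    approved: bool = False
    branded: bool = False

@dataclass
class ChannelLedger:
    creator_share: float = 0.80   # assumed split: most of the monetization goes to the creator
    payouts: Dict[str, float] = field(default_factory=dict)

    def record_revenue(self, sub: VideoSubmission, amount: float) -> None:
        # Split ad revenue between the creator and the channel.
        self.payouts[sub.creator] = self.payouts.get(sub.creator, 0.0) + amount * self.creator_share

def publish(sub: VideoSubmission, featured: List[str]) -> None:
    # Approve the submission, add branding, and feature it at the top of its article.
    sub.approved = True
    sub.branded = True
    featured.append(sub.topic)

featured_articles: List[str] = []
ledger = ChannelLedger()
video = VideoSubmission(creator="alice", topic="Photosynthesis", url="https://example.com/raw.mp4")
publish(video, featured_articles)
ledger.record_revenue(video, 100.0)   # e.g., $100 of ad revenue -> $80 to the creator
print(featured_articles, ledger.payouts)

The only point of the creator_share field is to show that the split is a policy knob we would set, not something fixed by the platform.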

Who knows what could happen, but what I hope would happen is that we'd get a bunch of subscribers, because of all the connections of the video makers (and of Everipedia--we collectively have a lot of followers and a lot of traffic). And the more people we got involved, the greater the competition and the better the videos would be.

There are still huge opportunities in the educational video space--so many topics out there simply have no good free videos available.

Others must have organized group channels like this before, but I can't think of who.

What do you think?


Could God have evolved?

1. How a common argument for the existence of God failed—or did it?

As a philosophy instructor, I often taught the topic of arguments for the existence of God. One of the most common arguments, called the argument from design or teleological argument, in one formulation compares God to a watchmaker.

If you were walking along a beach and found some complex machine that did something amazing and certainly appeared to have been designed by someone, then you'd conclude that it had a maker. But here we are in a universe that exhibits far more complexity and design than any machine we've ever devised. Therefore, the universe has a maker as well; we call it God.

This is sometimes called the Watchmaker Argument—since the mechanism our beachcomber finds is usually a watch—and is attributed to William Paley. Variations on this theme could be the single most commonly-advanced argument for God.

The reason the Watchmaker Argument doesn't persuade a lot of philosophers—and quite a few scientists and atheists generally—is that all the purported signs of design can be found in the biological world, and if biological complexity and appearance of design can be explained by natural selection, then God is no longer needed as an explanatory tool.

Some skeptics go a bit further and say that all the minds we have experience of are woefully inadequate for purposes of designing the complexity of life. Therefore, not only are natural mechanisms another explanation, they are a much better explanation, as far as our own experience of minds and designing is concerned.

But here I find myself skeptical of these particular skeptics.

2. Modern technology looks like magic

Recently, probably because I've been studying programming and am understanding the innards of technology better than ever, it has occurred to me very vividly that we may not be able to properly plumb the depths of what minds are capable of achieving. After all, imagine what a medieval peasant would make of modern technology. As lovers of technology often say, it would look like magic, and we would look like gods.

We've been working at this scientific innovation thing for only a few centuries, and we've been aggressively and intelligently innovating technology for maybe one century. Things we do now in 2017 are well into the realm of science fiction of 1917. We literally cannot imagine what scientific discovery and technological innovation will make available to us after 500 or 1000 years. Now let's suppose there are advanced civilizations in the galaxy that have been around for a million years.

Isn't it now hackneyed to observe that life on Earth could be a failed project of some super-advanced alien schoolchild? After all, we already are experimenting with genetic engineering, a field that is ridiculously young. As we unlock the secrets of life, who's to say we will not be able to engineer entirely different types of life, every bit as complex as the life we find on Earth, and to merge with our inventions?

Now, what havoc should these reflections wreak on our religious philosophy?

3. Could an evolved superbeing satisfy the requirements of our religions?

The scientific atheist holds the physical universe in great reverence, as something that exists in its full complexity far beyond the comprehension of human beings. The notion of a primitive "jealous God" of primitive religions is thought laughable, in the face of the immense complexity of the universe that this God is supposed to have created. Our brains are just so much meat, limited and fallible. The notion that anything like us might have created the universe is ridiculous.

Yet it is in observing the development of science and technology, and thinking about how we ourselves might be enhanced by that science and technology, that we might come to an opposite conclusion. Perhaps the God of nomadic tent-dwellers couldn't design the universe. But what if there is some alien race that has evolved past where we are now for millions of years? Imagine that there is a billion-year-old superbeing. Is such a being possible? Consider the inventiveness, computing power, genetic engineering, and technological marvels we're witnessing today. Many sober heads think the advent of AI may usher in the Singularity within a few decades. What happens a million years after that? Could the being or beings that evolve create moons? Planets? Suns? Galaxies? Universes?

And why couldn't such a superbeing turn out to be the God of the nomadic tent-dwellers?

Atheists are wrong to dismiss the divine if they do so on grounds that no gods are sufficiently complex to create everything we see around us. They believe in evolution and they see technology evolving all around us. Couldn't god-like beings have evolved elsewhere and gotten here? Could we, after sufficient time, evolve into god-like beings ourselves?

What if it turns out that the advent of the Singularity has the effect of joining us all to the Godhead that is as much technological as it is physical and spiritual? And suppose that's what, in reality, satisfies the ancient Hebrew notions of armageddon and heaven, and the Buddhist notion of nirvana. And suppose that, when that time comes, it is the humble, faithful, just, generous, self-denying, courageous, righteous, respectful, and kind people that are accepted into this union, while the others are not.

4. But I'm still an agnostic

These wild speculations aren't enough to make me any less of an agnostic. I still don't see evidence that God exists, or that the traditional (e.g., Thomistic) conception of God is even coherent or comprehensible. For all we know, the universe is self-existing and life on Earth evolved, and that's all the explanation we should ever expect for anything.

But these considerations do make me much more impressed by the fact that we do not understand how various minds in the universe might evolve, or might have evolved, and how they might have already interacted with the universe we know. There are facts about these matters about which we are ignorant, and the scientific approach is to withhold judgment about them until the data are in.


On intellectual honesty and accepting the humiliation of error

I. The virtue of intellectual honesty.

Honesty is a greatly underrated epistemic virtue.

There is a sound reason for thinking so. It turns out that probably the single greatest source of error is not ignorance but arrogance, not lack of facts but dogmatism. We leap to conclusions that fit with our preconceptions without testing them. Even when we are more circumspect, we frequently rule out views that turn out to be correct because of our biases. Often we take the easy way out and simply accept whatever our friends, religion, or party says is true.

These are natural habits, but there is a solution: intellectual honesty. At root, this means deep commitment to truth over our own current opinion, whatever it might be. That means accepting clear and incontrovertible evidence as a serious constraint on our reasoning. It means refusing to accept inconsistencies in one's thinking. It means rejecting complexity for its own sake, whereby we congratulate ourselves for our cleverness but rarely do justice to the full body of evidence. It means following the evidence where it leads.

The irony is that some other epistemic virtues actually militate against wisdom, or the difficult search for truth.

Intelligence or cleverness, while in itself an obvious benefit, becomes a positive hindrance when we become unduly impressed with ourselves and the cleverness of our theories. This is perhaps the single biggest reason I became disappointed with philosophy and left academe; philosophers are far too impressed with complex and clever reasoning, paying no attention to fundamentals. As a result, anyone who works from fundamentals finds it to be child's play (as a grad student, I thought I did) to poke holes in fashionable theories. This is not because I was more clever than those theoreticians but because they simply did not care about certain constraints that I thought were obvious. And it's easy for them in turn to glibly defend their views; so it's a game, and to me it became a very tiresome one.

Another overrated virtue is, for lack of a better name, conventionality. In every society, every group, there is a shared set of beliefs, some of which are true and some of which are false. I find that in both political and academic discussions, following these conventions is held to be a sign of good sense and probity, while flouting them ranges from suspect to silly to evil. But there has never yet been any group of people with a monopoly on truth, and the inherent difficulty of everything we think about means that we are unlikely to find any such group anytime soon. I think most of my liberal friends are—perhaps ironically—quite conventional in how they think about political issues. Obviously conservatives and others can be as well.

Another virtue, vastly overrated today, is being "scientific." Of course, science is one of the greatest inventions of the modern mind, and it continues to produce amazing results. I am also myself deeply committed to the scientific method and empiricism in a broad sense. But it is an enormous mistake to think that the mere existence of a scientific consensus, especially in the soft sciences, means that one may simply accept what science tells us is true. The strength of a scientific theory is not determined by a poll but by the quality of the evidence. Yet the history of science is the history of dogmatic groups of scientists having their confidently-held views corrected or entirely replaced. The problem is a social one; scientists want the respect of their peers and as a result are subject to groupthink. In an age of scientism, this problem bleeds into the general nonscientific population, with dogmatists attempting to support their views with supposedly unquestionable (but often badly-constructed and inadequate) "studies"; rejecting anyone's argument, regardless of how strong, if it is not presented with "scientific support"; and dismissing any non-scientist opining on a subject about which a scientist happens to have some opinion. As wonderful as science is, the fact is that we are far more ignorant than we are knowledgeable, even today, in 2017, and we would do well to remember that.

Here's another overrated virtue: incisiveness. Someone is incisive if he produces trenchant replies that allow his friends to laugh at the victims of his wit. Sometimes balloons need to be punctured, and sometimes there is nothing there once they are deflated—of course. But problems arise when glib wits attack more complex theories and narratives. It is easy to tear down and hard to build. Fundamentally, my issue is that we need to probe theories and narratives that are deeply rooted in facts and evidence; simply throwing them on the scrap heap in ridicule means we do not fully learn what we can from the author's perspective. In philosophy, I'm often inclined to a kind of syncretistic approach which tips its hat to various competing theories that each seem to have their hands on different parts of the elephant. Even in politics, even if we have some very specific policy recommendation, much has been lost if we simply reject everything the other side says in the rough and tumble of debate.

I could go on, but I want to draw a conclusion here. When we debate and publish with a view to arriving at some well-established conclusions, we are as much performing for others as we are following anything remotely resembling an honest method for seeking the truth. We, with the enthusiastic support of our peers, are sometimes encouraged to think that we have the truth when we are still very far indeed from having demonstrated it. By contrast, sometimes we are shamed for considering certain things that we should feel entirely free to explore, because they do contain part of the truth. These social effects get in the way of the most efficient and genuine truth-seeking. The approach that can be contrasted with all of these problems is intellectual honesty. This entails, or requires, courageous individualism, humility, integrity, and faith or commitment to the cause of truth above ideology.

It's sad that it is so rare.


II. The dangers of avoiding humiliation.

The problem with most people laboring under error (I almost said "stupid people," but many of the people I have in mind are in fact very bright) is that, when they finally realize they were in error, they can't handle the shame of knowing it, especially if they held their beliefs with any degree of conviction. Many people find error to be deeply humiliating. Remember the last time you insisted that a word meant one thing when it actually meant another, when you cited some misremembered statistic, or when you thought you knew someone who turned out to be a stranger. It's no fun!

Hence we are strongly motivated to deny that we are, in fact, in error, which creates the necessity of various defenses. We overvalue supporting evidence ("Well, these studies say...") and undervalue disconfirming evidence ("Those studies must be flawed"). Sometimes we just make up evidence, convincing ourselves that we somehow know things ("I have a hunch..."). We seek to discredit people who present us with disconfirming evidence, to avoid having to consider or respond to it ("Racist!").

In short, emotional and automatic processes lead us to avoid concluding that we are in error. Since we take conscious interest in defending our views, complex explanatory methods are deployed in the same effort. ("Faith is a virtue.") But these processes and methods, by which we defend our belief systems, militate in favor of further error and against accepting truth. ("Sure, maybe it sounds weird, but so does a lot of stuff in this field.") This is because propositions, whether true or false, tend to come in large clusters or systems that are mutually supporting. Like lies, if you support one, you find yourself committed to many more.

In this way, our desire to avoid the humiliation of error leads us into complex systems of confusion—and, occasionally, into patterns of thinking that can be called simply evil. ("The ends justify the means.") They're evil because the pride involved in supporting systematically wrong systems of thought drives people into patterns of defense that go beyond the merely psychological and into the abusive, the psychologically damaging, and the physical. ("We can't tolerate the intolerant!" "Enemy of the people." "Let him be anathema.")

What makes things worse is that, when we are coming to grips with our error, we are not unique atoms each confronting a nonhuman universe. We are members of like-minded communities. We take comfort that others share our beliefs. This spreads out the responsibility for the error. ("So-and-so is so smart, and he believes this.") It is much easier to believe provably false things if many others do as well, and if they are engaged in the same processes and methods in defending themselves and, by extension, their school of thought.

This is how we systematically fail to understand each other. ("Bigot!" "Idiot!") This is why some people want to censor other people. ("Hate speech." "Bad influence.") This is how wars start.

Maybe, just maybe, bad epistemology is an essential cause of bad politics.

(I might be wrong about that.)

It's better to just allow yourself to be humiliated, and go where the truth leads. This is the nature of skepticism.

This, by the way, is why I became a philosopher and why I commend philosophy to you. The mission of philosophy is—for me, and I perhaps too dogmatically assert that it ought to be the mission for others—to systematically dismantle our systems of belief so that we may begin from a firmer foundation and accept only true beliefs.

This was what Socrates and Descartes knew and taught so brilliantly. Begin with what you know on a very firm foundation, things that you can see for yourself ("I know that here is a hand"), things that nobody denies ("Humans live on the surface of the earth"). And as you make inferences, as you inevitably will and must, learn the canons of logic and method so that you can correctly apportion your strength of belief to the strength of the evidence.

There is no way to do all this without frequently practicing philosophy and frequently saying, "This might or might not support my views; I don't know." If you avoid the deeper questions, you are ipso facto being dogmatic and, therefore, subject to the patterns of error described above.