So I tried out Gab.ai

After the recent purges of Alex Jones and assorted conservatives and libertarians by Facebook, YouTube, Twitter, and others, I decided it really is time for me to learn more about other social networks that are more committed to free speech. I decided to try Gab.ai, hoping against hope that it wouldn't prove to be quite as racist as it is reputed to be.

See, while I love freedom of speech and will strongly defend the right of free speech—sure, even of racists and Nazis, even of Antifa and Communists—I don't want to hang out in a community dominated by actual open racists and Nazis. How boring.

So I went to the website, and, well, Gab.ai certainly does have a lot of people who are at least pretending to be Nazis. I never would have guessed there were that many Nazis online.

To support my impression, I posted a poll:

Are you OK with all the open racism and anti-Semitism on Gab.ai?
Yes: 57%
I tolerate it: 37%
Makes me want to leave: 6%

Wow! 1,368 votes! I sure hit a nerve with Gab.ai. But the results, well, they were disappointing: 57% of the self-selected respondents said they were OK with open racism on Gab.ai, 37% said they tolerated it, and 6% said it made them want to leave. But I was told by several people that I should have added another option: "That's what the Mute button is for."

There's another reason I've spent this much time exploring the site: I really doubt there are that many actual Nazis there. Consider for a moment:

  1. The Establishment is increasingly desperate to silence dissenting voices.
  2. Gab.ai and some other alternative media sites have been getting more popular.
  3. Silicon Valley executives know the fate of MySpace and Yahoo: it's possible for giants to be replaced. Users are fickle.
  4. Like progressives, most conservatives aren't actually racist, and they will be put off by communities dominated by open, in-your-face racists.
  5. There's a midterm election coming up, and people are spending untold millions to influence social media, since that, we are now told, is where it's at.

Considering all that, it stands to reason that lots of left-wing trolls are being paid (or happily volunteer; but no doubt many are paid) to flood Gab.ai and make appallingly racist, fascist, anti-Semitic accounts. Of course they are; it's an obvious strategy. The only question is how many—i.e., what percentage of the Gab.ai users—consist of such faux racists.

Such trolls aside, there are at least two broad categories of people on Gab.ai. In one category there are the bona fide racists, Nazis, anti-Semites, and other such miscreants, and in the other category there is everyone else—mostly conservatives, libertarians, and Trump voters who do things like share videos of (black conservative) Candace Owens and shill for Trump (I voted for Gary Johnson, and I've always been bored by political hackery). The latter category of user mutes those of the former category, apparently.

So, feeling desperate for an alternative to Twitter, I spent a few hours today on the site, mostly muting racists, and getting introduced to some people who assured me that most of the people on the site were decent and non-racist, and that what you had to do—especially in the beginning—was spend a lot of time doing just what I was doing: muting racists.

Boy, are there a lot of racists (or maybe faux racists) there to mute. I still haven't gotten to the end of them.

But I'm not giving up on Gab.ai, not yet. Maybe it'll change, or my experience will get better. A lot of people there assured me that it would. I love that it's as committed to free speech as it is, and I wouldn't want to censor all those racists and Nazis just as I wouldn't want to censor Antifa and Communists. Keep America weird, I say!

If it's not Gab.ai, I do think some other network will rise. Two others I need to spend more time on are Steemit.com, a blockchain blogging website similar to Medium and closely associated with EOS and Block.one, and Mastodon.social, which is sort of a cross between Twitter and Facebook. Steemit has become pretty popular (more so than Gab.ai), while Mastodon has unfortunately been struggling. I also want to spend more time on BitChute, a growing and reasonably popular YouTube competitor.


I am informed; you are misinformed; and the government should do something about this problem

Poynter, the famous journalism think tank, has published "A guide to anti-misinformation actions around the world." This sort of thing is interesting not so much for the particular facts it gathers as for the assumptions and categories it takes for granted. The word "misinformation" is thrown around, as are "hate speech" and "fake news." The European Commission, it seems, published a report on "misinformation" (the report itself says "disinformation") in order to "help the European Union figure out what to do about fake news." Not only does this trade on a ridiculously broad definition of "disinformation," it assumes that disinformation is somehow a newly pronounced or important problem, that it is the role of a supranational body (the E.U.) to figure out what to do about this problem, and that it is also the role of that body to "do something." Mind you, there might be some government "actions" that strike me as being possibly defensible; but the majority that I reviewed looked awful.

For example, look at what Italy has done:

A little more than a month before the general election, the Italian government announced Jan. 18 that it had set up an online portal where citizens could report fake news to the police.

The service, which prompts users for their email addresses, a link to the fake news and any social media networks they saw it on, ferries reports to the Polizia Postale, a unit of the state police that investigates cyber crime. The department will fact-check them and — if laws were broken — pursue legal action. At the very least, the service will draw upon official sources to deny false or misleading information.

That plan came amid a national frenzy over fake news leading into the March 4 election and suffered from the same vagueness as the ones in Brazil, Croatia and France: a lacking definition of what constitutes "fake news."

Poynter, which I think it's safe to say is an Establishment think tank, mostly just dutifully reports on these developments. In their introduction, they do eventually (in the fourth paragraph) get around to pointing out some minor problems with these government efforts: the difficulty of defining "fake news" and, of course, that pesky free speech thing.

That different countries are suddenly engaging in press censorship is only part of the news. The other part is that Poynter, representing the journalistic Establishment, apparently does not find it greatly alarming that governments are "taking action." Well, I do. Just consider the EU report's definition of "disinformation":

Disinformation as defined in this Report includes all forms of false, inaccurate, or misleading information designed, presented and promoted to intentionally cause public harm or for profit.

This implies that if, in the opinion of some government authority, some claim is merely false and it is published for profit, as most professional publishing is, then it counts as disinformation. This means that (with an exception made for non-profit publishers, apparently) the E.U. considers anything false to be an item of disinformation, and thus presumably ripe for some sort of regulation or sanction.

Well, of course this sounds ridiculous, but I am just reading. It's not my fault if that's what the report says. I mean, I'm sorry, but it certainly does look as if the E.U. wants to determine what's false and then to ban it (or something). Of course, the definition does first say that disinformation is designed to intentionally cause public harm, but anybody who reads legalistic texts needs to bear in mind that, as far as the law is concerned, the parts that come after "or" and "and" are just as important as the parts that come before. The text does say "or for profit." Is that because in the E.U., seeking profit is as suspect as intentionally causing public harm?

The difficulty with texts like this, aside from the fact that they are insufferably dull, is that they are so completely chock-full of bad writing, bad reasoning, false assumptions, and so forth, that it would take several volumes to say everything that needs saying about the E.U. report and Poynter's run-down of government actions. What about all the important issues associated with what looks like a worldwide crackdown on free speech? They have been settled, apparently.

Poynter at least has the good sense to acknowledge difficulties, as they do at the end of the discussion of Italy's regulatory scheme. The government positions are appalling, as if they were saying: "We know what fake news and disinformation and misinformation are, more or less. Sure, there's a small intellectual matter of defining them, but no big deal there. It's just a matter of deciding what needs to be done. Free speech, well, that's just another factor to be weighed."

Just imagine reading this page twenty years ago. It would have been regarded as an implausible horrorshow. I can imagine how someone might have responded to a glimpse 20 years into the future:

What are you saying--that in 2018, countries all around the world will decide it's time to start seriously cracking down on "misinformation" because it's too easy to publish false stuff online, never mind free speech and freedom of the press? That's ridiculous. It's one thing to get upset about "political incorrectness," but it's another thing altogether for the freedom-loving West, and especially for journalists (for crying out loud!), to so bemoan "hate speech" and "fake news" (really?) that they'll give up free speech and start calling on their governments to exert control. That's just...ridiculous. Do you think we'll forget everything we know about free speech and press freedom in 20 years?

Well, it would have been ridiculous in 1998. Twenty years later, it still should be, but apparently it isn't for so many sophisticated, morally enlightened leaders who can identify what is true and what is misinformation.

It's time to push back.


Is it time to move from social media to blogs?

This began as a Twitter thread.

I've finally put my finger on a thing that annoys me—probably, all of us—about social media. When we check in on our friends and colleagues and what they're sharing, we are constantly bombarded with simplistic attacks on our core beliefs, especially political beliefs. "This cannot stand," we say. So we respond. But it's impossible to respond in the brief and fast-paced media of Twitter and Facebook without being simplistic or glib. So the cycle of simplistic glibness never stops.

There are propagandists (and social media people...but I repeat myself) who love and thrive on this simplicity. Their messages are more plausible and easier to get upset about when stated simply and briefly. They love that. That's a feature, not a bug (they think)!

I feel like telling Tweeps and FB friends "Be more reasonable!" and "Use your brain!" and "Chill!" But again—everything seems sooooo important, because our core beliefs are under attack. How can most people be expected to be calm and reasonable? People who take high standards of politeness and methodology seriously naturally feel like quitting. But social media has become important for socializing, PR, career advancement, and (let's face it) the joy of partisanship. "I can't quit you!" we moan. But, to quote a different movie, this aggression will not stand, man. Our betters at Twitter and Facebook agree, and so they have decided to force the worst actors to play nice. But they can't be trusted to identify "the worst actors" fairly. They're choosing the winners.

What's the solution for those of us who care about truth, nuance, and decency—and free speech? I don't know, but I have an idea. Rather than letting Facebook and Twitter (and their creeping censorship) control things, I'm going to try putting content updates on my blog. I'll still use Twitter and Facebook for Everipedia announcements and talk, and I'll link to blog updates from both places. But you'll have to visit my blog to actually read my more personal content. Anyway, I'm going to give that a try.


The Well-Ordered Life

The well-ordered life may be defined as that set of sound beliefs and good practices which are most conducive to productivity and therefore happiness, at least insofar as happiness depends on productivity.

The well-ordered life has several types of components: goals; projects, which naturally flow from goals and are essentially long-term plans; habits, or actions aimed at the goals that one aims to perform regularly; plans for the day or week; assessments, or evaluations of the whole, stock-taking; and, different from all of these, a set of beliefs and states of attention that support the whole.

Let me explain the general theory behind the claim that the beliefs and practices I have in mind do, in fact, conduce to productivity and thus happiness. There is a way to live, which many of us have practiced at least from time to time and which some people practice quite a lot, which has been variously described as “peak performance,” “getting things done,” “self-discipline,” or as I will put it, a “well-ordered life.”

This generally involves really accepting, really believing in, certain of what might be called life goals. If you do not believe in these goals, the whole thing breaks down. Next, flowing from these goals, you must embrace certain projects; the projects must be broad and long-term, meaning they incorporate many different activities but have a definite end point. These must be tractable and perfectly realistic, and again, you must be fully “on board” with the wisdom of these projects. Projects can include things like writing a paper, working through a tutorial, writing a large computer program, and much more, depending on your career.

This background—your global goals and your significant, big projects—is the backdrop for your daily life. If this backdrop is not well-ordered, then your daily life will fall apart. If you lose faith in your goals, little everyday activities will be hard to do. Similarly, if you decide that a certain project does not serve your goals, you will not be able to motivate yourself to take action. So you must guard your commitment to your goals and projects jealously, and if it starts to get shaky, you need to reassess as soon as possible.

Your daily life is structured by three main things: habits, plans, and assessments. Your habits are like the structure of your day. They can and probably should include a schedule and are regular activities that move you toward the completion of a project. Plans are like the content of your day. The habits and schedule might give you an outline, but you still need to think through how to flesh out the outline. Finally, there are assessments, which can be done at the same time as plans are done, but which involve evaluating your past performance, introspecting about how you feel about everything, and frankly squashing irrational thoughts that are getting in the way.

Such a life is well-ordered because projects flow from goals, while habits, plans, and assessments are all in service of the projects and, through them, the goals. It is a system with different parts; but the parts all take place in your life, meaning they at bottom take the form of beliefs and actions that you strongly identify with and that actually make up who you are.

This then leads to the last element of the well-ordered life: beliefs that support the whole. As we move through life, we are not in direct control of our beliefs or even most of our actions. We find ourselves holding beliefs or attitudes that we do not wish to have, or that even surprise or dismay us. These beliefs can greatly support a well-ordered life, but they can also undermine it entirely. If you believe a goal is entirely unattainable, or a project undoable, you will probably lack the motivation needed to pursue it.

This is why assessment, or stock-taking, is so important if you are to maintain a well-ordered life, especially if you tend to be depressed or nervous or your self-confidence is low. You need to explore and, as it were, tidy up your mind.

You should expel any notion that self-discipline is a matter of luck, as if some people have it and others don’t. It is also an error to think self-discipline is a matter of remembering some brilliant insight you or someone else had, or staying in the right frame of mind. Indeed, self-discipline is not any one thing at all. It is, as I said, a system, with various working parts.

It is true that some people just rather naturally fall into the good habits and beliefs that constitute the well-ordered life. But the vast majority of us do not. The better you understand these parts, bear them in mind, and work on them until the whole thing is a finely-tuned machine, the more control you’ll have over your life. This is not easy, and, as with any complex system, a lot can go wrong. That’s why it’s so necessary to take stock and plan.


How to crowdsource videos via a shared video channel

I got to talking to one of my colleagues here at Everipedia, the encyclopedia of everything, where I am now CIO, about future plans. I had the following idea.

We could create an Everipedia channel--basically, just a YouTube account, but owned by Everipedia and devoted to regularly posting new videos.

We could invite people to submit videos to us; if they're approved, we put branding elements on them and post them. We share some significant amount of the monetization (most of it) with the creator.

We also feature the videos at the top of the Everipedia article about the topic.

Who knows what could happen, but what I hope would happen is that we'd get a bunch of subscribers, because of all the connections of the video makers (and of Everipedia--we collectively have a lot of followers and a lot of traffic). And the more people we got involved, the greater the competition and the better the videos would be.

There are still huge opportunities in the educational video space--so many topics out there simply have no good free videos available.

Others must have organized group channels like this before, but I can't think of who.

What do you think?


Could God have evolved?

1. How a common argument for the existence of God failed—or did it?

As a philosophy instructor, I often taught the topic of arguments for the existence of God. One of the most common arguments, called the argument from design or teleological argument, in one formulation compares God to a watchmaker.

If you were walking along a beach and found some complex machine that certainly appeared to be designed by someone, which did something amazing, then you'd conclude that it had a maker. But here we are in a universe that exhibits far more complexity and design than any machine we've ever devised. Therefore, the universe has a maker as well; we call it God.

This is sometimes called the Watchmaker Argument—since the mechanism our beachcomber finds is usually a watch—and is attributed to William Paley. Variations on this theme could be the single most commonly-advanced argument for God.

The reason the Watchmaker Argument doesn't persuade a lot of philosophers—and quite a few scientists and atheists generally—is that all the purported signs of design can be found in the biological world, and if biological complexity and appearance of design can be explained by natural selection, then God is no longer needed as an explanatory tool.

Some skeptics go a bit further and say that all the minds we have experience of are woefully inadequate for purposes of designing the complexity of life. Therefore, not only are natural mechanisms another explanation, they are a much better explanation, as far as our own experience of minds and designing is concerned.

But here I find myself skeptical of these particular skeptics.

2. Modern technology looks like magic

Recently, probably because I've been studying programming and am understanding the innards of technology better than ever, it has occurred to me very vividly that we may not be able to properly plumb the depths of what minds are capable of achieving. After all, imagine what a medieval peasant would make of modern technology. As lovers of technology often say, it would look like magic, and we would look like gods.

We've been working at this scientific innovation thing for only a few centuries, and we've been aggressively and intelligently innovating technology for maybe one century. Things we do now in 2017 are well into the realm of science fiction of 1917. We literally cannot imagine what scientific discovery and technological innovation will make available to us after 500 or 1000 years. Now let's suppose there are advanced civilizations in the galaxy that have been around for a million years.

Isn't it now hackneyed to observe that life on Earth could be a failed project of some super-advanced alien schoolchild? After all, we already are experimenting with genetic engineering, a field that is ridiculously young. As we unlock the secrets of life, who's to say we will not be able to engineer entirely different types of life, every bit as complex as the life we find on Earth, and to merge with our inventions?

Now, what havoc should these reflections wreak on our religious philosophy?

3. Could an evolved superbeing satisfy the requirements of our religions?

The scientific atheist holds the physical universe in great reverence, as something that exists in its full complexity far beyond the comprehension of human beings. The notion of a primitive "jealous God" of primitive religions is thought laughable, in the face of the immense complexity of the universe that this God is supposed to have created. Our brains are just so much meat, limited and fallible. The notion that anything like us might have created the universe is ridiculous.

Yet it is in observing the development of science and technology, and thinking about how we ourselves might be enhanced by that science and technology, that we might come to an opposite conclusion. Perhaps the God of nomadic tent-dwellers couldn't design the universe. But what if there is some alien race that has evolved for millions of years past where we are now? Imagine that there is a billion-year-old superbeing. Is such a being possible? Consider the inventions, the computing power, the genetic engineering, and the technological marvels we're witnessing today. Many sober heads think the advent of AI may usher in the Singularity within a few decades. What happens a million years after that? Could the being or beings that evolve create moons? Planets? Suns? Galaxies? Universes?

And why couldn't such a superbeing turn out to be the God of the nomadic tent-dwellers?

Atheists are wrong to dismiss the divine if they do so on the grounds that no god could be complex enough to create everything we see around us. They believe in evolution, and they see technology evolving all around us. Couldn't god-like beings have evolved elsewhere and gotten here? Could we, after sufficient time, evolve into god-like beings ourselves?

What if it turns out that the advent of the Singularity has the effect of joining us all to the Godhead that is as much technological as it is physical and spiritual? And suppose that's what, in reality, satisfies the ancient Hebrew notions of armageddon and heaven, and the Buddhist notion of nirvana. And suppose that, when that time comes, it is the humble, faithful, just, generous, self-denying, courageous, righteous, respectful, and kind people that are accepted into this union, while the others are not.

4. But I'm still an agnostic

These wild speculations aren't enough to make me any less of an agnostic. I still don't see evidence that God exists, or that the traditional (e.g., Thomistic) conception of God is even coherent or comprehensible. For all we know, the universe is self-existing and life on Earth evolved, and that's all the explanation we should ever expect for anything.

But these considerations do make me much more impressed by the fact that we do not understand how various minds in the universe might evolve, or might have evolved, and how they might have already interacted with the universe we know. There are facts about these matters about which we are ignorant, and the scientific approach is to withhold judgment about them until the data are in.


On intellectual honesty and accepting the humiliation of error

I. The virtue of intellectual honesty.
Honesty is a greatly underrated epistemic virtue.

There is a sound reason for thinking so. It turns out that probably the single greatest source of error is not ignorance but arrogance, not lack of facts but dogmatism. We leap to conclusions that fit with our preconceptions without testing them. Even when we are more circumspect, we frequently rule out views that turn out to be correct because of our biases. Often we take the easy way out and simply accept whatever our friends, religion, or party says is true.

These are natural habits, but there is a solution: intellectual honesty. At root, this means deep commitment to truth over our own current opinion, whatever it might be. That means accepting clear and incontrovertible evidence as a serious constraint on our reasoning. It means refusing to accept inconsistencies in one's thinking. It means rejecting complexity for its own sake, whereby we congratulate ourselves for our cleverness but rarely do justice to the full body of evidence. It means following the evidence where it leads.

The irony is that some other epistemic virtues actually militate against wisdom, or the difficult search for truth.

Intelligence and cleverness, while obvious benefits in themselves, become a positive hindrance when we become unduly impressed with ourselves and the cleverness of our theories. This is perhaps the single biggest reason I became disappointed with philosophy and left academe; philosophers are far too impressed with complex and clever reasoning, paying no attention to fundamentals. As a result, anyone who works from fundamentals finds it child's play (as I thought I did, as a grad student) to poke holes in fashionable theories. This is not because I was more clever than those theoreticians but because they simply did not care about certain constraints that I thought were obvious. And it's easy for them in turn to glibly defend their views; so it's a game, and to me it became a very tiresome one.

Another overrated virtue is, for lack of a better name, conventionality. In every society, every group, there is a shared set of beliefs, some of which are true and some of which are false. I find that in both political and academic discussions, following these conventions is held to be a sign of good sense and probity, while flouting them ranges from suspect to silly to evil. But there has never yet been any group of people with a monopoly on truth, and the inherent difficulty of everything we think about means that we are unlikely to find any such group anytime soon. I think most of my liberal friends are—perhaps ironically—quite conventional in how they think about political issues. Obviously conservatives and others can be as well.

Another virtue, vastly overrated today, is being "scientific." Of course, science is one of the greatest inventions of the modern mind, and it continues to produce amazing results. I am also myself deeply committed to the scientific method and empiricism in a broad sense. But it is an enormous mistake to think that the mere existence of a scientific consensus, especially in the soft sciences, means that one may simply accept whatever that consensus says is true. The strength of a scientific theory is not determined by a poll but by the quality of the evidence. Yet the history of science is the history of dogmatic groups of scientists having their confidently-held views corrected or entirely replaced. The problem is a social one; scientists want the respect of their peers and as a result are subject to groupthink. In an age of scientism this problem bleeds into the general nonscientific population, with dogmatists attempting to support their views by epistemically unquestionable (but often badly-constructed and inadequate) "studies"; rejecting anyone's argument, regardless of how strong, if it is not presented with "scientific support"; and dismissing any non-scientist who opines on a subject about which a scientist happens to have some opinion. As wonderful as science is, the fact is that we are far more ignorant than we are knowledgeable, even today, in 2017, and we would do well to remember that.

Here's another overrated virtue: incisiveness. Someone is incisive if he produces trenchant replies that allow his friends to laugh at the victims of his wit. Sometimes balloons do need to be punctured, and sometimes there really is nothing left once they are deflated—of course. But problems arise when glib wits attack more complex theories and narratives. It is easy to tear down and hard to build. Fundamentally, my issue is that we need to probe theories and narratives that are deeply rooted in facts and evidence, and simply throwing them on the scrap heap in ridicule means we do not fully learn what we can from the author's perspective. In philosophy, I'm often inclined to a kind of syncretistic approach which tips its hat to various competing theories that each seem to have their hands on different parts of the elephant. Even in politics, even if we have some very specific policy recommendation, much has been lost if we simply reject everything the other side says in the rough and tumble of debate.

I could go on, but I want to draw a conclusion here. When we debate and publish with a view to arriving at some well-established conclusions, we are as much performing for others as we are following anything remotely resembling an honest method for seeking the truth. We, with the enthusiastic support of our peers, are sometimes encouraged to think that we have the truth when we are still very far indeed from having demonstrated it. By contrast, sometimes we are shamed for considering certain things that we should feel entirely free to explore, because they do contain part of the truth. These social effects get in the way of the most efficient and genuine truth-seeking. The approach that can be contrasted with all of these problems is intellectual honesty. This entails, or requires, courageous individualism, humility, integrity, and faith or commitment to the cause of truth above ideology.

It's sad that it is so rare.


II. The dangers of avoiding humiliation.

The problem with most people laboring under error (I almost said "stupid people," but many of the people I have in mind are in fact very bright) is that, when they finally realize that they were in error, they can't handle the shame of knowing that they were in error, especially if they held their beliefs with any degree of conviction. Many people find error to be deeply humiliating. Remember the last time you insisted that a word meant one thing when it meant something else, when you cited some misremembered statistic, or when you thought you knew someone who turned out to be a stranger. It's no fun!

Hence we are strongly motivated to deny that we are, in fact, in error, which creates the necessity of various defenses. We overvalue supporting evidence ("Well, these studies say...") and undervalue disconfirming evidence ("Those studies must be flawed"). Sometimes we just make up evidence, convincing ourselves that we just somehow know things ("I have a hunch..."). We seek to discredit people who present us with disconfirming evidence, to avoid having to consider or respond to it ("Racist!").

In short, emotional and automatic processes lead us to avoid concluding that we are in error. Since we take conscious interest in defending our views, complex explanatory methods are deployed in the same effort. ("Faith is a virtue.") But these processes and methods, by which we defend our belief systems, militate in favor of further error and against accepting truth. ("Sure, maybe it sounds weird, but so does a lot of stuff in this field.") This is because propositions, whether true or false, tend to come in large clusters or systems that are mutually supporting. Like lies, if you support one, you find yourself committed to many more.

In this way, our desire to avoid the humiliation of error leads us into complex systems of confusion—and, occasionally, into patterns of thinking that can be called simply evil. ("The ends justify the means.") They're evil because the pride involved in supporting systematically wrong systems of thought drives people into patterns of defense that go beyond the merely psychological and into the abusive, the psychologically damaging, and the physical. ("We can't tolerate the intolerant!" "Enemy of the people." "Let him be anathema.")

What makes things worse is that, when we are coming to grips with our error, we are not unique atoms each confronting a nonhuman universe. We are members of like-minded communities. We take comfort that others share our beliefs. This spreads out the responsibility for the error. ("So-and-so is so smart, and he believes this.") It is much easier to believe provably false things if many others do as well, and if they are engaged in the same processes and methods in defending themselves and, by extension, their school of thought.

This is how we systematically fail to understand each other. ("Bigot!" "Idiot!") This is why some people want to censor other people. ("Hate speech." "Bad influence.") This is how wars start.

Maybe, just maybe, bad epistemology is an essential cause of bad politics.

(I might be wrong about that.)

It's better to just allow yourself to be humiliated, and go where the truth leads. This is the nature of skepticism.

This, by the way, is why I became a philosopher and why I commend philosophy to you. The mission of philosophy is—for me, and I perhaps too dogmatically assert that it ought to be the mission for others—to systematically dismantle our systems of belief so that we may begin from a firmer foundation and accept only true beliefs.

This was what Socrates and Descartes knew and taught so brilliantly. Begin with what you know on a very firm foundation, things that you can see for yourself ("I know that here is a hand"), things that nobody denies ("Humans live on the surface of the earth"). And as you make inferences, as you inevitably will and must, learn the canons of logic and method so that you can correctly apportion your strength of belief to the strength of the evidence.

There is no way to do all this without frequently practicing philosophy and frequently saying, "This might or might not support my views; I don't know." If you avoid the deeper questions, you are ipso facto being dogmatic and, therefore, subject to the patterns of error described above.


Modern education and culture, or, what did you think would happen?

I. Modern education and culture

Look at where we are in education and culture today. Let's catalog the main issues, shall we?

School children are often not taught to read properly, and too many fall behind and grow up functionally illiterate. Yet students are kept in schools practically all day and are made to do endless amounts of busywork, and then they have to do even more busywork at home. The efficiency of the work they do is appalling, as their textbooks and assignments are all too often ill-conceived, being repetitious, deadly dull, and designed without any consideration for what individual children already know (or don't know). Generally, they aren't taught classics (but more on that below). So despite all that work, despite graduating at rates as high as ever, the average child emerges into adulthood shockingly ignorant. The educational process is regimented; little humans have essentially become cogs in a giant, humorless, bureaucratic machine. The whole process is soul-killing.

Growing up in these bureaucratized intellectual ghettos, it's no wonder that rebellion has become de rigueur, that everyone calls himself an individualist although few really are. Popular culture with each passing generation is more dumbed-down, delivering entertainment that can be effortlessly consumed by maleducated conformist rebels, increasingly avoiding any scintilla of intellectualism, any uncool and boring reference to any of the roots of Western culture. On TV, in popular music, and on the Internet—the ubiquitous refuges of the young from the horrors of the educational machine that dominates their young lives—one can navigate content of all sorts without any exposure to the classics of literature and the arts, or the root ideas of Western religion and philosophy. If a few lucky students are exposed to these things at their more academic high schools, most are not, and the taste for "the best which has been thought and said" is ruined by the presentation in a system that "critiques" and renders dull as much as it celebrates and usefully explains. It's a wonder if any students emerge with any taste for the classics of Western literature, art, and thought at all.

A problem with Western culture, for the modern world, is that it is intensely critical and challenging. The classics are beautiful, but hard—both difficult to appreciate and presenting lessons that require us to take a hard, critical look at ourselves. Although the classics can be profoundly inspiring and sublime in beauty, they require time, attention, intelligence, seriousness, and sincerity to appreciate. In the context of today's soul-killing schools, students are too exhausted and overworked to meet these challenges. Many students are also too narcissistic—having been told by their parents and teachers that they are already brilliant, having been idolized by popular culture for their cool, attractiveness, and cutting-edge thinking about everything—and the classics require a kind of self-criticism that is wholly foreign to many of them. It is no wonder the classics simply do not "speak to" the youth of today.

Moreover, almost all of the classics were created by white Western men. Spending much time on them is politically regressive, or so school teachers are trained to believe. Instead, the left at universities has been building a new kind of more critical culture, at once holding up the grievances of historically marginalized groups as a new gospel while actually revering popular culture. Teachers and administrators marinate in this left-wing culture of criticism at universities for six or more years before they make their choices about which pieces of culture are worth exposing to children. So, again, it's a wonder if any students emerge with any taste for the classics.

At the college level, matters have become dire in other ways. Everyone is expected to go to college, and at the same time universities have become corporatized, so that the students are now treated as "customers" whose evaluations determine how professors should teach. So, naturally, grades have inflated—as was necessary to coddle the "self-esteem," or narcissism, of youth—and the courses themselves have been dumbed down, at least in the humanities. But who needs the humanities? Degrees in the liberal arts generally are held to be a waste of money, especially since college has become so expensive, and fewer people are pursuing such degrees. Even if one believed the knowledge gained through a liberal arts degree to be valuable enough to warrant spending $60,000/year, one spends much of the time, in most of the humanities, marinating in that same left-wing critical culture that produces our schoolteachers—so one wouldn't be exposed to the classics in the way that would incline a student to sign up for one of these degrees in the first place. So it's no wonder if students and their parents are finding it increasingly plausible to skip college altogether. This is a sad mistake, considering that young adults today, navigating a rapidly-changing world, are more in need of the wisdom and intellectual skills inculcated by a liberal arts education than ever before. And most recently, the consequences of our failure to pass on two of the ideals essential to Western thought—free speech and freedom of inquiry—have led to thoroughly illiberal efforts to "shut it down," i.e., to prevent politically unpopular ideas from getting a hearing on campus at all. This is all in the name of intersectionality, empowering the disempowered, tearing down bad old ideas, and protecting the sensitive feelings of coddled students.

II. The once-radical ideas that got us here

Our education is degraded, and we are falling away from Western civilization. So how did it come to this? I put it down to a perfect storm of terrible ideas.

(1) To be effective in a fast-changing society, we need up-to-date know-how, not theory. American society developed out of a frontier mentality that placed a premium on a "can-do" attitude, an ability to get things done, with theorizing and book-reading seen as a waste of time. While that might be understandable for the pioneers and peasants of a frontier or pre-industrial society, it is a terrible idea for the complexities of industrial and post-industrial societies, in which wisdom, trained intelligence, and sensitivity to nuance are essential. Nevertheless, American parents and teachers alike generally seem to agree that practical knowledge and know-how are more important than book-larnin'. You would think that this might have changed with more people than ever going to college. But it has not.

(2) Books are old-fashioned in the Internet age. When, in the 2000s, the Internet came into its own as the locus of modern life, we began to ask, "Is Google making us stupid?" and to "complain" that we lacked the ability to read extended texts (long articles were "tl;dr" and books boring, old, and irrelevant). I think many of us took this to heart. Educated people still do want their children to read, but the habits of adults are slowly dying; you can't expect the children to do better.

(3) Western civilization is evil. "Hey, hey, ho, ho, Western civ has got to go," chanted those Stanford students in 1988, in what became a watershed moment in the development of Western culture. At the time, it might have seemed a bit of left-wing excess, and just one side of the complex Culture War. But, in fact, it proved to be a taste of things to come. Many Western civilization requirements are long gone. What was once the province of the newly-created Women's Studies and Black Studies departments, and a few left-wing professors, gradually became the dominant viewpoint in all of the humanities. Why study the classics when the classics simply represent the point of view of the oppressor?

(4) Social justice is the new religion. Hand-in-hand with criticism of Western civilization came an increasing respect (which is good), then celebration (which is fine), and finally a veneration (which is undeserved) of everything that has traditionally been set in opposition to Western civilization, especially the usual identity groups: women, races other than white, ethnicities other than Western, religion other than Christianity, sexual orientation other than straight, etc. At universities, making these identity groups equal to straight, white, male, Christian Europeans has become nearly the only thing—apart from environmentalism and a few other such causes—that is taken really seriously. For many academics, intersectionality has replaced both religion and any apolitical ethics to become an all-encompassing worldview.

(5) Psychology is more scientific, accurate, and credible than philosophy and religion, and self-esteem must be cultivated at all costs. The gospel of self-esteem came into being in the 1970s, right around the time when the self-help publishing industry became fashionable. With the collapse of traditional (especially Christian) belief systems, people cast about for general advice on how to live their lives, and psychology delivered. As self-esteem was a key element of much self-help psychology, it was only natural that the parents of Generations X and Y would pull out all the stops to protect the feelings and sense of self-worth of their precious darlings.

We have changed. Despite their education, too many of our children cannot read well, and fewer and fewer of us read books. Whatever we do teach or read, it is rarely classical literature. Classics have become an unexplored country, dull and reviled, to many of us. Recent generations are the first in centuries in which the upper echelons of society are quite shockingly ignorant of their own Western heritage. And here I don't just mean books, I mean also basic Western principles, ideas, and values. For many young people, social justice, psychology, and especially popular culture have replaced religion and wisdom literature. Popular culture may be a crass wasteland, yet it guides our youth more than ever, as being the only kind of culture that most of them have preparation and taste for.

We have declined. In past generations, this analysis would have sounded like scaremongering. Today, the analysis has come true; it is a postmortem.

But—and here I speak to the older generation, especially educated old liberals—what did you think would happen? This is precisely what some people did predict in decades past, because society's leaders were teaching a certain set of ideas to the leaders of the next generation:

European civilization colonized and exploited the world; it is irredeemably racist and the main source of the suffering in the world today.

Inequalities are deeply unfair, and white men have the best of everything; so we should celebrate everyone else and take white men down a peg or two.

We must avoid saying anything that might even be thought to be offensive to disadvantaged identity groups.

Christianity is completely irrational and doesn't deserve a role in public life.

Science, and psychology in particular, studies all we need to know to live and be happy; philosophy and religion are based on muddle-headed superstition.

The self-esteem and sensitivities of young people are precious and must be protected from the buffets that life threatens to give them.

Even today, some of these ideas might sound ridiculous to some of us. But if you've been paying attention, you can't deny that these once-radical ideas have become increasingly mainstream.

III. The radical ideas that might guide our future

The desperate state of education today is predictable, given former trends and earnestly-expressed convictions. It was called scaremongering to say that these ideas were hacking away the roots of Western civilization—and yet they did. So one wonders: What can we predict about the future, based on ideas now growing in popularity, ideas that it is quite reasonable to believe will guide the education and enculturation of the next generation?

Here are some controversial ideas that are in vogue at universities today:

Free speech is a dangerous idea, and it certainly doesn't include hate speech and harmful speech.

What determines whether speech is harmful is whether it causes its listeners to react with emotional pain.

But we can disregard the pain of "privileged" people—"male tears," "white tears," and all that.

Those who are really plugged in know that books aren't really what's important. Know-how is what's important. You can just look up things online that you need to know.

Popular culture is worth careful academic study, at least as much as "the classics" or "high culture."

Higher education isn't important, except in some fields and as a credential for becoming a corporate drone.

Grave inequalities persist, and our very civilization is racist. We ought to tear down and malign all the productions of white men.

White society, and white people (whether they know it or not), are all racist, and all men (whether they know it or not) perpetuate a sexist patriarchy.

Religion isn't just irrational and wrong, it's evil, and we should take steps to stamp it out and perhaps prohibit it.

Reproducing does great harm to the world. Life is an evil. Babies are not to be celebrated. We should stop having them.

All of these ideas have plenty of adherents on campus today. They might well shape the next generation. If so, what might our brave new world look like? Let's listen in to the monologue from a typical, center-left future student, shall we?

"It's 2047. The way some people talk, you'd think it was, I don't know, 2017 or something. Check this out. I heard someone, and I don't care if she was a black woman, actually citing the Bible in class? That triggered a lot of people, and she was kicked out, of course. I doubt they'll let her back in. It just goes to show you how many people still believe that superstitious bullshit, even though it's revolting hate speech. But you know what, I was kind of impressed about what she was reading, before I realized what she was reading. It sounded like Old English. Who reads crap like that these days? Well, I guess she can. But it's still bullshit. You don't have to be able to read it to know that.

"It's not just superstitious bullshit, it's totally irrelevant. Books are so lame! My favorite professors don't teach books, they teach modern media. When I started this major, I swear, I had no idea pop music and movies were so deep. Seriously! So why do we require students to read so many books at all? Last year I was required to read three books for required Communications courses. Everyone knows that books aren't really what's important; knowledge is free for the taking online. Everything's there, instantly! Besides, the most influential thoughts of the last forty years are all in the form of briefer texts online. I'm thinking I might want to drop out. Half of my friends didn't even go to college and are just being trained by their employers. But you know, I think those tend to be the more conservative people, you know? So...

"Anyway, at the very least, it's time to stop requiring that we read any books written before 1970, or maybe 2000, especially if they were written by white men. I mean, of course white people and men are still welcome at our universities, it is perfectly fair that they wait their turn in classroom discussions. I hate it when some white man just starts talking first. You can hear some people hissing when they do. After all, everyone knows that less privileged people have more valid and relevant perspectives, and hearing white people and men—and on some issues, let's face it, hearing ignorant, insensitive white men at all—causes the marginalized great pain. We can't forget that white Western civilization persists even today, despite our best efforts. We renamed the state of Washington, but not the capital of our country—it continues to be named after the very embodiment of a white, slave-owning, breeding patriarch! That pisses me off so much!

"And speaking of breeders...don't get me started on the breeders. We had to fight tooth and nail against the misogynist, patriarchal society just to make it possible to license parents. But now we're allowing almost everyone to be licensed. What's the point? Surely we've got to prevent so many people from breeding. We don't let just anyone drive, right? We need to start imposing some restrictions. I know it's a little simplistic, but sometimes, simple is the best way: we could just, for a while, restrict the number of children white people could have. I know it sounds shocking, but look—everybody knows they use the most resources, they're the most racist, they create the most inequality. And they're still a plurality in this country. So it's really a no-brainer. It's 2047!"

Maybe that sounds over-the-top. But that's the point. There are cutting-edge activist types who would find all of this commendable, or at least very plausible. And just think: the cutting-edge ideas of 1987, which would have sounded totally bizarre and radical back then, are totally up-to-date today, in 2017. I'm similarly extrapolating from the "cutting-edge" ideas of today on the same topics to how those ideas might evolve in another 30 years.

Also, of course, it could get much worse. Illiberal societies have been much worse at different times and places in history.

Am I predicting that the monologue is what awaits us? No, my crystal ball isn't that accurate and history never unfolds smoothly or predictably. What I'm saying is that it's a natural extrapolation from ideas about education and culture today. Is that what we want? If not, then what kind of thought world are we trying to build?


On the Purposes of the Internet

SISCTI 34
February 28, 2009
Monterrey, Mexico

Introduction

I am going to begin by asking a philosophical question about the Internet. But I can hear some of you saying, “Philosophy? What does that have to do with the Internet? Maybe I will have a siesta.” Well, before you close your eyes, let me assure you that the question is deeply important to some recent debates about the future of the Internet.

The question is: what is the purpose of the Internet? What is the Internet good for? Perhaps you had never thought that something as vast and diverse as the Internet might have a single purpose. In fact, I am going to argue that it has at least two main purposes.

To begin with, think about what the Internet is: a giant global information network. To ask what the Internet is for is about the same as asking what makes information valuable to us, and what basic reasons there might be for networking computers and their information together.


The two purposes of the Internet: communication and information

I think the Internet has at least two main purposes: first, communication and socialization, and second, finding the information we need in order to learn and to live our daily lives. In short, the Internet is for both communication and information.

Let me explain this in a simple way. On the one hand, we use the Internet for e-mail, for online forum discussions, for putting our personalities out there on social networking sites, and for sharing our personal creativity. These are all ways we have of communicating and socializing with others.

On the other hand, we are constantly looking things up on the Internet. We might check a news website, look up the meaning of a word in an online dictionary, or do some background reading on a topic in Wikipedia. These are all ways of finding information.

I want to explain an important difference between communication and information. Communication is, we might say, creator-oriented. It’s all about you, your personal needs and circumstances, and your need for engagement and recognition. So communication is essentially about the people who are doing the communicating. If we have no interest in some people, we probably have no interest in their communications. This is why, for example, I have zero interest in most MySpace pages. Almost nobody I know uses MySpace. MySpace is mainly about communication and socialization, and since I’m not actually communicating or socializing with anybody on that website, I don’t care about it.

Information, on the other hand, is not about the person giving the information but about the contents of the information. In a certain way, it really does not matter who gives the information; all that matters is that the information is valid and is of interest to me. And the same information might be just as interesting to another person. So, we might say, communication is essentially personal, and information is essentially impersonal.

I say, then, that the Internet’s purposes are communication and information. In fact, the Internet has famously revolutionized both.

The Internet is addictive largely because it gives us so many more people to talk to, and we can talk to them so efficiently. It allows us to compare our opinions with others’, to get feedback about our own thinking and creative work. In some ways, the Internet does this more efficiently than face-to-face conversation. If we are interested in a specific topic, we do not need to find a friend or a colleague who is interested in the topic; we just join a group online that has huge numbers of people already interested, and ready to talk about the topic endlessly.

Online discussions of serious topics are often a simplistic review of research, with a lot of confused amateur speculation thrown in. We could, if we wanted to, simply read the research—go to the source material. But often we don't. We often prefer to debate about our own opinions, even when we have the modesty to admit that our opinions aren't worth very much. Many people prefer discussion: active engagement over passive absorption. Who can blame them? You can't talk back to a scientific paper, and a scientific paper can't respond intelligently to your own thoughts. The testing or evaluation of our own beliefs is ultimately what interests us, and this is what we human beings use conversation to do.

But the Internet is also wonderfully efficient at delivering impersonal information. Search engines like Google make information findable with an efficiency we have never seen before. You can now get fairly trustworthy answers to trivial factual questions in seconds. With a little more time and skilled digging, you can get at least plausible answers to many more complex questions online. The Internet has become one of the greatest tools for both research and education that has ever been devised by human beings.

So far I doubt I have told you anything you didn’t already know. But I am not here to say how great the Internet is. I wanted simply to illustrate that the Internet does have these two purposes, and that the purposes are different—they are distinguishable.

How the Internet confuses communication and information

Next, let me introduce a certain problem. It might sound at first like a purely conceptual, abstract, philosophical problem, but let me assure you that it is actually a practical problem.

The problem is that, as purposes, communication and information are inherently confusable. They are very easy to mix up. In fact, I am sure some of you were confused earlier, when I was saying that there are these two purposes, communication and information. Aren’t those just the same thing, or two aspects of the same thing? After all, when people record information, they obviously intend to communicate something to other people. And when people communicate, they must convey some information. So information and communication go hand-in-hand.

Well, that is true, they do. But that doesn’t mean that one can’t draw a useful distinction fairly clearly. Here’s a way to think about the distinction. In 1950, a researcher looking for information would walk into a library and read volumes of it. If he wanted to communicate with someone, he might walk up to a librarian and ask a question. These actions—reading and talking—were very different. Information was something formal, edited, static, and contained in books. Communication was informal, unmediated, dynamic, and occurred in face-to-face conversation.

Still, I have to agree that communication and information are indeed very easy to confuse. And the Internet in particular confuses them deeply. What gives rise to the confusion is this. On the Internet, if you have a conversation, your communication becomes information for others. It is often saved indefinitely, and made searchable, so that others can benefit from it. What was for you a personal transaction becomes, for others, an information resource. This happens on mailing lists and Web forums. I myself have searched through the public archives of some mailing lists for answers to very specialized questions. I was using other people’s discussions as an information resource. So, should we say that a mailing list archive is communication, or is it information? Well, it is both.

This illustrates how the Internet confuses communication and information, but many other examples can be given. The Blogosphere has confused journalism, which used to be strictly an information function, with sharing with friends, which is a communication function. When you write a commentary about the news, or when you report about something you saw at a conference, you’re behaving like a journalist. You invite anyone and everyone to benefit from your news and opinion. Perhaps you don’t initially care who your readers are. But when you write about other blog posts, other people write about yours, and you invite comments on your blog, you’re communicating. Personalities then begin to matter, and who is talking can become more important to us than what is said. Information, as it were, begins to take a back seat.

Moreover, when news websites allow commenting on stories, this transforms what was once a relatively impersonal information resource into a lively discussion, full of colorful personalities. And, of course, online newspapers have added blogs of their own. I have often wondered whether there is a meaningful difference between a newspaper story, a blog by a journalist, and a well-written blog written by a non-journalist. That precisely illustrates what I mean. The Internet breaks down the distinction between information and communication—in this case, the distinction between journalism and conversation.

Why is the distinction between communication and information important?

I’ll explore more examples later, but now I want to return to my main argument. I say that the communication and information purposes of the Internet have become mixed up.

But—you might wonder—why is it so important that we distinguish communication and information, and treat them differently, as I’m suggesting? Is having a conversation about free trade, for example, really all that different from reading a news article online about free trade? To anyone who writes about the topic online, they certainly feel similar. The journalist seems like just another participant in a big conversation, and you are receiving his communication, and you could reply online if you wanted to.

I think the difference between information and communication is important because they have different purposes and therefore different standards of value. When we communicate, we want to interface with other living, active minds and dynamic personalities. The aim of communication, whatever else we might say about it, is genuine, beneficial engagement with other human beings. Communication in this sense is essential to such good things as socialization, friendship, romance, and business. That, of course, is why it is so popular.

Consider this: successful communication doesn’t have to be particularly informative. I can just use a smiley face or say “I totally agree!” and I might have added something to a conversation. By contrast, finding good information does not mean a significant communication between individuals has taken place. When we seek information, we are not trying to build a relationship. Rather, we want knowledge. The aim of information-seeking is reliable, relevant knowledge. This is associated with learning, scholarship, and simply keeping up with the latest developments in the news or in your field.

Good communication is very different from good information. Online communication is free and easy. There are rarely any editors checking every word you write before you post it. That is not necessary, because these websites are not about creating information; they are about friendly, or at least interesting, communication. No editors are needed for that.

These communities, and blogs, and much else online, produce a huge amount of searchable content. But a lot of this content isn’t very useful as information. Indeed, it is very popular to complain about the low quality of information on the Internet. The Internet is full of junk, we say. But to say that the Internet is full of junk is to say that most conversations are completely useless to most other people. That’s obviously true, but it is irrelevant. Those who complain that the Internet is full of junk are ignoring the fact that the purpose of the Internet is as much communication as it is information.

Personally, I have no objection whatsoever to the communicative function of the Internet. In fact, it is one of my favorite things about the Internet. I have had fascinating conversations with people from around the world, made online friendships, and cultivated interests I share with others, and I could not possibly have done all this without the communicative medium that is the Internet.

But, as I will argue next, in making communication so convenient, we have made the Internet much less convenient as an information resource.

Communicative signal is informational noise

You are probably familiar with how the concept of the signal-to-noise ratio has been used to talk about the quality of online information and communication. A clear radio transmission is one that has high signal and low noise. Well, I’d like to propose that the Internet’s two purposes are like two signals: the communication signal and the information signal. The problem is that the two signals are sharing the same channel. So I now come to perhaps the most important point of this paper, which I will sum up in a slogan: communicative signal is informational noise. That is at least often the case.

Let me explain. The Internet’s two purposes are not merely confusable. In fact, we might say that the communicative function of the Internet has deeply changed and interfered with the informative function of the Internet. The Internet has become so vigorously communicative that it has become more difficult to get reliable and relevant information on the Internet.

I must admit that this claim is still very vague, and it might seem implausible, so let me clarify and support the claim further.

The basic idea is that what works well as communication does not work so well as information. What might seem to be weird and frustrating as information starts to make perfect sense when we think of it as communication.

Let me take a few examples—to begin with, Digg.com. In case you’re not familiar with it, it’s a website in which people submit links for everyone else in the community to rate by a simple “thumbs up” or “thumbs down.” This description makes it look like a straightforward information resource: here are Internet pages that many people find interesting, useful, amusing, or whatever. Anyone can create an account, and all votes are worth the same. It’s the wisdom of the crowd at work. That, I assume, is the methodology behind the website.

But only the most naïve would actually say that the news item that gets the most “Diggs” is the most important, most interesting, or most worthwhile. Being at the top of Digg.com means only one thing: popularity among Digg participants. I am sure most Digg users know that the front page of Digg.com is little more than the outcome of an elaborate game. It can be interesting, to be sure. But the point is that Digg is essentially a tool for communication and socialization masquerading as an information resource.

YouTube is another example. On its face, it looks like a broadcast medium. Because it allows anyone to have an account, carefully records the number of video views, and gives everyone an equal vote, it looks as though the wisdom of the crowd is being harnessed. But the fact of the matter is that YouTube is mainly a communication medium. Its ratings represent little more than popularity, or the ability to play the YouTube game. When people make their own videos (as opposed to copying stuff from DVDs), they’re frequently conversational videos. They are trying to provoke thought, or get a laugh, or earn praise for their latest song. They want others to respond, and others do respond, by watching videos, rating videos, and leaving comments. I suspect that YouTube contributors are not interested, first and foremost, in building a useful resource for the world in general. They are glad, I am sure, that they are doing that too. But what YouTube contributors want above all is to be highly watched and highly rated, and, in short, a success within the YouTube community. This is evidence that they have been heard and understood—in short, that they have communicated successfully.

I could add examples, but I think you probably already believe that most of the best-known Web 2.0 websites are set up as media of communication and socialization—not primarily as impersonal information sources.

But what about Wikipedia and Google Search? These are two of the most-used websites online, and they seem to be more strictly information resources.

Well, yes and no. Even Wikipedia breaks down the difference between a communication medium and an information resource. There has been a debate, going back to the very first year of Wikipedia, about whether Wikipedia is first and foremost a content-production project or a community. You might want to say that it is both, of course. That is true, but the relevant question is whether Wikipedia’s requirements as a community are actually more or less important than its requirements as a project. For example, one might look at many Wikipedia articles and say, “These badly need the attention of a professional editor.” One might look at Wikipedia’s many libel scandals and say, “This community needs real people, not anonymous administrators, to take responsibility so that rules can be enforced.” Wikipedia’s answer to that is to say, “We are all editors. No expert or professional is going to be given any special rights. That is the nature of our community, and we are not going to change it.” The needs of Wikipedia’s community outweigh the common-sense requirements of Wikipedia as an information resource.

Please don’t misunderstand. I am not saying that Wikipedia is useless as an information resource. Of course it is extremely useful as an information resource. I am also not saying that it is merely a medium of collaborative communication. It clearly is very informational, and it is intended to be, as well.

Indeed, most users treat Wikipedia first and foremost as an information resource. But, and this is my point, for the Wikipedians themselves, it is much more than that: it is their collaborative communication, which has become extremely personal for them, and this is communication they care passionately about. The personal requirements of the Wikipedians have dampened much of the support for policy changes that would make Wikipedia much more valuable as an information resource.

Why do we settle for so much informational noise?

Let me step back and try to understand what is going on here. I say that Web 2.0 communities masquerade as information resources, but they are really little more than tools for communication and socialization. Or, in the case of Wikipedia, the community’s requirements overrule common-sense informational requirements. So, why do we allow this to happen?

Well, that’s very simple. People deeply enjoy and appreciate the fact that they can share their thoughts and productions without the intermediation of editors or anything else that might make their resources more useful as information resources. And why is it so important to so many people that there be no editors? Because editors are irrelevant and get in the way of communication.

The fact that Web 2.0 communities are set up for communication, more than as information resources, explains why they have adopted a certain set of policies. Consider some policies that Wikipedia, YouTube, MySpace, and the many smaller Web 2.0 websites have in common.

First, on these websites, anyone can participate anonymously. Not only that, but you can make as many accounts as you want. Second, when submissions are rated, anyone can vote, and votes are (at least initially, and in many systems always) counted equally. Third, if there is any authority or special rights in the system, it is always internally determined. Your authority to do something or other never depends on some external credentials or qualification. University degrees, for example, are worth nothing on YouTube.

The result is that, on a website like Wikipedia, a person is associated with one or more accounts, and the performance of the accounts against all other accounts is all that the system really cares about.

To Internet community participants, this seems very rational. A person is judged based on his words and creations alone, and on his behavior within the system. This seems meritocratic. People also sometimes persuade themselves, based on a misinterpretation of James Surowiecki’s book The Wisdom of Crowds, that ratings are an excellent indicator of quality.

But these systems are not especially meritocratic. It is not quality, but instead popularity and the ability to game the system that wins success in Web 2.0 communities. High ratings and high watch counts are obviously not excellent indicators of quality, for the simple reason that so much garbage rises to the top. There is no mystery why there is so much time-wasting content on the front page of YouTube, Digg.com, and many of the rest: it’s because the content is amusing, titillating, or outrageous. Being amusing, titillating, and outrageous is not a standard of good information, but it can be a sign of successful communication.

The less naïve participants, and of course the owners of these websites, know that Internet community ratings are largely a popularity contest or measure the ability to play the game. They don’t especially care that the websites do not highlight or highly rank the most important, relevant, or reliable information. The reason for this is perfectly clear: the purpose of these websites is, first and foremost, communication, socialization, and community-building. Building an information resource is just a very attractive side-benefit, but still only a side-benefit, of the main event of playing the game.

The attraction, in fact, is very similar to that of American Idol—I understand you have something similar called “Latin American Idol,” is that correct? Well, I have been known to watch American Idol. It is a television competition in which ordinary people compete to become the next Idol, who earns a record contract, not to mention the attention of tens of millions of television viewers. The singing on American Idol, especially in the early weeks, is often quite bad. But that is part of its entertainment value. We do not watch the program to be entertained with great singing—that is, of course, nice when it happens. Instead, we watch the program mainly because the drama of the competition is fascinating. Even though the quality of the singing is supposed to be what the program is about, in fact quality is secondary. The program’s attraction stems from the human element—from the fact that real people are putting themselves in front of a mass audience, and the audience can respond by voting for their favorites. The whole game is quite addictive, in a way not unlike the way Internet communities are addictive.

But let’s get back to the Internet. I want to suggest that the information resource most used online, Google Search itself, is also a popularity contest. Google’s PageRank technology is reputed to be very complex, and its details are secret. But the baseline methodology is well-known: Google ranks a web page more highly if it is linked to by other pages, which are themselves linked to by popular pages, and so forth. The assumption behind this ranking algorithm is somewhat plausible: the more that popular websites link to a given website, the more relevant and high-quality the website probably is. The fact that Google is as useful and dominant as it is shows that there is some validity to this assumption.
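To make that baseline idea a bit more concrete, here is a minimal sketch, in Python, of the simplified “link popularity” calculation as it is publicly described in textbook treatments of PageRank: each page’s score is repeatedly redistributed to the pages it links to, so that links from highly ranked pages count for more. The function name, the toy link graph, the damping factor, and the iteration count are all illustrative assumptions on my part; Google’s actual ranking system is secret and far more elaborate.

    # A minimal sketch of the ranking idea described above: a page scores
    # highly when it is linked to by pages that themselves score highly.
    # Illustrative only; not Google's actual code or parameters.
    def pagerank(links, damping=0.85, iterations=50):
        """links: dict mapping each page to the list of pages it links to."""
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}  # start every page with an equal score
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / n for p in pages}
            for page, outlinks in links.items():
                if not outlinks:  # a page with no outgoing links spreads its weight evenly
                    share = damping * rank[page] / n
                    for p in pages:
                        new_rank[p] += share
                else:  # otherwise it passes its weight along its links
                    share = damping * rank[page] / len(outlinks)
                    for target in outlinks:
                        new_rank[target] += share
            rank = new_rank
        return rank

    # Toy web of four pages; "a" is linked to by the most pages and ends up ranked highest.
    toy_web = {"a": ["b"], "b": ["a"], "c": ["a", "b"], "d": ["a", "c"]}
    print(sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]))

The point of the sketch is only this: nothing in the calculation looks at what the pages actually say. Relevance and reliability enter, if at all, only indirectly, through whoever chose to link.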

All that admitted, I want to make a simple point. Google Search is essentially a popularity contest, and frequently, the best and most relevant page is not even close to being a popular page. That is a straightforward failure. But just as annoying, perhaps, is the prevalence of false positives. I mean the pages that rank not because they are relevant or high-quality, but because they are popular or (even worse) because someone knows how to game the Google system.

Does this sound familiar? It should. I do not claim that Google is a medium of communication. Clearly, it is an information resource. But I want to point out that Google follows the same policies of anonymity and egalitarianism, with merit determined internally, through link data and algorithms that machines can process. As far as we know, Google does not seed its rankings with data from experts. Its data is rarely edited at all. Google dutifully spiders all content without any prejudice of any sort, applies its algorithm, and delivers the results to us very efficiently.

I speculate—I can only speculate here—that Google does not edit its results much, for two reasons. First, I am sure that Google is deeply devoted to the same values that favor a fair playing field for the communication games that many Web 2.0 websites play. But, you might say, this is a little puzzling. Why doesn’t Google seek out ways to include the services of editors and experts, and improve its results? An even better idea, actually, would be to allow everyone to rate whatever websites they want, then publish their web ratings according to a standard syndication format, and then Google might use ratings from millions of people creatively to seed its results. In fairness to Google, it may do just this with the Google SearchWiki, which was launched last November. But as far as I know, SearchWiki does not aggregate search results; each individual can edit only the results that are displayed to that user.
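To make that proposal slightly more concrete, here is a small, purely hypothetical sketch of what aggregating syndicated ratings might look like. Everything in it is an assumption of mine for illustration: the shape of a ratings feed, the 0-to-5 scale, the blending weight, and the function names. It does not describe SearchWiki or any real Google feature.

    # Hypothetical sketch: individuals publish ratings of URLs in some
    # standard feed format, an aggregator averages them, and the averages
    # could then be used to nudge an existing relevance score.
    from collections import defaultdict

    def aggregate_ratings(feeds):
        """feeds: list of feeds, each a list of (url, rating) pairs on a 0-5 scale."""
        totals = defaultdict(float)
        counts = defaultdict(int)
        for feed in feeds:
            for url, rating in feed:
                totals[url] += rating
                counts[url] += 1
        return {url: totals[url] / counts[url] for url in totals}

    def reseed(base_scores, avg_ratings, weight=0.3):
        """Blend an existing relevance score with the aggregated human ratings."""
        return {url: (1 - weight) * score + weight * avg_ratings.get(url, 0) / 5.0
                for url, score in base_scores.items()}

    # Two people publish ratings; the aggregate nudges example.org upward.
    feeds = [[("https://example.org", 5), ("https://example.com", 2)],
             [("https://example.org", 4)]]
    print(reseed({"https://example.org": 0.4, "https://example.com": 0.6},
                 aggregate_ratings(feeds)))

The sketch is only meant to show that the proposal is technically simple; whether it would actually improve results is another question.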

So there is, I think, a second and more obvious reason that Google does not adjust its results with the help of editors or by aggregating syndicated ratings. Namely, its current, apparently impersonal search algorithm seems fair, and it is easy to sell it as fair. However much Google might be criticized because its results are not always the best, or because the results are gamable or influenced by blogs, at least it has the reputation of indeed being mostly fair, largely because PageRank is determined by features internal to the Internet itself—in other words, link data.

Google’s reputation for fairness is one of its most important assets. But why is such a reputation so important? Here I can finally return to the thread of my argument. Fairness is important to us because we want communication to be fair. In a certain way, the entire Internet is a communicative game. Eyeballs are the prize, and Google plays a sort of moderator or referee of the game. If that’s right, then we certainly want the referee to be fair, not to prefer one website over another simply because, for example, some expert happens to say the one is better. When it comes to conversations, fairness means equal consideration, equal time, an equal shot at impressing everyone in the room, so to speak. Communication per se is not the sort of thing over which editors should have any control, except sometimes to keep people polite.

The fact that Google has an impersonal search algorithm really means that it conceives of itself as a fair moderator of communication, not as a careful chooser of relevant, reliable content. And a lot of people are perfectly happy with this state of affairs.

Conclusion

In this paper I have developed an argument, and I hope I haven’t taken too long to explain it. I have argued that the Internet is devoted both to communication and information. I went on to say that communication and information are easily confused, and the Internet makes it even easier to confuse them, since what serves as mere communication for one person can be viewed later as useful information for another person. But what makes matters difficult is that we expect communication, and the websites that support online communication, to be as unconstrained and egalitarian as possible. As a result, however, the Internet serves rather well as a communication medium, as a means to socialize and build communities, but not nearly as well as an information resource.

I can imagine a reply to this, which would say: this is all a good thing. Information is about control. Communication is about freedom. Viva communication! Should our alleged betters—professors, top-ranked journalists, research foundations, and the like—enjoy more control over what we all see online, than the average person? The fact is that in the past, they have enjoyed such control. But the egalitarian policies of the Internet have largely removed their control. In the past, what those experts and editors have happened to say enjoyed a sort of status as impersonal information. But all information is personal. The Internet merely recognizes this fact when it treats allegedly impersonal information as personal communication.

This is the common analysis. But I think it is completely wrong.[1] First, the elites still exert control in many ways, and there is little reason to think the Internet will change this. Second, the radical egalitarianism of Internet policies does not disempower the elites so much as it disempowers intelligence, and empowers those with the time on their hands to create and enjoy popular opinion, and also those who care enough to game the system.

If more people emphasized the informative purpose of the Internet, this would not empower elites; it would, rather, empower everyone who uses the Internet to learn and do research. We would have to spend less time sorting through the by-products of online communication, and could spend more time getting solid knowledge.

In fact, I think most people enjoy the Internet greatly as an information resource—at least as much as they enjoy it as a communication medium. But most of the people who create websites and Internet standards—the many people responsible for today’s Internet—have not had this distinction in mind. Still, I think it is a very fruitful and interesting way to think about the Internet and its purposes, and—who knows?—perhaps it will inspire someone to think about how to improve the informational features of the Internet.

In fact, if my fondest hope for this paper were to come true, it would be that those building the Internet would begin to think of it a little bit more as a serious information resource, and a little bit less as just a fun medium of communication.

[1] As I have argued in a recent paper: “The Future of Expertise after Wikipedia,” Episteme (2009).


Why study higher mathematics and other stuff most people don't use in everyday life?

This video was posted in a Facebook group of mine here:

I find it ironic that some of the most listened-to speakers about education explain that the cure to our educational ills is to point out that education is unnecessary. I call this educational anti-intellectualism. Here's another representative sample and another.

It is possible to make the argument, "X isn't going to be necessary for most students in life, therefore X should not be taught," for almost everything that is taught beyond the sixth grade or so. After that, we should be taught "critical thinking" and vague "analytical abilities" and "reading comprehension" and other such claptrap; that seems to be the natural consequence of this commentator's thinking, and sadly, he is not alone.

The fact that educated people like this teacher, and all the people who approve of this stuff, cannot answer the question is very disappointing. It's not surprising, perhaps, because it's philosophy and philosophy is very hard. Moreover, there are a variety of sort-of-right answers that subtly get things wrong and might end up doing more damage than good.

In the latter category I might want to place E.D. Hirsch, Jr., one of the most prominent education traditionalists alive. (He just published a book I got today called Why Knowledge Matters, and he might have updated his views on this; I'll find out soon.) Hirsch's argument is that we ought to learn classics and, essentially, get a liberal arts education, because this is the knowledge we use to interact with other educated adults in our culture. It is "cultural literacy" and "cultural capital" and this is something we desperately need to thrive as individuals and as a civilization.

That's all true, I think. If Hirsch made the argument as, essentially, a defense of Western (or just advanced) civilization—that we need to educate people in Western civilization if we are to perpetuate it—then I'd be fully on board. But Hirsch, as I understand him, appeals particularly to our individual desire to be a part of the elite, to get ahead, to be able to lord it over our less-educated citizens. This is a very bad argument that won't convince many people. If Hirsch or anyone makes it, I would put it in the category of arguing for the right conclusion for the wrong reason.

The argument I'd give to this math teacher is the same I'd give to someone who says we shouldn't memorize history facts or read boring, classic literature or learn the details of science or what have you. Of course you don't need that stuff to get through life. Most people are as dumb as a box of rocks when it comes to academic stuff (yes, in all countries; some are worse than others).

The reason you get an education, and study stuff like higher math, is more along the following lines. Education trains the mind and thereby liberates us from natural prejudice and stupidity. This is the proper work for human beings because we are rational creatures. We are honing the tool that comes more naturally to us than to any other animal. One must realize, as people like this educated fool and so many others seem not to, that education, such as math education, is not merely a tool in the sense of "abilities." The content, or what is known, is a deeply important part of the tool; in fact, as Hirsch does argue correctly and convincingly, any "analytical abilities" brought to a text will be very poor without relevant subject knowledge. If you want an analogy, it is a poor argument to say that a course in logic sharpens your wits, that you want sharp wits, and that therefore all you need to study is "critical thinking"; the heft or substance of your wit's ax is all the rest of the knowledge behind the cutting edge. Getting an A in a logic class (a course I taught many times) without knowledge of math, science, history, literature, etc., gives you about as much heft and effectiveness as a sharp-edged piece of paper: capable of paper-cuts.

The core of the argument for knowledge is that academic knowledge forms a sort of deeply interconnected system, and the more deeply and broadly that we understand this system, the more capable we are in every bit of life. This is true of us as individuals and also as a society or civilization. It is completely and literally true that the fantastic structure of modern civilization as we know it, all of the historically unprecedented developments we have seen, is a direct outgrowth of the deep commitment of our society's leaders—since the Enlightenment—to education in this system.

The system I refer to is deeply connected, but that doesn't mean it isn't also loosely connected in the sense that one can learn bits here and there and benefit somewhat. That's absolutely true. This is why it's possible for the math teacher to say, "Well, you don't really need to know higher math in order to live life." Some people are geniuses about literature but don't remember anything about any math they learned beyond the sixth grade.

But as everybody with higher education knows, it is in fact absolutely necessary to learn higher math if you are going to learn higher science—the hard sciences and the social sciences alike, both of which require heavy calculation—and to deal intelligently with statistics and probabilities, as is necessary in politics, in the financial side of business, in much of programming, and so on.

This is because the "deep structure" of reality is mathematical. To declare that "you don't really need to know it" is to declare that you don't need to know the deep structure of reality. Sure, of course you don't. The birds of the air and the fish of the sea don't. But do you want our children to be more like them or more like fully rational, aware, human creatures?