On educational anti-intellectualism: a reply to Steve Wheeler

Suppose a student arrived at the age of 18 not knowing anything significant about World War II or almost any other war, barely able to do arithmetic, ignorant of Sophocles, Shakespeare, Dickens, and most other great writers, and wholly unschooled in the hard sciences (apart from some experiments and projects which made a few random facts stick).  Now, we can charitably concede that such a person could know his way around a computer, the Internet, and other technology very well.  He might have any number of vocational skills and have a job.  We can also imagine that such a person even writes and speaks reasonably well (although this seems unlikely).  Finally, we can imagine such a person being happy with himself and his "education."  This is all easy to imagine, because such students are being minted with appalling frequency these days in the U.S. and (to a lesser extent) the U.K.

Let us try to put aside our differences about educational philosophy for a moment; surely we can agree that, objectively speaking, this student is ignorant. He lacks an adequate amount of--to employ some jargon used by epistemologists, and by Steve Wheeler in a recent blog post that I responded to--"declarative knowledge."

So next, let's suppose that an education professor (whether this corresponds to Wheeler remains to be discussed) were to maintain that (1) our schools should be teaching even less declarative knowledge than they have been, (2) such traditional subjects as literature, history, geography, science, and French had become unimportant, or at least much less important, particularly now that Google supplies instant answers, and (3) we should not teach individual subjects such as those just listed, but instead mix various subjects together in projects that display how holistic and interrelated the world is.  Now, whatever else he might believe or say, it is reasonable to conclude that these recommendations, if followed by schools, would contribute to ignorance of the sort described above.

Now, I do not claim to have an interesting theory of anti-intellectualism.  But I do think that we can identify a theorist as anti-intellectual if his theories, when implemented on a large scale, would obviously and directly lead to widespread ignorance.  This isn't a definition; it's merely a sufficient condition.  (Forgive me for not refining this formula further, but I think it will do well enough.)  I could say more plainly that such a theorist supports ignorance over knowledge, but of course most people will deny supporting that.  So--to use some other philosophical jargon--I only ascribe the view to him de re, not de dicto.

This is not necessarily "anti-intellectual" in some more derivative senses, which have a lot of play in the media today.  For example, an anti-intellectual according to my test might also be an academic and staunchly in support of universities and academic work; he might support a technocratic government of experts; he might support science against faith-based criticisms.  But these are, I maintain, derivative senses of "anti-intellectual," because universities, experts, and science are each bastions of knowledge. Knowledge is the main thing.  So in a more basic sense, to be intellectual is to be a devoted adherent of knowledge, and particularly of abstract or general knowledge.  I don't intend this as a theory of anti-intellectualism, but as a general, rough sketch.

Someone who recommends (or whose theories entail) that students should gain much less knowledge than they otherwise would seems to me a better example of an anti-intellectual than, say, a creationist or a climate change denier.  This is because the ignorance permitted is not limited to a particular topic, but is thoroughgoing--and deliberate.  The (perhaps fictional) education professor I described earlier is opposed to students getting more declarative knowledge, per se, than they get right now.  Whatever their problems, you can't say that of the creationist or the climate change denier; at worst, their positions make them hostile to particular examples of knowledge, not to knowledge per se. Which do you think is worse?

In his recent post, Steve Wheeler defends himself against my charge of "anti-intellectualism."  Now, I hope it's very clear that my posts are not only about Steve Wheeler.  He's just one example of a whole class of education theorists.  He has merely stated the position of educational anti-intellectualism with admirable clarity and brevity, making it especially easy for me to identify and dissect the phenomenon.  Wheeler cites another Brit, Sir Ken Robinson, as someone who shares his views.  I'm sure he will not be surprised to learn that I have, in fact, responded similarly to Robinson (though I forbore to apply the label "anti-intellectual" in that case--I came close).  I also responded to another theorist Wheeler mentioned, John Seely Brown, in this paper.

In his defense, Wheeler archly, with great irony, claims to be "gratified that someone with such a standing in the academic community had taken the time to read my post and respond so comprehensively" and "My list of peer reviewed publications and the frequency of my invited speeches around the world will not compare with his."  In case you have any doubt, let's just say that I am pretty sure Prof. Wheeler took the time to look at my site and gauge my meager academic and speaking credentials.  That would be the first thing that most academics would do.  So of course Wheeler knows that, in fact, I don't have much standing in the academic community at all; I have very few peer reviewed publications, and my speeches, most of which were not for an academic audience, are not as "frequent" as his.  He has me hopelessly outclassed in these areas, and he knows it.  He's the academic and the intellectual, and I'm the outsider--or so he seems to convey.

But his deliberate irony backfires, I find.  It is very easy for a distinguished academic, like Wheeler, to be hostile to knowledge, or science, or reason, or the prerogatives of experts.  Otherwise perfectly "intellectual" people have been justly called "anti-intellectual" because of their hostility to the products, power, or institutions of the mind.  "Anti-intellectual intellectual" is no more a contradiction than "anti-Semitic Jew" or "anti-American American."  So this defense is incorrect: "It seems a contradiction that he can view me as a 'serious theorist' and then spend the majority of his post trying to convince his readers that I am 'anti-intellectual'.  Surely the two cannot be compatible?"  Surely they can--and in our twisted and ironic age, all too often are.  So, while I have respect for Wheeler's work, it doesn't defend him from charges of anti-intellectualism.  He would conscientiously, on principle, deny our students just the sort of knowledge that he benefited from in his life and career--though he questions whether he needed it later in life, says that his schooling "didn't make that much sense to me," and doubts the worth of various subjects and facts that a liberally educated person, such as he himself, might pick up.

No, pointing out that he is a distinguished academic won't shield Wheeler from accusations of anti-intellectualism.  Only a frontal reply to my argument would do that.  Does his recent post contain such a reply?

Not exactly.  I am not going to do another line-by-line reply, as tempting as that might be.  He does deny that he wants to remove "all knowledge...from curricula."  I didn't think so, and my argument doesn't attack such a straw man.

In place of the relatively clear attack on "declarative knowledge," Wheeler's more cautious restatement resorts to a vague, contentless call for reform:

In my post I suggested that a possible way forward would require a reappraisal of the current curricula, with more emphasis on competencies and literacies. I wish to make something clear: My remark that some knowledge was susceptible to obsolescence was not a call for all knowledge to be removed from curricula - that would indeed be ridiculous. I am not attacking knowledge, as Sanger asserts. Rather, I am calling for schools to re-examine the content of curricula and to find ways to situate this knowledge within more open, relevant and dynamic learning contexts. I am also calling for more of an emphasis on the development of skills that will prepare children to cope better in uncertain futures.

He doesn't give many details here or later, nor does he really retract anything in particular from his earlier post.  He does regret using "poor illustrations and analogies to underpin this call," but only because it created a rhetorical opening for me.  As I see it, he wants us to believe that he was merely calling for schools to add a little more discussion and reflection to an otherwise really hardcore "facts-only" curriculum.

But it would be frankly ridiculous to characterize the American educational system, at least, this way.  Many teachers here are already deeply committed to the project method and skills education.  Students can get through an entire 13 years without reading many classics at all.  Indeed, just re-read the first paragraph of this post.  That (at least the first part) describes a lot of students.  Such poor results are no doubt partly because students don't study enough, and their parents aren't committed to school enough to get their children committed.  But it's also partly because schools simply don't teach enough, period.  I had an "honors and AP" sort of public school education in an excellent district (Anchorage, Alaska in the 1980s) and I didn't learn nearly as much as I could or should have.  This is why I'll be homeschooling both of my sons (my first is in kindergarten at home)--because standards have declined even further from where they were when I was a student.

Schools do, clearly, require a huge amount of work. I think we can agree there.  But let's not confuse work with sound training in the basics and the liberal arts.  There's altogether too much busywork--worksheets, low-priority but time-consuming projects, group reports, and the like--and not nearly enough reading of good books and reflective discussion and writing about them.  We could be requiring less but using more high-impact activities (like reading the classics and letting students go at their own pace through math texts, self-selected from a list proven to raise test scores), and students would learn more.

When Wheeler cites Ken Robinson in criticism of "old industrialised models" of education, calls for "conversation" and "self discovery," and approvingly quotes Richard Gerver in support of a "personal and unpredictive journey," I can stand up and cheer too.  I think Wheeler might be surprised to learn this.  On some issues, we might not be so far apart.  I'm an advocate of home schooling, in which such things are actually possible.  (As I said in my analysis of a Robinson speech, effectively opposing the "industrialized" or "factory" model of education really requires something like homeschooling en masse, which does not seem possible as long as control of education is centralized.)  But we still study subjects. Our studies still have coherence and benefit from our studying conceptually related topics at about the same time.  We still cover the traditional subjects like history and science--in far more detail than I ever did at this age.  It's just that we are able to take detours, choose the books we like, drop the ones we don't, etc.  The point is that you don't have to throw out the baby (knowledge) with the bathwater (regimented, unpersonalized school curricula).

So much for Wheeler's defense.

The question in my mind is whether his explanation has made his commitment to (1)-(3) any less clear.  Should our schools be teaching even less declarative knowledge than they have been?  So it seems, though now he regrets listing individual subjects and facts.  (Maybe fear of being called out, as I've called out Wheeler, explains why education professors often write so vaguely.)  He didn't mention--either to support or to retract--all the business about declarative knowledge being trivial to access and going out of date anyway.  No retraction of the line that the availability of instant facts via Google makes study of various academic subjects pointless.  Should we avoid teaching individual subjects, in favor of (much less efficient) projects that display how holistic and interrelated the world is?  He defended that in his latest.

Well then, my conclusion still stands: someone who believes (1)-(3) is, admit it or not, advocating for even more ignorance than we suffer from today.  It seems that Wheeler supports (1)-(3), and that looks pretty anti-intellectual to me.

Applying "anti-intellectual" to Wheeler's views is not a mere rhetorical "tactic," as he calls it.  Harsh and possibly impolite it might be, but it names an important feature of his views.  If I wanted to, I could politely agree to drop the epithet.  Then I would simply say that Wheeler's recommendations would have us, deliberately, on purpose, make students more ignorant and less knowledgeable.  Would that really be less damning than the epithet "anti-intellectual"?

An example of educational anti-intellectualism

I've got to stop blogging quite so much, but I couldn't let this pass without comment.

One would expect Steve Wheeler, Associate Professor of learning technology at Plymouth University in England, to be plugged into and more or less represent the latest academic trends in education technology.  If so, I'm a bit scared.

I came across Prof. Wheeler's blog post from yesterday, "Content as curriculum?" If I had wanted to create a parody of both kinds of anti-intellectualism I've mentioned recently--among geeks and among educationists--I couldn't have invented anything better. Wheeler hits many of the highlights, repeating the usual logical howlers as if they were somehow deeply persuasive. While I've already debunked a lot of this elsewhere, I thought it would be instructive to see that I have not, in fact, exaggerated in my characterization of the anti-intellectualism of some educationists.

Wheeler's post is so interesting and provocative, I'm going to go through line-by-line.

I think it's about time we reconsidered the way curricula in schools are presented. The tired, just in case model of curriculum just doesn't make sense anymore. Content is still very much king in schools, because 'content as curriculum' is easy and cost effective to deliver, and that is what most governments require and impose.

"Curriculum" is a very slippery term.  Wheeler here appears to mean "whatever can be taught." Later, he later brings out a distinction, familiar to philosophers, between declarative knowledge (knowledge that, "I know that 2+2=4") and procedural knowledge (knowledge how, "I know how to ride a bicycle"). Wheeler's main thesis seems to be that schools should concentrate on teaching procedural knowledge much more and declarative knowledge even less. So we can unpack the title of the blog post, "Content as curriculum?": he is skeptical that "content," or stuff that students might gain declarative knowledge of, should be the focus of the "curriculum," or what is taught.  The curriculum, he seems to maintain, should be practice, skills--not content.

In other words, if you strip away the edu-speak, Wheeler is saying that students should be taught a lot less declarative knowledge. Since this is what we ordinarily mean by "knowledge," we can put it even more simply: Wheeler is opposed to teachers imparting knowledge.

Now, this might sound like a ridiculous oversimplification of Wheeler's views. But if so, that's not my fault; it's Wheeler's. If you read his blog post, you'll see that I'm not being uncharitable in my interpretation. I'm simply explaining what he means. If there were any doubt or question that he really means this, he makes it all too clear in the next paragraphs, as we'll see.

But most teachers will tell you it's not the best approach.

I'm sure that teachers would be surprised to learn that their peers believe it's "not the best approach" to use "content," or what can be learned as declarative knowledge, as the "curriculum." All I can say is, I hope he's wrong. To be sure, there are some teachers out there who have great contempt for books and what I would call substantial learning. But surely they are still a minority.

When I went to school I was required to attend classes in mathematics, English language and literature, science (physics, biology, chemistry), history, geography, music, art, Religious Education, craft and design, home economics, German and French - all just in case I might need them later in life. With the exception of a few subjects, my schooling didn't make that much sense to me.

...and it appears that how these traditional subjects are useful to him "later in life" still doesn't make sense to him.  I'll enlighten him below.

Occasionally I hear someone saying "I'm glad I took Latin at school", and then arguing that it helped them to discover the name of a fish they caught whilst out angling on holiday. Well, knowing that thalassoma bifasciatum is a blue-headed wrasse may be wonderful for one's self esteem. It may impress your friends during a pub quiz, but it won't get you a job.... and was it really worth all those hours learning how to conjugate amo, amas, amat simply to be able to one day identify a strange fish, when all you need to do in the digital mobile age is Google it?

Here, finally, we get the hint of an argument: the reason that Latin, and presumably all those other subjects, are not "needed" in the curriculum is that we can Google that information. Actual knowledge of those subjects is not needed--because we've got Google.

But does he really mean that all those subjects he listed, and Latin, are not needed?

The question is, how much do children now need to learn in school that is knowledge based? Do children really need to know what a phrasal verb is, or that William Shakespeare died in 1616 when what they really need to be able to do is write a coherent and convincing job application or construct a relevant CV? We call this type of learning declarative knowledge, because it is 'knowing that' - in other words, the learning of facts. Yet, in a post-modernist world where all knowledge has become increasingly mutable and open to challenge, facts go quickly out of date. I was taught in school that there are nine planets orbiting the sun. Today it appears that Pluto is no longer a planet (but for me he will always be a cartoon dog). Is it Myanmar or Burma? I was told by my geography teacher it was Burma. Then she was right, now she is wrong. Just when did Mao Tse-tung change his name to Mao Zedong? And is the atom still the smallest object known to humankind? No. Now we have something called quantum foam. Apparently it's great for holding the universe together but pretty useless in a wet shave. You see, facts are changing all the time, and very little appears to remain concrete. So why are teachers wasting their own time, and that of the kids, teaching them facts which in a few years time may be utterly out of date?

Yep. He means it.

Now, look, I don't know how many times I need to repeat the arguments against this sort of nonsense--I think I did a pretty good job in "Individual Knowledge in the Internet Age"--but it won't hurt to rehearse a few of them briefly:

1. Much of the declarative knowledge that matters, and which requires time and energy to learn, is not of the sort that can be gained by looking it up in Google.  You can read some quick analysis of the causes of the Great Depression, but you won't really know them until you've studied the subject.

2. Most accepted knowledge doesn't change, even over a lifetime.  Fine, Pluto's no longer a planet.  The others are.  99% of what we knew about the solar system 50 years ago has not been disconfirmed.  Most new knowledge adds detail; it does not render old knowledge useless.  (Besides, the professor would not be able to cite this as an example if he had not learned that Pluto was a planet; he couldn't be an articulate, plugged-in thinker without his "useless" declarative knowledge, which he could count on other educated people sharing.)

3. Understanding an answer usually requires substantial background knowledge.  Suppose I want to know when Shakespeare died, and I find out that it is 1616.  But suppose I haven't memorized any dates.  Then this information, "1616," means nothing whatsoever to me.  It is, at best, a meaningless piece of trivia.  Only if I have studied enough history, and yes, memorized enough dates, will 1616 begin to have some significance for me.  I wonder if Wheeler thinks the date doesn't matter, period, because Shakespeare doesn't matter, period. After all, if that date isn't important, is any date important?

4. Most vocabulary is learned in context of copious reading.  If schools start teaching "procedural knowledge" instead of "declarative knowledge," then the vocabulary and conceptual stockpile of students will be so poor that they can't understand the answers they Google.  (They certainly wouldn't be able to understand this blog post.)

5. Finally, declarative knowledge is its own reward.  Understanding the universe is a joy in itself, one of the deepest and most important available to us.   You are a cretin if this point means nothing to you.

Mainly what I think is interesting here is that this is a professor of education, and he is espousing flat-out, pure, unadulterated anti-intellectualism. An educator opposed to teaching knowledge--it's like a chemist opposed to chemicals--a philosopher opposed to thinking. Beyond the sheer madness of the thing, just look at how simple-minded the argument is, and from what appears to be a rather distinguished academic. I actually find this rather sobering and alarming, as I said. It's one thing to encounter such sentiments in academic jargon, with sensible hedging and qualifications, or made by callow, unreflective students; it's another to encounter them in a heartfelt blog post in which, clearly, a serious theorist is airing some of his deeply-held views in forceful language.

Wheeler goes on:

Should we not instead be maximising school contact time by teaching skills, competencies, literacies? After all, it is the ability to work in a team, problem solve on the fly, and apply creative solutions that will be the common currency in the world of future work. Being able to think critically and create a professional network will be the core competencies of the 21st Century knowledge worker. Knowing how - or procedural knowledge - will be a greater asset for most young people. You see, the world of work is in constant change, and that change is accelerating.

There was never a time in history in which "ability to work in a team, problem solve on the fly, and apply creative solutions" were not significant advantages in the work world.  These are not new features.

Another point that seems to be completely lost on those who make such sophomoric arguments as the above is that having a deep well of conceptual understanding is itself absolutely necessary to the ability to "work in a team, problem solve on the fly, and apply creative solutions."  It's even more important for the ability to "think critically."  This is why philosophy graduates famously excel in the business world.  They are trained to think through problems.  Difficult problem solving requires abstract thinking, and the only way to train a person to think effectively and abstractly is by tackling such difficult academic subjects as science, history, classic literature, and philosophy.  Besides, the skills and knowledge learned in these subjects frequently provide a needed edge in fields that require a mathematician's accuracy, a historian's eye to background, a litterateur's or psychologist's grasp of human nature, or a philosopher's clarity of thought.

Besides, not only has declarative knowledge mostly stayed the same; procedural knowledge changes much faster--which is probably part of the reason it was, for a long time, not taught in schools apart from a few classes.  The specific skills for the work world were, and largely still are, learned on the job.  So let's see, which would have been better for me to learn back in 1985, when I was 17: all the ins and outs of WordPerfect and BASIC, or U.S. History?  There should be no question at all: what I learned about history will remain more or less the same, subject to a few corrections; skills in WordPerfect and BASIC are no longer needed.

My 16 year old son has just embarked on training to become a games designer. If, when I was his age I had told my careers teacher that I wanted to be a games designer, he would have asked me whether I wanted to make cricket bats or footballs. Jobs are appearing that didn't exist even a year or two ago. Other jobs that people expected to be in for life are disappearing or gone forever. Ask the gas mantel fitters or VHS repair technicians. Ask the tin miners, the lamplighters or the typewriter repair people. Er, sorry you can't ask them. They don't exist anymore.

I don't quite understand Wheeler's point here.  His 16-year-old son is training to become a games designer, at an age when Wheeler and I were spending our time learning "mathematics, English language and literature, science (physics, biology, chemistry), history, geography, music, art," etc.  By contrast, his son's early training in games design is supposed to help him in twenty or thirty years--when games will be exactly the same as they are today?  I thought the point was that things like game design change very fast.  Well, I don't know his circumstances, but my guess is that his son would be better off learning the more abstract, relatively unchanging kind of knowledge, providing a foundation, or scaffolding, that will make it easier to learn particular, changeable things later, as well as communicate more effectively with other well-educated people.  Here I hint at another argument for declarative knowledge, E.D. Hirsch's: it provides us an absolutely essential common culture, which makes it possible for Wheeler and me to understand each other, at least as well as we do.  Ask entrepreneurs you know and they'll tell you: the ability to communicate quickly, precisely, and in general effectively is deeply important in an employee.  You don't often gain that ability on the job; you develop it, or not, by studying various modes of communication.

Why do some teachers still provide children with answers when all the answers are out there on the Web? Contemporary pedagogy is only effective if there is a clear understanding of the power and efficacy of the tools that are available. Shakespeare may well have died in 1616, but surely anyone can look this up on Wikipedia if and when they need to find out for themselves? ...

Well, here's another puzzling thing to say.  Teachers don't "provide children with answers."  If they are doing their jobs properly, they are getting children to learn and understand the answers (and the questions).  A teacher is not a search engine.  Moreover, it's unconscionable that a trainer of teachers would pretend that what teachers do can be done by a search engine.

Get them using the digital tools they are familiar with to go find the knowledge they are unfamiliar with. After all, these are the tools they carry around with them all the time, and these are the tools they will be using when they enter the world of work. And these are the tools that will mould them into independent learners in preparation for challenging times ahead.

"Digital tools" will "mould them into independent learners"?  I've heard a lot of things about digital tools, but never that they would make an independent learner out of a student.

If you want to make an independent learner, you have to get them, at a minimum, interested in the world around them--in all of its glorious aspects, preferably, including natural, social, political, mathematical, geographical, philosophical, and so forth.  If they aren't interested in those rich, fascinating aspects of the world--and why should they be, if their teachers dismiss such knowledge as "useless"?--they'll no doubt only be interested in the easiest-to-digest entertainment pablum.  Why think they'll get very interested in how to build software if they haven't been led to be curious about the facts about how software, or the world generally, works?  Surely Wheeler's son has learned some facts that have provided essential scaffolding for his interest in computer programming.

I don't think digital tools, and the mere ability to use them, will make curious, independent learners out of people all by themselves.  Most of all, we need exposure to the content of thought, concerning the natural world and our place in it; then we need to be given the freedom to seek out the answers on our own time.  I don't support the idea of spoon-feeding information--that's what poor teachers do, as well as search engines, come to think of it.  I think students should be challenged to read deeply and reflect on what they read.  That's the tried and true way to make a curious, independent learner.

We need to move with the times, and many schools are still lagging woefully behind the current needs of society. Why do we compartmentalise our subjects in silos? When will we begin to realise that all subjects have overlaps and commonalities, and children need to understand these overlaps to obtain a clear and full picture of their world. Without holistic forms of education, no-one is going to make the link between science and maths, or understand how art or music have influenced history. Some schools such as Albany Senior High School in Auckland are already breaking down the silos and supporting learning spaces where students can switch quickly between 'subjects' across the curriculum. Other schools are beginning to realise that ICT is not a subject and shouldn't be taught, but is best placed as embedded across the entire curriculum.

To answer Wheeler's no doubt rhetorical question, we study various special subjects independently because there is no other efficient way to learn them.  To be sure, it makes sense to learn the humanities all together, in historical order.  Other subjects might, perhaps, be usefully combined.  But if you make a big melange of it, you'll find that the subjects simply can't be mastered nearly as quickly.  You have to spend a significant amount of time on a subject before the neurons really start firing.  If you're always skipping around from this to that, on the basis of rough analogies and not more substantive conceptual relationships (as becomes clear when you study anything systematically), you never get the extremely significant advantages that accrue from studying things that are conceptually closely related at about the same time.

But if, like Wheeler, you don't think that declarative knowledge of academic subjects is especially important, and you can't grasp what importance abstract conceptual dependencies might have for enhancing understanding, communication, and curiosity, then, no.  You might not think there's any point in focusing on particular subjects.

So, no, "moving with the times" does not require that our children arrive at adulthood as ignoramuses.

It's about time we all woke up and realised that the world around us is changing, and schools need to change too. After all, the school still remains the first and most important place to train and prepare young people for work. If we don't get it right in school, we are storing up huge problems for the future. Education is not life and death. It's much more important than that.

Didn't Wheeler ever learn plus ça change, plus c'est la même chose in his French class?  The pace of change has been increasing, no doubt.  But the world has been changing fairly quickly for the last 150 years, and one of the classic arguments for liberal arts education--something that I can't imagine Wheeler actually endorsing, given what he's written--is precisely that the liberal arts enable us to deal with such changes by giving us a solid foundation, a mature (not to say perfect or unchanging) comprehension of the natural and social world.  They also give us the ability to think and communicate more deeply and accurately about new and changing things.

A student well educated in the liberal arts, who has "memorized"--and really understood--a boatload of "mere facts," will be far better prepared to meet the changes of the coming century than someone who is well trained in the use of digital technology to talk at random about things of which he has been left tragically ignorant.

A short manifesto for schools: time to focus on knowledge

Ever since I was an elementary school student myself, I have been chronically disappointed with the American education establishment. Don't get me wrong--I get along fine with most of the educators I encounter, who are good people and full of all sorts of good ideas and real competence. But I also believe a sickness pervades the entire American system of education, the sickness of anti-intellectualism.

I read plenty of blogs, tweets, and articles on education and ed tech, as well as the occasional book, from all sorts of cutting-edge teachers, administrators, and education theorists. They are all abuzz about the latest online educational resources, which I love and use (and develop) too. But whenever the subject of learning facts, of substantially increasing levels of subject knowledge, or--especially--of testing such things comes up, I seem to hear nothing but boos and hisses. This might seem surprising, because, after all, what are those educational resources for if not to increase levels of subject knowledge? It seems an exception is made for technology.

But to focus attention on ignorance among students, poor test results, and the like apparently means caring too much about "direct instruction" and a lot of other bêtes noires of the education establishment. If I talk much about raising standards or returning the focus to knowledge, I am taken to be rejecting "student-centered" education, the project method, "useful" (read: vocational) knowledge, and authentic assessments, and replacing such allegedly good things with "drill and kill," learning "trivia," boring textbooks, and in general a return to soul-killing, dusty old methods discarded long ago and rightly so. What I rarely encounter from the education establishment--though critical outsiders like myself talk endlessly about it--is evidence of an earnest concern about, quite simply, how much students learn.

Enter this Atlantic article about a recent study of the factors correlated with test success, by Harvard researchers Will Dobbie and Roland Fryer--not education professors, but economists who know a thing or two about research methods. Dobbie and Fryer discovered, unsurprisingly, that higher student scores are correlated with a "relentless focus on academic goals." Such a focus entails "frequent teacher feedback, the use of data to guide instruction, high-dosage tutoring, increased instructional time, and high expectations." The factors not correlated with school effectiveness, by contrast, included "class size, per pupil expenditure, the fraction of teachers with no certification, and the fraction of teachers with an advanced degree." My hat is off to these researchers for reminding us of, and giving new supporting data for, what those of us outside the education establishment already knew: a culture of commitment to academic success is the main thing that matters to academic success. But we may confidently predict that the education establishment will dismiss this study, as it has done so many others that came to similar conclusions.

Reading about the study inspired a few thoughts. First, it is indeed the culture of a school that determines whether its students are successful. This claim makes sense. If the goal of schools were to inculcate knowledge in students--success at which might be measured by the study's "credible estimates of each school's effectiveness"--then it is simple rationality, and requires no special studies, to conclude that the school's staff should have a "relentless focus on academic goals."

If you want to make schools better, it's important that they be absolutely focused on academic goals, which is to say, on knowledge. Excess resources won't buy such a focus (and the study indicates that spending might even be inversely correlated with success). Class size doesn't matter. A focus on "self-esteem" doesn't help. What is needed more than anything is a pervasive, institutional commitment to knowledge as a goal.

Sadly, increasing student knowledge is not the overriding goal of American schools.

I wish I had time to write a book for a popular audience of parents, teachers, and older students, defending knowledge (not "critical thinking," which requires knowledge; not self-esteem; not social training; not high-tech training) as the most important goal of education. I'd also like to make the case that the education establishment has been really, truly, and in fact anti-intellectual and "anti-knowledge." I'm merely asserting this perhaps startling claim in this post--this isn't the place to make the case. But those who are familiar with intellectualist critiques of American public schools will understand.

If you really want to know why so many kids in the United States do poorly on exams and are basically know-nothings who turn into know-nothing adults, I'll tell you. It's because many parents, many teachers, the broader education establishment, and especially the popular culture of "cool" which guides children's development after a certain age are simply anti-intellectual. Europeans and especially Asians do not have such an albatross around their necks. They actually admire people who are knowledgeable, instead of calling them nerds, and (later in life) dismissing their knowledge as "useless" and dismissing them because of their less-finely-tuned social skills, and (after they gain some real-world status) envying them and labeling them "elitist."

It's been widely believed and often admitted for a long time that the United States has a long, nasty streak of anti-intellectualism. Democrats love to bash Republicans on this point. (And recently I bashed geek anti-intellectualism as well.) But anti-intellectualism in schools? This is apt to make many Democrats, and the more establishment sort of Republican, pretty uncomfortable. Still, it's true. This broad societal illness has kept our schools underperforming not just recently, but for generations.

The common complaints about standardized testing are misplaced. If schools were filling their charges' minds adequately with academic knowledge and skills, they would welcome standardized tests and would not have to spend any extra time on preparing students for them. The focus on the project method is misplaced, too. Projects and experiments are important as far as they go, but it is simply inefficient--if the goal is to develop student knowledge--to make projects the centerpiece of pedagogy.

Finally, the generations-long flight from books, especially well-written books, is a travesty. Books are where the knowledge is. If you want students to know a lot, have them read a lot of non-fiction books. They will inevitably become very knowledgeable. If you make sure that the books are well-written--not boring library fodder like so many geography books, for example--and hand-picked by students, they will more likely enjoy their reading. Having read probably a few thousand children's books to my son in the last five years, I can assure you that there is no shortage of excellent children's books on most subjects. We're in a sort of golden age of children's books--never before has there been such a tremendous variety of offerings so readily available.

Principals and teachers need to lead the way. They need to convey to their students and their students' parents that their schools are all about getting knowledge. "When you leave our school, you will know a lot," educators should be able to tell their students--honestly. "But we will expect you to pay attention and read a lot of books. We will expect a lot of you. Learning at this school won't be easy, but the effort will definitely be worth it. It will open up your eyes to a world that is far larger and deeper than you knew. The knowledge you will gain will make you, in a way, a bigger person, more connected to everything all around you, and better prepared to make the world a better place."

Finally, if schools don't throw off this anti-intellectualism, which has become positively stodgy and stale, and which is so contrary to their mission, they can expect to encounter more and more competition in the form of charter schools, school vouchers, homeschooling, and now virtual schools. If parents who really care about learning run toward new educational systems that have a better chance of actually educating their children, who can blame them?

Reply to Nathan Jurgenson on anti-intellectualism

Thanks to Nathan Jurgenson for a thoughtful critique of "Is there a new geek anti-intellectualism?"  I wish I had more time to respond, especially since it is so earnestly intellectual itself.  The following will have to do.

Jurgenson provides a definition (which he says is based on Hofstadter's discussion in Anti-Intellectualism in American Life), which combines anti-expertism, anti-reason, and "unreflective instrumentalism" or the view that ideas must be put to work if they are to have any value.  I think that "isms" are, since they are theoretical inventions, either trivially easy to define (just quote the inventor), or else very difficult.  I don't feel qualified to evaluate this definition, but neither do I want to accept it uncritically.  Instead I'll just skip ahead to the part where Jurgenson starts using it to formulate some interesting questions and claims on behalf of geeks:

...are [geeks] evangelical/dogmatic in their knowledge justification (beyond populism)? Do they appreciate knowledge and thinking for its own sake, or does it always have to be put to work towards instrumental purposes? Neither of these points are substantiated by Sanger (yet).

I would argue that geeks are not dogmatic, but instead typically rely on reason (e.g., they employ reason in their defense of populism and the “wisdom of the crowds”; even if I and many others are unconvinced). Further, geeks indeed do seem to engage in knowledge projects for fun. Part of the success of Wikipedia is that it allows for purposelessly clicking through random entries for no other reason than because learning is fun. However, my task in this essay is to better conceptualize Sanger’s points and not really make the case for a geek intellectualism. I’m only half-convinced myself of these last two points. I’ll leave it to Sanger to describe how geeks are anti-intellectual on these other two dimensions of anti-intellectualism. Until then, the story of geek anti-intellectualism remains mixed.

I haven't encountered geek anti-intellectuals of a fideist stripe--those who regard faith as a cardinal virtue and who criticize over-reliance on reason and science.  Computer geeks are mostly scientistic rationalists, or at least, they try to be.  (Sometimes they seem not to be simply because so many of them aren't actually trained in rational argumentation or the scientific method.  They learned how to argue on Internet forums.)  If there is something weird about calling geeks intellectuals, it would surely be this.  Indeed, geek rationalism actually explains why there was such an interesting response to my essay.  It didn't surprise me that geeks replied in various highly rational ways to my essay.  That's not the sort of response that a lot of religious anti-intellectuals, of the sort Hofstadter studied, would have, if I had made them my target; they probably wouldn't have responded at all.

As to the second point (on "unreflective instrumentalism"), however, I think Jurgenson lets geekdom off far too easily.  Of course geeks "engage in knowledge projects for fun" (so do many religious fundamentalists).  But geeks frequently talk about how the humanities are useless (this ties in to my point (2)) and, for that reason, a waste of time.  One of the recent geek arguments for the pointlessness of college is precisely that college features so much abstract theorizing which doesn't have any practical application.  A lot of geeks love books, to be sure, but some of them reject books not merely because they prefer ebook editions over paper books, but because they have become true believers that social media is what Clay Shirky described as an "upstart literature" which promises to become the "new high culture"--just give it some time.  Besides, we often hear, books are becoming outmoded because they are not collaborative, and they're boring and irrelevant because they were not made recently.  And if you try to argue that college might have a non-vocational purpose, their eyes seem to glaze over.  They just don't get that.

Here are a couple of points, elaborated--both probably related to "unreflective instrumentalism," or, as I would put it, to the devaluing of theoretical knowledge.  First, if you diss the classics, if you reject the intellectual basis for Western civilization wholesale, as some silly-clever geeks (to say nothing of culture-warrior college professors) do, then by golly, you're anti-intellectual.  This isn't because you are an instrumentalist; it is because you reject the conceptual-historical basis which allows you to think what you're thinking, including even the math and computer science that forms the basis of the computers you're working on.  If you ignore the giant shoulders you're standing on, and pretend to be thinking through issues a priori or innocent of all scholarship, then you'll certainly fall prey to all sorts of significant errors and confusions.  A person who pretends to be able to speak intelligently on the political issues of capitalist democracy but who has not read theorists like Locke, Rousseau, or Marx is almost certain to make various sophomoric mistakes (regardless of his political leanings).  And that's just one example from one field.  If you don't care about making such mistakes based in historical ignorance, and the whole idea of understanding the content of the ideas you're so passionate about leaves you cold, then you are to that extent not intellectual, and perhaps not really as much of a rationalist as you'd like to think of yourself.  If you go farther and say that persons who inform themselves of the intellectual underpinnings of Western civilization are wasting their time, then plainly, your contempt for the knowledge that people get from such study is so great that you do deserve to be called anti-intellectual.

Second, there's my point (4).  If you reject the necessity of learning things for yourself--if you actually endorse ignorance merely because certain things can be looked up so easily now--then you're anti-intellectual in the rather basic sense that you're anti-knowing stuff.  The three-part definition Jurgenson gives is ultimately grounded, I would argue, in this basic concept: an anti-intellectual undervalues knowledge for its own sake.  That's what explains the stance of anti-expertism, anti-reason, and unreflective instrumentalism.   And if you had any doubt about whether there were a lot of geeks who undervalue knowledge for its own sake, just look at the comments on my essay.  There, on Slashdot, and in other places you'll find plenty of people dismissing college not just because it's a poor economic decision but because the sort of theoretical knowledge you get in college is allegedly a waste of time.  The very claim is anti-intellectual.

It would be different if I saw many geeks hastening to add, after dissing lit crit and philosophy and political theory, that they really mainly have it in for an over-politicized academe, while they still do have respect for the likes of Aristotle and Locke, Michelangelo and Picasso, Thucydides and Gibbon, and for those intellectuals who, along with most scientists, continue to work in the old tradition of advancing knowledge instead of deconstructing it.  But I don't come across geeks saying things like this too often.

The people I'm describing use their minds (often professionally, and very competently), and therefore their minds have a life, so to speak.  But many do not go in for, in Jurgenson's phrase, "the life of the mind."  That involves some level of commitment to understanding the world, including the humanistic elements of the world, at an abstract level, bringing the tools of reason and science to bear.  Just because you write computer software and are fascinated by a few geeky topics, it doesn't follow that you have this commitment.

But then, a lot of academics don't, either.  As I said, it's no contradiction to speak of academic anti-intellectuals.  Their influence is no doubt one of the reasons certain geeks are anti-intellectual.

Geek anti-intellectualism: replies

My essay on "geek anti-intellectualism" hit a nerve.  I get the sense that a lot of geeks are acting--quite unusually for them--defensively, because I've presented them with a sobering truth about themselves that they hadn't realized.  Consequently they've been unusually thoughtful and polite--there's something about this discussion that I can't remember ever seeing before.  Anyway, the essay must have seemed relevant, because it was posted live on Slashdot within minutes of my submitting it--something I'd never seen happen before--and proceeded to rack up 916 comments as of this writing, which is quite a few even for Slashdot.  It was also well discussed on Metafilter, on Twitter, and here on this blog (where I've had over 160 comments so far).  What struck me about these discussions was the unusually earnest attempts, in most cases, to come to grips with some of the issues I raised.  Of course, there has been some of the usual slagging from the haters, and a fair number of not-very-bright responses, but an unusually high proportion of signal, some of it quite insightful.  Reminds me of some old college seminars, maybe.

First, let me concede that I left a lot unsaid.  Of course, what I left unsaid ended up being said, sometimes ad nauseam, in the comments, and a few points I found to be quite enlightening.  On the other hand, I find a lot of geeks thinking that they understand aspects of higher education that they really don't.  I'm not sure I can set them right, but I'll try to make a few points anyway.

I am going to do what I've always done, since the 1990s, when something I've written elicited a much greater response than I could possibly deal with: make a numbered laundry list of replies.

1. How dare you accuse all geeks of being anti-intellectual? I didn't; RTFA.  I know there are lots of very intellectual geeks and that geekdom is diverse in various ways.  I'm talking about social trends, which are always a little messy; but that doesn't mean there's nothing to discuss.

2. There's a difference between being anti-intellectual and being anti-academic. Maybe the most common response was that geeks don't dislike knowledge or the intellect, they dislike intellectuals with their academic institutions and practices.  First, let me state my geek credentials.  I've spent a lot of time online since the mid-90s.  I started many websites, actually learned some programming, and managed a few software projects.  You'll notice that I'm not in academe now.  I have repeatedly (four times) left academe and later returned.

I agree that academia has become way too politicized.  Too many academics think it's OK to preach their ideology to their students, and their tendency to organize conferences and journals around tendentious ideological themes is not just annoying, it is indeed unscholarly.  Moreover, speaking as a skeptically-inclined philosopher, I think that some academics have an annoying tendency to promote their views with unwarranted confidence, and also to pretend to speak authoritatively on subjects outside of their training.  Also, in many fields, the economics of academic advancement and publishing has created a tendency to focus on relatively unimportant minutiae, to the detriment of broader insight and scholarly wisdom.  Also, I completely agree that college work has been watered down (but more on that in the next point).

Having admitted all that, I'm still not backing down; I knew all that when I was writing my essay.  Please review the five points I made.  None of them is at odds with this critique of academe.  Just because some experts can be annoyingly overconfident, it doesn't follow that they do not deserve various roles in society articulating what is known about their areas of expertise.  If you deny that, then you are devaluing the knowledge they actually have; that's an anti-intellectual attitude.  If you want to know what the state of the research is in a field, you ask a researcher.  So even if your dislike of academics is justified in part, it does not follow that their word on their expertise is worth the same as everyone else's.  Besides, most of my points had little to do with academics per se: I also had points about books in general, classics in particular, and memorization and learning.

3. Just because you think college is now a bad deal, economically speaking, it doesn't follow that you're anti-intellectual. Well, duh.  I didn't really take up the question whether the present cost of college justifies not going, and I'm not going to get into that, because I don't really think it's relevant.  Let's suppose you're right, and that for some people, the long-term cost of college loans, combined with the fact that they won't get much benefit from their college education, means that they're justified not going.  My complaint is not about people who don't go to college, my complaint is about people who say that college is "a waste of time" if you do go and are committed.  Maybe, for people who don't study much and who don't let themselves benefit, it is a waste of time.  But that's their fault, not the fault of college.  I taught at Ohio State, which is not nearly as demanding as the college I attended myself (Reed), and I saw many students drifting through, not doing the reading, not coming to class, rarely practicing their writing skills.  I also saw people who always did the reading, always came to class, participated regularly, and were obviously benefiting from their encounter with great writing and great ideas.  Moreover, how college affects you isn't "the luck of the draw."  It depends on your commitment and curiosity.  This is why some partiers drop out and come back to college after five or ten years, and then they do great and finally enjoy themselves in class.

Finally, may I say again (I first said it in the 1990s, and again a few days ago) that it is possible to get a degree by examination from programs like Excelsior College?  This way, you bypass most of the expense of college and pick all your own instructors, for a fraction of the cost.  It means you can get intellectually trained, and earn a real college degree, without going into debt.  This would be my advice to the clever ex-homeschoolers who claim that it is college that is, somehow, anti-intellectual.  Put up or shut up, home scholars: if you really are committed to the life of the mind, as you say, and you've already got experience directing your own studies, why not get a degree through independent study with academic tutors, and then take tests (and portfolio evaluations) to prove your knowledge and get the credential?

4. The people you're describing are not true geeks; they are the digerati, or "hipsters," or leftist academics who were already anti-intellectual and then started doing geek stuff. Uh, no.  I mean, you're probably right that some anti-intellectual thinkers who weren't geeks have started talking about the Internet a lot, and they have a big web presence, so now they might appear to be part of geekdom.  But they aren't really, by any reasonably stringent definition of "geek."  Besides, if you look at my article, you'll see that that's what I said (such people fall into the category of "digerati").  My point is that claims (1)-(5) started circulating online among geeks, and they are, each of them, commonly spouted by lots of geeks.  Take them in turn.  (1) Anti-expert animus is a well-known feature of the geek thought-world.  Wikipedia became somewhat anti-expert because of the dominance of geeks in the project.  (2) Of course, the geeks at Project Gutenberg love books, but all too often I see comments online that books went out in the 20th century, and good riddance.  One of the leading idols of the geeks, Clay Shirky, essentially declared books to be a dying medium, to be replaced with something more collaborative.  (3) It is obvious just from the comments here on this blog, and elsewhere, that some geeks find the classics (that means philosophy, history, novels, epics, poetry, drama, religious texts, etc.) to be a waste of time.  They don't have the first clue about what they're talking about.  (4) The first time I saw the idea discussed much that Internet resources mean we no longer have to memorize (and hence learn) as many facts was among Wikipedians in 2002 or so (when it was totally dominated by geeks, even more than it is now).  (5) The whole college-is-a-waste-of-time thing is a not uncommon geek conceit.  It's not surprising in the least that a founder of Paypal.com would spout it.  It's easy for computer geeks to say, because they can get well-paying jobs without degrees.  In many other fields, that's (still) not true.

5. But I'm an intellectual, and I know that learning facts is indeed passé.  The things to be learned are "relationships" or "analysis" or "critical thinking." Oh?  Then I claim that you are espousing an anti-intellectual sentiment, whether you know it or not.  I'm not saying you're opposed to all things intellectual, I'm saying that that opinion is, to be perfectly accurate, a key feature of anti-intellectualism.  Look, this is very simple.  If you have learned something, then you can, at the very least, recall it.  In other words, you must have memorized it, somehow.  This doesn't necessarily mean you must have used flashcards to jam it into your recalcitrant brain by force, so to speak.  Memorization doesn't have to be by rote.  But even if you do a project, if you haven't come to remember some fact as a result, then you don't know it.  Thus I say that to be opposed to the memorization of facts is to be opposed to the learning, and knowing, of those facts.  To advocate against all memorization is to advocate for ignorance.  For more on this, please see my EDUCAUSE Review essay "Individual Knowledge in the Internet Age."

I know that this is an old and common sentiment among education theorists--which is a shame.  Indeed, the educationists who say that it is not necessary to memorize the multiplication table are implying that it is OK for kids to be ignorant of those math facts.  (No, it's not OK.  They should know them.)  Anyway, it might have started with misguided educators, but it is becoming far too common among geeks too.

6. The Internet is changing, that's all.  Most people are anti-intellectual, and they're getting online. No doubt about it, the Internet has changed greatly in the last five to ten years.  And it might well be the case that the average netizen is more anti-intellectual than in the past, in the very weak sense that more stupid people and uneducated people are getting online.  This might have been clever to say, if my point had been, "Folks online seem to be getting anti-intellectual."  But that isn't at all what I said or meant.  If you will review the evidence I marshalled, you'll see that the people I'm talking about are not the great unwashed masses.  I'm talking about geeks and the digerati who presume to speak about geeky things.  And their influence, as I said, has been growing.

7. Americans are anti-intellectual.  Geek anti-intellectualism is just a reflection of that. Think about what you're saying here; it doesn't make much sense.  I claim that geeks are increasingly anti-intellectual, or increasingly giving voice to anti-intellectual sentiments.  This is a trend, which many people are discussing now because they recognize it as well.  American anti-intellectualism, a well-known phenomenon, goes back to colonial days, and was rooted in our distance from the erstwhile European sources of intellectual life as well as the physical difficulty of frontier life.  The pattern of anti-intellectualism I discern is a relatively recent phenomenon, which has grown up especially with the rise of the Internet.

8. Conservatives never were the anti-intellectuals; it was always the liberal lefties! Glenn Reynolds linked my post, and so some conservatives grumbled about my line, "Once upon a time, anti-intellectualism was said to be the mark of knuckle-dragging conservatives, and especially American Protestants.  Remarkably, that seems to be changing."  Well, I hate to wade into politics here.  I used the passive voice deliberately, because I did not want to endorse the claim that anti-intellectualism is the mark of "knuckle-dragging conservatives" (I don't endorse this phrase, either).  All I meant to say is that this is one of liberals' favorite things to say about American fundamentalists.  I was about to, but did not, go on to say that actually, among the home schooling crowd, liberals and libertarians tend to go in for "unschooling," which is relatively (and not necessarily) hostile to traditional academics, and it is conservatives who go in for über-academic Latin-and-logic "classical education."  I didn't say that, because I knew it would be distracting to my point.  So I'm kind of sorry I made the remark about conservatives, because it too was distracting to my point.  Suffice it to say that there are plenty of knuckle-draggers, so to speak, everywhere.

9. Are you crazy?  Geeks are smart, and you're calling geeks stupid by calling them anti-intellectual. You didn't know that "anti-intellectual" does not mean "stupid," apparently.  There are plenty of anti-intellectual geeks who are crazy smart.  They aren't stupid in the least.  You also must distinguish between having anti-intellectual attitudes or views, which is what I was talking about, and having anti-intellectual practices. There are plenty of intellectuals in academia who are anti-intellectual.  (There are Jewish anti-Semites, too.)  Just think of any progressive education professor who inveighs against most academic work in K-12 schools, describes academic work that involves a little memorization and practice as "drill and kill," wants the world to institute unschooling and the project method en masse, has nothing but the purest P.C. contempt for the Western canon, advocates for vocational education for all but those who are truly, personally enthusiastic about academics, wants academic education to be as collaborative as possible rather than requiring students to read books, which are "irrelevant" to the fast-changing daily lives of students, and, channeling Foucault, rails against the hegemony of scientists and other experts.  Well, such a person I would describe as an anti-intellectual intellectual.  The person might well write perfectly crafted articles with scholarly apparatus, read classics in her field, and so forth.  It's just that her opinions are unfortunately hostile to students getting knowledge (in my opinion).

10. But the liberal arts are a waste of time.  Studying Chaucer?  Philosophy?  History?  The vague opinionizing is pointless and facts can be looked up. If you believe this way, then I have to point out that virtually any really educated person will disagree with you.  Once you have received a liberal education, your mind expands.  You might not understand how, or why it's important, but it does.  That's why people devote their lives to this stuff, even when it doesn't pay much, as it usually doesn't.  If you haven't studied philosophy, you can't begin to understand the universe and our place in it--I don't care how much theoretical physics you've studied.  There are aspects of reality that can be grasped only by critically examining the content of our concepts.  Similarly, if you haven't read much literature and especially if you are young, then you are very probably a complete babe in the woods when it comes to the understanding of human nature and the human condition; that's why people read literature, not so that they can sniff disdainfully at others over their lattes.

11. What you call "anti-intellectual" is really "anti-authority."  You're merely defending the prerogatives of snooty intellectuals whose authority is on the wane. This is one of the most common and often snarkiest replies I've run across.  But it's also a very interesting point.  Still, on analysis, I'm going to call it flimsy at best.  I'm going to spend quite a bit of space on this one.  Feel free to skip down to the end ("In Sum" before "Conclusion").

Let's distinguish between being opposed to knowledge in its various forms, on the one hand, and being opposed to the prerogatives of intellectuals, on the other.  I claim that the path many geeks are headed down really has them opposed to theoretical and factual knowledge per se. I think the evidence I offered supported this reasonably well, but let me try to make it a little more explicit.

Consider point (1), about experts.  ("Experts do not deserve any special role in declaring what is known.")  That certainly looks like it is about the prerogatives of experts.  If, on Wikipedia, I encountered people saying, for example, "Experts need to prove this to us, not just assert their authoritah," that would be fair enough.  That's not anti-intellectual at all.  But going farther to say, "You merely have access to resources, you don't understand this any better than I do" and "You're not welcome here" is to fail to admit that through their study and experience, the experts have something more to contribute than the average Joe.  If you can't bring yourself to admit that--and I submit that the stripe of geek I'm describing can't--then your attitude is anti-intellectual.  (Some people are refreshingly honest about just this.)  What you're saying, then, is that specialized study and experience do not lead to anything valuable, and are a waste of time.  But they lead to knowledge, which is valuable, and not a waste of time.

Point (2) (that books per se are outmoded) also, admittedly, has a little to do with intellectual authority--but only a little.  One of the reasons that some geeks, and others, are welcoming the demise of books is that they resent a single person set up as an authority by a publisher.  They say that publishing can and should be more like a conversation, and in a conversation, there shouldn't be one "authority," but rather a meeting of equal minds.  So perhaps those who are pleased to attack the medium of books couch their views as an attack on authority.  Perhaps.  But when I defend books, I really don't care about authority so much.  Of course, when thinking adults read books, they don't read them in order to receive the truth from on high.  They are interested (in argumentative books, to take just one kind) in a viewpoint being fully and intelligently canvassed.  As some of the geeks commenting do not realize, and as some people don't realize until they get to graduate school, it frequently requires a book--or several books--to fully articulate a case for some relatively narrow question.  Scholars should be praised, not faulted, for being so committed to the truth that they are willing to write, and read, discussions that are that long.  The fact that publishers have to pick authors who are capable of mounting excellent arguments at such length doesn't mean that their readers are supposed simply to accept whatever they are told.  At bottom, then, to oppose books as such is to be opposed to the only way extended verbal arguments (and narratives and exposition) can be propagated.  An indeterminately large collaboration can't develop a huge, integrated, complex theory, write a great novel, or develop a unified, compelling narrative about some element of our experience.  If you want to call yourself intellectual, you've got to support the creation of such works by individual people.

Point (3), about the classics, has almost nothing to do with the prerogatives of authority.  The shape of the Western Canon, if you will, does not rest on anybody's authority, but instead on the habits of educators (school and university) as an entire class.  You're not rebelling against anybody's authority when you rebel against classics; you are, if anything, rebelling against the ideas the classics contain, or against the labor of reading something that is demanding to read.  In any case, anybody who comes down squarely against reading the classics is, to that extent, decidedly anti-intellectual.  Face it.

Point (4), which has us memorizing as little as possible and using the Internet as a memory prosthesis as much as possible, has absolutely nothing to do with authority.  If you're opposed to memorizing something, you're opposed to learning and knowing it.  That's quite anti-intellectual.

Point (5) concerns college, and on this many people said, in effect, "I oppose the stupidity of an overpriced, mediocre, unnecessary product that rests on the alleged authority of college professors."  Then it looks like you're criticizing the authority of professors, and so you think I'm defending that.  Well, to be sure, if college professors had no significant knowledge, which (as I think) gives their views some intellectual authority, then there would be no point in paying money to study with them.  But I can defend the advisability of systematic college-level study (I choose these words carefully) without making any controversial claims about the authority of college professors.  I do not, for example, have to assume that college professors must always be believed, that they are infallible, that we should not be skeptical of most of what they say (especially in the humanities and social sciences).  After all, most professors expect their students to be skeptical and not to take what they say uncritically; and only a very dull student will do that, anyway.  If you didn't know that, it's probably because you haven't been to college.  So, no.  I am not merely defending the authority of college professors.  I am personally quite critical of most scholarship I encounter.

In sum, I know that libertarian geeks (I'd count myself as one, actually) love to rail against the prerogatives of authority.  You'd like to justify your anti-intellectual attitudes (and sometimes, behavior) as fighting against The Man.  Maybe that is why you have your attitudes, maybe not.  In any case, that doesn't stop said attitudes from being anti-intellectual, and your issues don't mean that I am especially concerned to defend the prerogatives of authority.  I am not.


I think I've hit most of the high points.

One thing I didn't discuss in my original essay was why geeks have become so anti-intellectual, especially with the rise of the Internet.  Here is my take on that.  Most geeks are very smart, predominantly male, and capable of making an excellent livelihood from the sweat of their minds.  Consequently, as a class, they're more arrogant than most, and they naturally have a strong independent streak.  Moreover, geeks pride themselves on finding the most efficient ("laziest") way to solve any problem, even if it is a little sloppy.  When it comes to getting qualified for work, many will naturally dismiss the necessity of college if they feel they can, because they hate feeling put-upon by educators who can't even write two lines of code.  And the whole idea of memorizing stuff, well, it seems more and more unnecessarily effortful when web searching often uncovers answers just as well (they very misguidedly think).  What about books, and classics in particular?  Well, geek anti-intellectual attitudes here are easily explained as a combination of laziness and arrogance.  The Iliad takes a lot of effort, and the payoff is quite abstract; instead, they could read a manual or write code or engineer some project, and do a lot more of what they recognize as "learning."  The advent of new social media and the decline of the popularity of books are developments that only confirm their attitude.  It doesn't hurt that geek is suddenly chic, which surely only inflates geek arrogance.  If they admitted to themselves that there is something to philosophy, history, or anything else that takes time, hard study, and reflection to learn, but which does not produce code or gadgetry, they would feel a little deflated.  This doesn't sit well with their pride, of course.  They're smart, they think, and so how could they be opposed to any worthwhile knowledge?

So it shouldn't be surprising that some (only some) geeks turn out to be anti-intellectual.  This is no doubt why many people said, in response to my essay, "This is just what I've been thinking."

Is there a new geek anti-intellectualism?

Is there a new anti-intellectualism?  I mean one that is advocated by Internet geeks and some of the digerati.  I think so: more and more mavens of the Internet are coming out firmly against academic knowledge in all its forms.  This might sound outrageous to say, but it is sadly true.

Let's review the evidence.

1. The evidence

Programmers have been saying for years that it's unnecessary to get a college degree in order to be a great coder--and this has always been easy to concede.  I never would have accused them of being anti-intellectual, or even of being opposed to education, just for saying that.  It is just an interesting feature of programming as a profession--not evidence of anti-intellectualism.

In 2001, along came Wikipedia, which gave everyone equal rights to record knowledge.  This was only half of the project's original vision, as I explain in this memoir.  Originally, we were going to have some method of letting experts approve articles.  But the Slashdot geeks who came to dominate Wikipedia's early years, supported by Jimmy Wales, nixed this notion repeatedly.  The digerati cheered and said, implausibly, that experts were no longer needed, and that "crowds" were wiser than people who had devoted their lives to knowledge.  This ultimately led to a debate, now old hat, about experts versus amateurs in the mid-2000s.  There were certainly notes of anti-intellectualism in that debate.

Around the same time, some people began to criticize books as such, as an outmoded medium, and not merely because they are traditionally paper and not digital.  The Institute for the Future of the Book has been one locus of this criticism.

But nascent geek anti-intellectualism really began to come into focus around three years ago with the rise of Facebook and Twitter, when Nicholas Carr asked, "Is Google making us stupid?" in The Atlantic. I was struck less by Carr's essay itself than by the reaction to it.  Altogether too many geeks seemed to assume that if information glut is sapping our ability to focus, this is largely out of our control and not necessarily a bad thing.  But of course it is a bad thing, and it is in our control, as I pointed out. Moreover, focus is absolutely necessary if we are to gain knowledge.  We will be ignoramuses indeed, if we merely flow along with the digital current and do not take the time to read extended, difficult texts.

Worse still was Clay Shirky's reaction in the Britannica Blog, where he opined, "no one reads War and Peace. It’s too long, and not so interesting," and borrowed a phrase from Richard Foreman in claiming, "the ‘complex, dense and “cathedral-like” structure of the highly educated and articulate personality’ is at risk."  As I observed at the time, Shirky's views entailed that Twitter-sized discourse was our historically determined fate, and that, if he were right, the Great Books and civilization itself would be at risk.  But he was not right--I hope.

At the end of 2008, Don Tapscott, author of Wikinomics, got into the act, claiming that Google makes memorization passé.  "It is enough that they know about the Battle of Hastings," Tapscott boldly claimed, "without having to memorise that it was in 1066.  [Students] can look that up and position it in history with a click on Google."

In 2010, Edge took up the question, "Is the Internet changing the way you think?" and the answers were very sobering.  Here were some extremely prominent scientists, thinkers, and writers, and all too many of them were saying again, more boldly, that the Internet was making it hard to read long pieces of writing, that books were passé, and that the Internet was essentially becoming a mental prosthesis.  We were, as one writer put it, uploading our brains to the Internet.

As usual, I did not buy the boosterism.  I was opposed to the implicit techno-determinism as well as the notion that the Internet makes learning unnecessary.  Anyone who claims that we do not need to read and memorize some facts is saying that we do not need to learn those facts.  Reading and indeed memorizing are the first, necessary steps in learning anything.

This brings us to today.  Recently, Sir Ken Robinson has got a lot of attention by speaking out--inspiringly to some, outrageously to others--saying that K-12 education needs a sea change away from "boring" academics and toward collaborative methods that foster "creativity."  At the same time, PayPal co-founder Peter Thiel sparked much discussion by claiming that there is a "higher education bubble," that is, that the cost of higher education greatly exceeds its value.  This claim by itself is somewhat plausible.  But Thiel much less plausibly implies that college per se is now not advisable for many, because it is "elitist."  With his Thiel Fellowship program he hopes to demonstrate that a college degree is not necessary for success in the field of technology.  Leave it to a 19-year-old recipient of one of these fellowships to shout boldly that "College is a waste of time."  Unsurprisingly, I disagree.

2. Geek anti-intellectualism

In the above, I have barely scratched the surface.  I haven't mentioned many other commentators, blogs, and books that have weighed in on such subjects.  But this is enough to clarify what I mean by "geek anti-intellectualism."  Let me step back and sum up the views mentioned above:

1. Experts do not deserve any special role in declaring what is known.  Knowledge is now democratically determined, as it should be.  (Cf. this essay of mine.)

2. Books are an outmoded medium because they involve a single person speaking from authority.  In the future, information will be developed and propagated collaboratively, something like what we already do with the combination of Twitter, Facebook, blogs, Wikipedia, and various other websites.

3. The classics, being books, are also outmoded.  They are outmoded because they are often long and hard to read, so those of us raised around the distractions of technology can't be bothered to follow them; and besides, they concern foreign worlds, dominated by dead white guys with totally antiquated ideas and attitudes.  In short, they are boring and irrelevant.

4. The digitization of information means that we don't have to memorize nearly as much.  We can upload our memories to our devices and to Internet communities.  We can answer most general questions with a quick search.

5. The paragon of success is a popular website or well-used software, and for that, you just have to be a bright, creative geek.  You don't have to go to college, which is overpriced and thus reserved for the elite anyway.

If you are the sort of geek who loves all things Internet uncritically, then you're probably nodding your head to these.  If so, I submit this as a new epistemological manifesto that might well sum up your views:

You don't really care about knowledge; it's not a priority.  For you, the books containing knowledge, the classics and old-fashioned scholarship summing up the best of our knowledge, the people and institutions whose purpose is to pass on knowledge--all are hopelessly antiquated.  Even your own knowledge, the contents of your mind, can be outsourced to databases built by collaborative digital communities, and the more the better.  After all, academics are boring.  A new world is coming, and you are in the vanguard.  In this world, the people who have and who value individual knowledge, especially theoretical and factual knowledge, are objects of your derision.  You have contempt for the sort of people who read books and talk about them--especially classics, the long and difficult works that were created alone by people who, once upon a time, were hailed as brilliant.  You have no special respect for anyone who is supposed to be "brilliant" or even "knowledgeable."  What you respect are those who have created stuff that many people find useful today.  Nobody cares about some Luddite scholar's ability to write a book or get an article past review by one of his peers.  This is why no decent school requires reading many classics, or books generally, anymore--books are all tl;dr for today's students.  In our new world, insofar as we individually need to know anything at all, our knowledge is practical, and best gained through projects and experience.  Practical knowledge does not come from books or hard study or any traditional school or college.  People who spend years of their lives filling up their individual minds with theoretical or factual knowledge are chumps who will probably end up working for those who skipped college to focus on more important things.

Do you find your views misrepresented?  I'm being a bit provocative, sure, but haven't I merely repeated some remarks and made a few simple extrapolations?  Of course, most geeks, even most Internet boosters, will not admit to believing all of this manifesto.  But I submit that geekdom is on a slippery slope to the anti-intellectualism it represents.

So that there is no mistake, let me describe the bottom of this slippery slope more forthrightly.  You are opposed to knowledge as such. You contemptuously dismiss experts who have it; you claim that books are outmoded, including classics, which contain the most significant knowledge generated by humankind thus far; you want to memorize as little as possible, and you want to upload what you have memorized to the net as soon as possible; you don't want schools to make students memorize anything; and you discourage most people from going to college.

In short, at the bottom of the slippery slope, you seem to be opposed to knowledge wherever it occurs, in books, in experts, in institutions, even in your own mind.

But, you might say, what about Internet communities?  Isn't that a significant exception?  You might think so.  After all, how can people who love Wikipedia so much be "opposed to knowledge as such"?  Well, there is an answer to that.

It's because there is a very big difference between a statement occurring in a database and someone having, or learning, a piece of knowledge.  If all human beings died out, there would be no knowledge left even if all libraries and the whole Internet survived.  Knowledge exists only inside people's heads.  It is created not by being accessed in a database search, but by being learned and mastered.  A collection of Wikipedia articles about physics contains text; the mind of a physicist contains knowledge.

3. How big of a problem is geek anti-intellectualism?

Once upon a time, anti-intellectualism was said to be the mark of knuckle-dragging conservatives, and especially American Protestants.  Remarkably, that seems to be changing.

How serious am I in the above analysis?  And is this really a problem, or merely a quirk of geek life in the 21st century?

It's important to bear in mind what I do and do not mean when I say that some Internet geeks are anti-intellectuals.  I do not mean that they would admit that they hate knowledge or are somehow opposed to knowledge.  Almost no one can admit such a thing to himself, let alone to others.  And, of course, I doubt I could find many geeks who would say that students should be able to graduate from high school without learning a significant amount of math, science, and some other subjects as well.  Moreover, however they might posture when at work on Wikipedia articles, most geeks have significant respect for the knowledge of people like Stephen Hawking or Richard Dawkins.  Many geeks, too, are planning on college, are in college, or have been to college.  And so forth--for the various claims (1)-(5), while many geeks would endorse them, they could also be found contradicting them regularly as well.  So is there really anything to worry about here?

Well, yes, there is.  Attitudes are rarely all or nothing.  The more that people have these various attitudes, the more bad stuff is going to result, I think.  The more that a person really takes seriously that there is no point in reading the classics, the less likely he'll actually take a class in Greek history or early modern philosophy.  Repeat that on a mass scale, and the world becomes--no doubt already has become--a significantly poorer place, as a result of the widespread lack of analytical tools and conceptual understanding.  We can imagine a world in which the humanities are studied by only a small handful of people, because we already live in that world; just imagine the number of people getting smaller.

But isn't this a problem just for geekdom?  Does it really matter that much if geeks are anti-intellectuals?

Well, the question is whether the trend will move on to the population at large.  One does not speak of "geek chic" these days for nothing.  The digital world is now on the cutting edge of societal evolution, and attitudes and behaviors that were once found mostly among geeks back in the 1980s and 1990s are now mainstream.  Geek anti-intellectualism can already be seen as another example.  Most of the people I've mentioned in this essay are not geeks per se, but the digerati, who are frequently non-geeks or ex-geeks who have their finger on the pulse of social movements online.  Via these digerati, we can find evidence of geek attitudes making their way into mainstream culture.  One now regularly encounters geek-inspired sentiments from business writers like Don Tapscott and education theorists like Ken Robinson--and even from the likes of Barack Obama (but not anti-intellectualism, of course).

Let's just put it this way.  If, in the next five years, some prominent person comes out with a book or high-profile essay openly attacking education or expertise or individual knowledge as such, because the Internet makes such things outmoded, and if it receives a positive reception not just from writers at CNET and Wired and the usual suspects in the blogosphere, but also serious, thoughtful consideration from Establishment sources like The New York Review of Books or Time, I'll say that geek anti-intellectualism is in full flower and has entered the mainstream.

UPDATE: I've posted a very long set of replies.

UPDATE 2: I've decided to reply below as well--very belatedly...

25 Replies to Maria Bustillos

In a recent essay in The Awl ("Wikipedia and the Death of the Expert"), Maria Bustillos commits a whole series of fallacies or plain mistakes and, unsurprisingly, comes to some quite wrong conclusions.  I don't have time to write anything like an essay in response, but I will offer up the following clues for Ms. Bustillos and those who are inclined to nod along approvingly with her essay:

1. First, may I point out that not everybody buys that Marshall McLuhan was all that.

2. The fact that Nature stood by its research report (which was not a peer-reviewed study) means nothing whatsoever.  If you'll actually read it and apply some scholarly or scientific standards, Britannica's response was devastating, and Nature's reply thereto was quite lame.

3. There has not yet been anything approaching a credible survey of the quality of Wikipedia's articles (at least, not to my knowledge).  Nobody has shown, in studies taken individually or in aggregate, that Wikipedia's articles are even nearly as reliable as a decent encyclopedia.

4. If you ask pretty much anybody in the humanities, you will learn that the general impression that people have about Wikipedia articles on these subjects is that they are appalling and not getting any better.

5. The "bogglingly complex and well-staffed system for dealing with errors and disputes on Wikipedia" is a pretentious yet brain-dead mob best likened to the boys of The Lord of the Flies.

6. It is trivial and glib to say that "Wikipedia is not perfect, but then no encyclopedia is perfect."  You might as well say that the Sistine Chapel is not perfect.  Yeah, that's true.

7. It is not, in fact, terribly significant that users can "look under the hood" of Wikipedia.  Except for Wikipedia's denizens and those unfortunate enough to be caught in the crosshairs of some zealous Wikipedians using the system to commit libel without repercussion, nobody really cares what goes on on Wikipedia's talk pages.

8. When it comes to actually controversial material, the only time that there is an "attempt to strike a fair balance of views" in Wikipedia-land is when two camps with approximately equal pull in the system line up on either side of an issue.  Otherwise, the Wikipedians with the greatest pull declare their view as "the neutral point of view."  It wasn't always this way, but it has become that way all too often.

9. I too am opposed to experts exercising unwarranted authority.  But there is an enormous number of possibilities between a world dominated by unaccountable whimsical expert opinion and a world without any experts at all.  Failing to acknowledge this is just sloppiness.

10. If you thought that Wikipedia somehow meant the end of expertise, you'd be quite wrong.  I wrote an essay about that in Episteme. (Moreover, in writing this, I was criticized for proving something obvious.)

11. The fact that Marshall McLuhan said stuff that presciently supported Wikipedia's more questionable epistemic underpinnings is not actually very impressive.

12. Jaron Lanier has a lot of very solid insight, and it is merely puzzling to dismiss him as a "snob" who believes in "individual genius and creativity."  There's quite a bit more to Lanier and "Digital Maoism" than that.  Besides, are individual genius and creativity now passé?  Hardly.

13. Clay Shirky isn't all that, either.

14. Being "post-linear" and "post-fact" is not "thrilling" or profound.  It's merely annoying and tiresome.

15. Since when did the Britannica somehow stand for guarantees of truth?  Whoever thought so?

16. There are, of course, vast realms between the extremes of "knowledge handed down by divine inspiration" and some dodgy "post-fact society."

17. The same society can't both be "post-fact" and thrive on "knowledge [that] is produced and constructed by argument," Shirky notwithstanding.  Arguments aim at truth, i.e., to be fact-stating, and truth is a requirement of knowledge.  You can't make sense of the virtues of dialectical knowledge-production without a robust notion of truth.

18. Anybody who talks glowingly about the elimination of facts, or any such thing, simply wants the world to be safe for the propagation of his ideology by familiar, manipulable, but ultimately irrational social forces.  No true liberal can be in favor of a society in which there are no generally accepted, objective standards of truth, because then only illiberal forces will dominate discourse.

19. Expert opinion is devalued on Wikipedia, granted--and maybe also on talk radio and its TV spin-offs, and some Internet conversations.  But now, where else in society has it been significantly devalued?

20. What does being a realist about expertise--i.e., one who believes it does exist, who believes that an expert's opinion is, on balance, more likely to be true than mine in areas of his expertise--have to do with individualism?  Surely it's more the romantic individualists who want to be unfettered by the requirements of reason, including the scientific methods and careful reasoning of experts, who are naturally inclined to devalue expertise per se.

21. Wikipedia does not in any plausible way stand for a brave new world in which competing arguments hold sway over some (fictional) monolithic expert opinion.  There have always been competing expert views; Wikipedia merely, sometimes, expresses those competing expert views when, from some professors, you might hear only one side.  Sometimes, Wikipedia doesn't even do that, because the easy politicization of collaborative text written without enforceable rules makes neutrality an elusive ideal.

22. Um, we have had the Internet for more than 20 years.

23. The writing down of knowledge is more participatory now, and that's a fine thing (or can be).  But knowledge itself is, always has been, and always will be an individual affair.  The recording of things that people take themselves to know, in Wikipedia or elsewhere, and findable via Google, does not magically transfer the epistemic fact of knowledge from the recorder even to those who happen to find the text, much less to all readers online.  Knowledge itself is considerably more difficult than that.

24. Ours is an individualistic age?  Don't make me laugh.  People who actually think for themselves--you know, real individualists--appear to me to be as rare as they ever have been.  It is a delight to meet the few who are out there, and one of the better features of the Internet is that it makes it easier to find them.  The West might be largely capitalist, but that doesn't stop us from being conformist, as any high school student could tell you.

25. The real world is considerably more complex than your narrative.

How the Internet Is Changing What We (Think We) Know

A speech for "the locals"--Upper Arlington Public Library, January 23, 2008.  This is a more general discussion; the Citizendium is not mentioned once.

Information is easy, knowledge is difficult

There is a mind-boggling amount of information online. And this is a wonderful thing. I’m serious about that. A good search engine is like an oracle: you can ask it any question you like and be sure to get an answer. The answer might be exactly what you’re looking for, or it might be, well, oracular—difficult to interpret and possibly incorrect. I draw the usual distinction between knowledge and information. You can find information online very easily. Knowledge is another matter altogether.

Now, this is not something new about the Internet. It’s a basic feature of human life that while information is easy, knowledge is difficult. There has never been a shortage of mere data and opinion in human life. It’s a very old observation that the most ignorant people are usually full of opinions, while many of the most knowledgeable people are full of doubt. Other people are certainly sources of knowledge, but they are also sources of half-truths, confusion, misinformation, and lies. If we simply want information from others, it is easy to get; if we want knowledge in any strong sense of the word, it is very difficult. Besides that, long before the Internet, there was far more to read, far more television shows and movies to watch, than anyone could ever absorb in many lifetimes. Before the Internet, we were already awash in information. Wading through all that information in search of some hard knowledge was very difficult indeed.

Too much information

The Internet is making this old and difficult problem even worse. If we had an abundance of information in, say, the 1970s, the Internet has created a superabundance of information today. Out of curiosity, I looked up some numbers. According to one estimate, there are now over 1.2 billion people online; Netcraft estimated that there were over 100 million websites, about half of them active. And those estimates come from over a year ago.

With that many people, and that many active websites, clearly there is, as I say, a superabundance of information. Nielsen ratings of Internet search showed that there were some six billion searches performed in December, 2007, in one month—that’s about 72 billion in a year! Google, by the way, was responsible for two thirds of those searches. Now, you might have heard these numbers before; I don’t mean to be telling you news. But I want to worry out loud about a consequence of this situation.

My worry is that the superabundance of information is devaluing knowledge. The more that information piles up on Internet servers around the world, and the easier it is for that information to be found, the less distinctive and attractive knowledge will appear by comparison. I fear that the Internet has already greatly weakened our sense of what is distinctive about knowledge, and why it is worth seeking. I know this might seem rather abstract, and not something worth getting worked up about. Why, really, should you care?

It used to be that in order to learn some specific fact, like the population of France, you had to crack open a big thick paper encyclopedia or other reference book. One of the great things about the Internet is that that sort of searching—for very specific, commonly-sought-after facts—has become dead simple. Even more, there are many facts one can now find online that, in the past, would have taken a trip to the local library to find. The point is that the superabundance of information has actually made it remarkably easy to get information. Today, it’s easy not just to get some information about something or other, it’s easy to get boatloads of information about very specific questions and topics we’re interested in.

For all that, knowledge is, I’m afraid, not getting much easier. To be quite sure of an answer still requires comparing multiple sources, critical thinking, sometimes a knowledge of statistics and mathematics, and a careful attention to detail when it comes to understanding texts. In short, knowledge still requires hard thought. Sure, technology is a great time-saver in various ways; it has certainly made research easier, and it will become only more so. But the actual mental work that results in knowledge of a topic cannot be made much easier, simply because no one else can do your thinking for you. So while information becomes nearly instantaneous and dead simple, knowledge is looking like a doddering old uncle.

What do I mean by that? Well, you can find tons of opinions online, ready-made, but there is an interesting feature of a lot of the information and opinion you find online: not only is it easy to find, it is easy to digest. Just think of the different types of pages that a typical Web search turns up: news articles, which summarize events for the average person; blogs, which are usually very brief; Web forums, which only rarely go into depth; and encyclopedia articles and other mere summaries of topics. Of course, there are also very good websites, as well as the “Deep Web,” which contains things like books and journal articles and white papers; but most people do not use those other resources. The point is that most of the stuff that you typically find on the Internet is pretty lightweight. It’s Info Lite.

“Right,” you say, “what’s wrong with that? Great taste, less filling!” Sure, I like easy, entertaining information as much as the next guy. But what’s wrong with it is that it makes the hard work of knowledge much less appealing by comparison. For example, if you are coming to grips with what we should do about global warming, or illegal immigration, or some other very complex issue, you must escape the allure of all the dramatic and entertaining news articles and blog posts on these subjects. Instead, you must be motivated to wade through a lot of far drier material. The sources that are more likely to help you in your quest for knowledge look very boring by comparison. My point here is that the superabundance of information devalues knowledge, because the means of solid knowledge are decidedly more difficult and less sexy than the Info Lite that it is so easy to find online.

There is another way that the superabundance of information makes knowledge more difficult. It is that, for all the terabytes upon terabytes of information on the Internet, society does not employ many more editors than it did before the advent of the Internet (and possibly employs fewer). When you go to post something on a blog or a Web forum, there isn't someone called an editor who decides to "publish" your comment. The Internet is less a publishing operation than a giant conversation. But most of us still take in most of what we read fairly passively. Now, there's no doubt that what has been called the "read-write Web" encourages active engagement with others online, and helps us overcome our passivity. This is one of the decidedly positive things about the Internet, I think: it gets people to understand that they can actively engage with what they read. We understand now more than ever that we can and should read critically. The problem, however, is that, without the services of editors, we need our critical faculties to be engaged and very finely tuned. So, while the Internet conversation has instilled in us a tendency to read critically, there is still far more garbage out there than our critical faculties can handle. We do end up absorbing a lot of nonsense passively: we can't help it.

In short, we are reading reams of content written by amateurs, without the benefit of editors, which means we must as it were be our own editors. But many of us, I'm afraid, do not seem to be prepared for the job. In my own long experience interacting with Internet users, I find heaps of skepticism and little respect for what others write, regardless of whether it is edited or not. Now, skepticism is all well and good. But at the same time, I find hardly anything in the way of real critical thinking. The very opinionated people I encounter online rarely demonstrate that they have thought things through as they should, given the strength of their convictions. I have even encountered college professors who cite easy-to-find news articles in the commission of the most elementary of logical fallacies. So it isn't necessarily just a lack of education that accounts for the problem I'm describing. Having "information at our fingertips," clearly, sometimes makes us skip the hard thinking that knowledge requires. Even those of us who ought to know better are too often content to be impressed by the sheer quantity and instant availability of information, and let it substitute for our own difficult thought.

The nature and value of knowledge

Easy information devalues hard knowledge, I say. But so far I have merely been appealing to your understanding of the nature and value of knowledge. Someone might ask me: well, what do you mean by knowledge, anyway, that it is so different from mere information? And why does it matter?

Philosophers since Plato have been saying that knowledge is actually a special kind of belief. It must be true, first of all, and it must also be justified, or have good reasons or evidence to support it. For example, let’s suppose I read something for the first time on some random blog, such as that Heath Ledger died. Suppose I just uncritically believe this. Well, even if it’s true, I don’t know that it is true, because random blogs make up stuff all the time. A blog saying something really isn’t a good enough reason to believe it. But if I then read the news in a few other, more credible sources, then my belief becomes much better justified, and then I can be said to know.

I could go into a lot of details and qualifications at this point, but I won't. Let me get right to my point. I say knowledge is, roughly speaking, justified, true belief. Well then, I want to add that knowledge is difficult not because getting truth is difficult, but because justifying our beliefs is. In other words, it's really easy to get truth. Google is a veritable oracle of truth. The problem is recognizing truth, and distinguishing it from falsehood. The ocean of information online contains a huge amount of truth. The difficulty comes in knowing when you've got it.

Well, that’s what justification is for. We use reasons, or evidence, to determine that, indeed, if we accept a piece of information, we will have knowledge, not error. But producing a good justification for our beliefs is extremely difficult. It requires, as I said before, good sources, critical thinking, sometimes a knowledge of statistics and mathematics, and a careful attention to detail when it comes to understanding texts. This all takes time and energy, and while others can help, it is something that one must do for oneself.

Here you might wonder: if justification, and therefore knowledge, is really so difficult, then why go to all the trouble? Besides, justification is not an all-or-nothing matter. How much evidence is needed before we can be said to know something? After all, if a blogger says that Heath Ledger is dead, that is at least some weak evidence that Heath Ledger is in fact dead. Do I really need stronger evidence? Why?

These are very difficult questions. The best brief answer is, “It depends.” Sometimes, if someone is just telling an entertaining story, it doesn’t matter at all whether it’s true or not. So it doesn’t matter that you know the details of the story; if the story entertains, it has done its job. I am sure that celebrity trivia is similar: it doesn’t matter whether the latest gossip in the Weekly World News about Britney Spears is true, it’s just entertaining to read. But there are many other subjects that matter a lot more. Here are two: global warming and immigration reform. Well, I certainly can’t presume to tell you how much evidence you need for your positions on these issues, before you can claim to have knowledge. Being a skeptic, I would actually say that we can’t have knowledge about such complex issues, or at least, not very certain knowledge. But I would say that it is still important to get as much knowledge as possible about these issues. Why? Quite simply because a lot is riding on our getting the correct answers, and the more that we study issues, and justify our beliefs, the more likely our beliefs are to be correct.

To passively absorb information from the Internet, without caring about whether we have good reasons for what we believe, is really to roll the dice. Like all gambling, this is pleasant and self-indulgent. But if the luck doesn’t go your way, it can come back to bite you.

Knowledge matters, and as wonderful a tool for knowledge as the Internet can be, it can also devalue knowledge. It does so, I’ve said, by making passive absorption of information seem more pleasant than the hard work of justifying beliefs, and also by presenting us with so much unedited, low-quality information that we cannot absorb it as carefully as we would like. But there is another way that the Internet devalues knowledge: by encouraging anonymity. So here’s a bit about that.

Knowledge and anonymity

We get much of our knowledge from other people. Of course, we pick some things up directly from conversation, or speeches like this one. We also read books, newspapers, and magazines; we watch informational television programs; and we watch films. In short, we get knowledge either directly from other people, or indirectly, through various media.

Now, the Internet is a different sort of knowledge source. The Internet is very different, importantly different, from both face-to-face conversation and from the traditional media. Let’s talk about that.

The Internet has been called, again, a giant conversation. But it’s a very unusual conversation, if so. For one thing, it’s not a face-to-face conversation. We virtually never have the sort of “video telephone” conversations that the old science fiction stories described. In fact, on many online knowledge websites, we often have no names, pictures, or any information at all, about the people that we converse or work with online. Like the dog in the famous New Yorker cartoon said, “On the Internet, nobody knows you’re a dog.”

In the three-dimensional online virtual world, Second Life, there is an elaborate system in which you can choose the precise physical characteristics for the person you are online—your “avatar.” Not surprisingly, in Second Life, there are a lot more beautiful and striking-looking people than there are in “First Life”—real life. This practice of make-believe is very self-conscious, and many academic papers have been written about how “identity” is “constructed” online in general.

When I went to make an avatar for myself for Second Life a few years ago, I was pretty uncomfortable representing myself as anything other than what I am. So I actually made an avatar that looks like me. (I didn’t really get it right.) I’ve always been personally uncomfortable representing myself online in any other way than how I really am. But I realize that I am unusual in this regard. Obviously, privacy matters.

Now, think of this. People who care very much about getting their facts right generally consult authoritative sources; they don’t usually get their knowledge from casual conversation with friends and relatives. But at least, when we do get knowledge from a friend or relative, we have some idea of how reliable they are. Maybe you have an eccentric acquaintance, for instance, who is a conspiracy theorist, and he doesn’t spend a lot of time considering the merits of his sources, or the plausibility of their claims. Let’s say you also know that he barely got through high school and basically doesn’t care what the mainstream media or college professors say. Your acquaintance may have many fascinating factoids and interesting stories, but probably, you aren’t going to take what he says very seriously.

But imagine if you were chatting online about politics or UFOs, or other weird stuff, with someone you didn't know was actually your acquaintance. You might take his bizarre claims somewhat more seriously in that case. I don't mean that you would simply believe them—of course you wouldn't—but you would not have any specific reasons to discount them, as you would if you knew you were talking to your acquaintance. Your only positive reason to discount the claims would be: I don't know this person, this person is anonymous. But you know that there can be brilliant and reliable people anonymous online, as well as thoroughly unreliable people.

Well, I think many of us would actually trust an anonymous person more than we would trust our more eccentric acquaintances. Now don’t get me wrong, I don’t mean to accuse anyone of being a dupe. Of course, we are able to spot really daft stuff no matter who it comes from. But without knowing who a person is, we are operating without a basic bit of information that we are used to having, in evaluating what people tell us face-to-face. If we lack any information at all about how reliable a source is, we will not simply conclude that the source is completely unreliable; we will often give the person the benefit of the doubt. And that is sometimes more respect than we would give the person if we knew a few basic facts about him or her.

More generally, there is a common attitude online that it is not supposed to matter, in fact, who you are. We are all perfectly equal in many online communities, except for what we say or do in those communities. Who we are offline is not supposed to matter. But it does matter, when it comes to evaluating what people say about offline topics, like science and politics. The more time we spend in the Internet’s egalitarian communities, the more contempt we might ultimately have for information about a person’s real-world credibility. The very notion of personal credibility, or reliability, is ultimately under attack, I think. On a certain utopian view, no one should be held up as an expert, and no one should be dismissed as a crackpot. All views, from all people, about all subjects, should be considered with equal respect.

Danger, Will Robinson! Personal credibility is a universal notion; it can be found in all societies and throughout recorded history. There is a good reason that it is universal, as well: knowledge of a person’s credibility, or lack thereof, is a great time-saver. If you know that someone knows a lot about a subject, then that person is, in fact, more likely to be correct than some random person. Now, the expert’s opinion cannot take the place of thought on your part; usually, you probably should not simply adopt the expert’s opinion. It is rarely that simple. But that doesn’t mean the information about personal credibility is irrelevant or useless.

Two ideas for a solution

So far, I have mainly been criticizing the Internet, which you might find odd for me to do. After all, I work online.

I don’t think that the Internet is an unmitigated bad influence. I won’t bore you by listing all the great things there are about the Internet, like being able to get detailed information about every episode of Star Trek, without leaving home, at 3 AM. Besides, I have only focused on a small number of problems, and I don’t think they are necessarily Earth-shatteringly huge problems, either. But they are problems, and I think we can do a little bit to help solve them, or at least mitigate them.

First, we can make a role for experts in Internet communities. Of course, the role should be designed so that it does not conflict with what makes the community work. Don't simply put all the reins of authority in the hands of your experts; doing that would ensure that the project remains a project by and for experts, and of relatively little broader impact. But give them the authority to approve content, for example, or to post reviews, or other modest but useful tasks.

My hope is that, when the general public work under the “bottom up” guidance of experts, this will have some good effects. I think the content such a community might produce would be more reliable than the run of the mill on the Internet. I would also hope that the content itself will be more conducive to seeking knowledge instead of mere information, simply by modelling good reasoning and research.

I do worry, though, that if expert-reviewed information online were to become the norm, then people might be more likely to turn off their critical faculties.

Second, we can create new communities, in which real names and identities are expected, and we can reward people in old communities for using their real names and identities. This is something that Amazon.com has done, for example, with its “real name” feature on product reviews. If contributors are identified, we could use the same sort of methods to evaluate what they say online, that we would use if we were to run into them on the street.

I began by laying out a general problem: superabundance of information online is devaluing knowledge. I don’t know if we can really solve this problem, but the two suggestions I just made might go a little way to making it a little better. If we include a modest role for experts in more of our Internet communities, we’ll have better information to begin with, and better role models. Moreover, if we identify the sources of our information, we will be in a better position to evaluate it.

The New Politics of Knowledge

Speech delivered at the Jefferson Society, University of Virginia, Charlottesville, Virginia, November 9, 2007, and at the Institute of European Affairs, Dublin, Ireland, September 28, 2007, as the inaugural talk for the IEA's "Our Digital Futures" program.

I want to begin by asking a question that might strike you as perhaps a little absurd. The question is, "Why haven't governments tried to regulate online communities more?" To be sure, there have been instances where governments have stepped in. For instance, in January of last year in Germany, the father of a deceased computer hacker used the German court system to try to have an article about his son removed from the German Wikipedia. As a result, wikipedia.de actually went offline for a brief period. It's come back online, of course, and in fact the article in question is still up.

Here's another example. In May of last year, attorneys general from eight U.S. states demanded that MySpace turn over the names of registered sex offenders lurking on the website, which as you probably know is heavily frequented by teenagers. The website deleted pages of some 7,000 registered sex offenders. And the following July, they said that in fact some 29,000 registered sex offenders had accounts, which were subsequently deleted.

Those are just a few examples. But we can make some generalizations. The Internet is famously full of outrageously false, defamatory, and offensive information, and is said to be a haven for criminal activity. This leads back to the question I asked earlier: why haven't governments tried to regulate online communities even more than they have?

We might well find this question a little absurd, especially if we champion the liberal ideals that form the foundation of Western civil society. Indeed, no doubt one reason is our widespread commitment to freedom of speech. But consider another possible reason—one that, I think, is very interesting.

Governments, and everyone else, implicitly recognize that social groups, however new and different, have their own interests and are usually capable of regulating themselves. It is a truly striking thing that people come together from across the globe and, out of their freely donated labor and strings of electrons, form a powerful new corporate body. When they do so—as I have repeatedly observed—they develop a sense of themselves as a group, in which they invest some time and can take some pride, and which they govern by rules.

In fact, these groups are a new kind of political entity, the birth of which our generation has been privileged to witness. Such groups are not like supra-national organizations, like the United Nations; nor are they like international aid organizations, like Doctors Without Borders; nor are they quite like international scientific groups, like the Intergovernmental Panel on Climate Change. The existence and primary activity of these online communities is all online. Their membership is self-selecting, international, and connected online in real time. This makes it possible for enormous numbers and varieties of groups to arise, of arbitrary size and arbitrary nationality, to achieve arbitrary purposes. They essentially make up a new kind of political community, a cyber-polity if you will, and so there is a presumption that they can regulate themselves. Government steps in, as in the case of MySpace, only when they cannot regulate themselves responsibly.

The idea that online communities are a kind of polity is, I think, very suggestive and fruitful. I want to talk in particular about how online communities, considered as polities, are engaged in a certain new kind of politics—a politics of knowledge. Let me explain what I mean by this.

Speaking of a "politics of knowledge," I assume that what passes for knowledge, or what we in some sense take ourselves to know as a society, is determined by those who have authority or power of a sort. You don't of course have to like this situation, and you might disagree with the authorities, or scoff at their authority in some cases. Nevertheless, when for example professors at the University of Virginia say that something is well known and not seriously doubted by anyone who knows about the subject, those professors are in effect establishing what "we all know," or what we as a society take ourselves to know. Since those professors, and many others, speak from a position of authority about knowledge—a powerful force in society—surely it makes some sense to speak of a politics of knowledge. I just hope you won't understand me to be saying that what really is known, in fact, is determined by whoever happens to be in authority. I'm no relativist, and I think the authorities can be, and frequently are, wrong.

If we talk about a politics of knowledge, and we take the analogy with politics seriously, then we assume that there is a sort of hierarchy of authority, with authority in matters of knowledge emanating from some agency that is "sovereign." In short, if we put stock in the notion of the politics of knowledge, then we're saying that, when it comes to knowing stuff, some people are at the top of the heap.

Our new online communities—our cyber-polities—are increasingly influential forces, when it comes to the politics of knowledge. When Wikipedia speaks, like it or not, people listen. So in this talk I want to discuss in particular something I call the new politics of knowledge. Any talk of a new politics of knowledge raises questions about what agency is sovereign. Well, it is often said that in the brave new world of online communities, everyone is in charge. Time Magazine's "Man of the Year" has, by practice, usually been some influential political figure. When its "Person of the Year" last year was "You," Time didn't break with that practice. Time was rightly claiming that, through Internet communities, we are all newly empowered. In the new politics of knowledge, we can all, through blogs, wikis, and many other venues, compete with real experts for epistemic authority—for power over what is considered to be known.

If this sounds like a political revolution, that's because it is. It is frequently described as a democratic revolution. So what I'm going to do in the rest of this talk is examine exactly the sense in which the new cyber-polities, like Wikipedia, do indeed represent a sort of democratic revolution. This discussion will have the interesting result that we should be more concerned than we might already be about the internal governance of Internet communities—because that internal governance has real-world effects. And I will conclude by making some recommendations for how cyber-polities should be internally governed.

As a philosopher, I find myself impelled to ask: what exactly is democratic about the so-called Internet revolution?

Democracy in one very basic sense means that sovereignty rests ultimately with the people, that is, with all of us. Bearing that in mind, the new Internet revolution might be democratic, I think, both in a narrow sense and in a broad sense. The narrow sense concerns project governance: the new content production systems are themselves governed ultimately by the participants, and for that reason can be called democratic. In the broad sense, the Internet revolution gives everyone "a voice" which formerly many did not have, a stake in determining "what is known" not just for a narrow website or Internet practice, but for society as a whole. To draw the distinction by analogy, we might say that each online community has a domestic policy, about its own internal affairs, and a foreign policy, through which it manages its influence on the world at large.

Now, I'd like to point something out that you might not immediately notice. It is that the broad sense depends in a certain way on the narrow sense. The contributors are ultimately sovereign in various Internet projects, and that is precisely why they are able to have their newfound broader influence over society. Let's take Digg.com as an example. This is a website that allows people to post any link, and then others vote, a simple up or down, on whether they "digg" the link. It's one person, one vote. Of course, no one checks anybody's credentials on Digg. The highest-voted links are placed most prominently on the website. So the importance of a Web article, and presumably whatever the article has to say, is determined democratically, at least as far as the Digg community goes. But Digg's influence goes beyond its own community. A relatively obscure story can become important by being highly rated on Digg. In this way, all those people voting on Digg—and these can be as expert as you hope, or as uneducated, ignorant, biased, immature, and foolish as you fear—they can wield a power to highlight different news stories, a power hitherto usually reserved only to professional journalists.
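The voting mechanism just described is simple enough to sketch in code. What follows is a toy model of Digg-style ranking, not Digg's actual implementation; the class and method names are my own illustrative inventions. The point it makes concrete is the one above: every vote counts equally, nobody's credentials are checked, and prominence is determined purely by vote count.

```python
from collections import defaultdict

class LinkRanker:
    """Toy model of one-person-one-vote link ranking: anyone may vote,
    no credentials are checked, and the highest-voted links are shown
    most prominently. Illustrative only; not Digg's real system."""

    def __init__(self):
        # link -> set of user ids who voted for it
        self.votes = defaultdict(set)

    def vote(self, user, link):
        # Any user may vote; a repeat vote by the same user has no effect,
        # since the set simply ignores duplicates.
        self.votes[link].add(user)

    def front_page(self, n=10):
        # Rank purely by vote count -- no editor or expert weighs in.
        return sorted(self.votes,
                      key=lambda link: len(self.votes[link]),
                      reverse=True)[:n]
```

Note that nothing in the model distinguishes an expert's vote from anyone else's; that indifference is precisely the source of the new, democratized power to highlight news stories.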

Similarly, Wikipedia articles are now well-known for being the #1 Google search result for many popular searches. Any website with that much reach is, like it or not, very influential. That is, in effect, practical epistemic authority. That is real authority, given to anyone who has the time and patience to work on Wikipedia and do the hand-to-hand battle necessary to get your edits to "stick" in Wikipedia articles. That power, to define what is known about a general topic, was formerly reserved only to the professional intellectuals who wrote and edited encyclopedias, and more broadly to experts generally speaking. And again, of course, no one checks anybody's credentials before they get on Wikipedia. So amateurs are to some extent displacing experts, in the new politics of knowledge.

So that's why we call the Internet revolution democratic. But this needs some qualification. There is one fundamental reason that we describe as "democratic" such websites as Digg, Wikipedia, MySpace, YouTube, and all the rest, and that is that anyone can, virtually without restriction, go to the website and get involved. This, however, is only to say that they have a certain benchmark level of "user empowerment," which we might call the "right to contribute." But frequently, a large variety of governance structures are superimposed upon this basic "right to contribute." While the content is generally determined by largely self-governing contributors, some policies and decisions are left in the hands of the website owners, as at Slashdot and YouTube, who are officially answerable to no one else within the project. Granted, if these privileged persons anger their contributors, the contributors can vote with their feet—and this has happened on numerous occasions. And in some cases, such as Wikipedia, the community is almost completely self-governing. Still, we probably should qualify claims about the democratic nature of cyber-polities: just because there is a basic right to contribute, it does not follow that there will also be an equal right to determine the project's internal governance.

So, as I said before, the Internet revolution is democratic in the broad sense because it is democratic, however qualifiedly, in the narrow sense. In other words, internal Web project governance bears directly on real-world political influence. But how closely connected are Web community politics and real-world influence?

Consider Wikipedia again—and I think this is particularly interesting. If you've followed the news about Wikipedia at all in the last few years, you might have noticed that when they make larger changes to their policy, it is no longer of interest just to their contributors. It is of interest to the rest of the world, too. It gets reported on. Two recent news items illustrate this very well.

First item. A few months ago, a student posted a website, called the WikiScanner, that allows people to look up government agencies and corporations to see just who has been editing which Wikipedia articles. This was fairly big news—all around the world. I was asked to comment about the story by reporters in Canada and Australia. Journalists think it's absolutely fascinating that someone from a politician's office made a certain edit to an article about that politician, or that a corporation's computers were used to remove criticisms about the corporation. At the same time, reporters and others observe that Wikipedia's anonymity has allowed people to engage in such PR fiddling with impunity. And that is the interesting internal policy point: anyone can contribute to Wikipedia without identifying him- or herself. You can even mask your IP address, which those political aides and corporation employees should have done; all they had to do was make up some random username, which one can still do without giving Wikipedia an e-mail address, and then the WikiScanner couldn't track the IP address. Nobody who was signed in was caught by the WikiScanner. Anyway, it was an internal policy that has had some very interesting external ramifications.

Second item. It was reported recently by the London Times that the German Wikipedia would be changing its editing system. In the future, all edits by unregistered and newer contributors will have to be approved by the older contributors before they can appear on the website. In fact, this was old news—the system described has been under development for well over a year, and it still hasn't been put into use. Nevertheless, it has been touted as a very big concession on the part of Wikipedia. It's said now that Wikipedia has a role for "trusted editors" on the website, but this is incorrect; it has a role only for people who have been in the system for a while, and these can be very untrustworthy indeed. However unlikely this is to have any significant effect, it was still touted as important news. And again, what was touted as big news was a change in internal policy, the policy about how the wiki can be edited by newer and anonymous contributors. This is supposed to be important, because it might help make Wikipedia a more responsible global citizen.

In general, it is becoming increasingly clear that the "domestic policy," so to speak, of cyber-polities is closely connected with their real-world impact. Wikipedia isn't the only example I might give. Here's another—although in this case, the effect is economic, not epistemic. There is an amazingly huge website, called craigslist, which lists, they say, over 12 million new classified ads every month. This website has proven to be a real thorn in the side of local newspapers, which depend on revenue from ads. Increasingly, people are posting their classified ads on craigslist instead of in their local newspapers. This is the effect of a policy, an internal policy, that anyone can post an ad for free, except for employment ads in certain markets. What might have originally seemed to be an optional feature of a small Web community has turned out, in fact, to cost jobs at newspapers.

But let's get back to the politics of knowledge. In the intellectual sphere, I think the full power of collaboration and aggregation has yet to be demonstrated. Try to imagine Wikipedia done right—not just enormous, but credible and well-written. If this sounds impossible to believe, consider that just a few years ago, Wikipedia itself, a reasonably useful general encyclopedia with over two million articles in English, would have sounded equally impossible to believe. I can tell you that, when Wikipedia was first starting out, there were many people who sneered that we didn't have a chance.

Let me describe briefly my new project, which is relevant here. It is called the Citizendium, or the Citizens' Compendium. It is a non-profit, free wiki encyclopedia that invites contributions from the general public—and to that extent it's like Wikipedia. There are three very important differences, however. First, we require the use of real names and do not allow anonymous contribution; we also require contributors to submit at least a brief biography. So we all know who we're actually working with. Second, we distinguish between rank-and-file authors, who do not require any special qualifications, and editors, who must demonstrate expertise in a field; our editors may approve articles, and they may make decisions about content in their areas of expertise. Still, they work side-by-side with authors on the wiki. Nobody assigns anybody any work; it's still very much a bottom-up process. Third, we are a rather more mature community. All contributors must sign onto a sort of social contract, which states the rules of the community; we expect people to behave professionally; and we have people called "constables" who are actually willing to enforce our rules by kicking out troublemakers.

So how is the project going? We started a pilot project just over a year ago, and in that time we created 3,500 articles, and we have over 2,000 authors and well over 200 expert editors on board. We also have more words than Wikipedia did after its first year, and our average article is six times as long as Wikipedia's average article was at that point. Our pace of article production has accelerated—it has doubled in the past 100 days or so and tripled since last January. And we are pretty much free of vandalism, and I think our articles are pretty high-quality for such a wide-open project. The project is doing rather well, and I think that we are probably, with continued development, poised to replicate Wikipedia's sort of growth. We too could have a million articles in under ten years.

Well, imagine that the Citizendium had a million articles, together with loads of ancillary reference material such as images, tables, tutorials, and so forth—all free, credible, and managed by experts. The sort of influence that such a website would wield would, I think, far outweigh Wikipedia's. The one thing that really holds Wikipedia back, from the end user's perspective, is its reliability. So suppose there were a similar website that solved that problem.

If you ask me, this is a somewhat frightening prospect. After all, already, far too many students and even members of the general public treat Wikipedia as if it were reliable. Already, for far too many students, Wikipedia is their only source of reference information. If humanity were to produce a similarly giant encyclopedia that were really reliable, you can just imagine how it would probably be received by the general public. It would become, essentially, the world's textbook and omnipresent reference library. There would be a general presumption that what it says is correct, and that anyone who asserts something in contradiction to it would have to make the case in as much detail as they would today if they contradicted the Encyclopedia Britannica. Sure, a good encyclopedia can be wrong; but it usually isn't. Unlike Wikipedia, it's innocent until proven guilty.

This is frightening, I say, precisely because of how powerful such a resource would be. Imagine the article about, for example, the Iraq War, after it had been written and rewritten, and checked and rechecked, by hundreds of real experts. It would no doubt be a thing of beauty, as I think the Citizendium's best articles are. But it would also be taken as the starting-point for serious conversation. What claims it makes could have real-world political ramifications, as much as, if not more than, any U.N. report. So you can easily imagine the attention given to major changes of policy, or to internal rulings on controversial cases in the project. Again: the internal policymaking for a truly successful collaborative reference project would have major external consequences.

We don't want governments to take over or closely regulate collaborative projects, but if such projects continue to act as irresponsibly as Wikipedia has, I fear that governments might attempt to do so. That is, for me, a disturbing scenario, because in a civilized, modern, liberal society—one that deeply values the freedom of speech—the authority to say what we know is one power that should not be in the hands of the government. Every government regulation of online collaborative communities is a direct threat to the sovereignty of that community, and an implicit threat to the free speech of its members.

It is, therefore, extremely important that online projects, ones with any influence, be well-governed. We want to remove every excuse governments might have for exerting their own political authority. At this point I might argue that Wikipedia's governance has failed in various ways, but the root problem is that Wikipedia is absolutely committed to anonymous contribution; this ultimately makes it impossible to enforce many rules effectively. However much oppressive bureaucracy Wikipedia layers on, it will always be possible for people to sidestep rules, simply by creating a new identity. The unreliability of Wikipedia's enforcement of its own rules, in turn, provides a deep explanation of the unreliability of its information. The pretentious mediocrities and ideologues, as well as the powerful vested interests—generally, anyone with a strong motive to make Wikipedia articles read their way—can always create new accounts if they are ousted. Wikipedia's content will remain unreliable, and it will continue to have various public scandals, because its governance is unreliable. And this, I'm afraid, opens Wikipedia up to the threat of government regulation. I wouldn't wish that on them, of course, and I don't mean to give anyone ideas.

After all, if the Citizendium's more sensible system succeeds, it will have the power to do far more damage than Wikipedia can. To get an idea of the damage Wikipedia can do, consider another example. In late 2005, John Seigenthaler, Sr., long-time editor of the American newspaper The Tennessean, was accused in a Wikipedia article of being complicit in the assassination of John F. Kennedy. Well, it was rather easy for him to protect his reputation by pointing out publicly how unreliable Wikipedia is. He simply shamed Wikipedia, and he came off looking quite good.

But imagine that Seigenthaler were accused by some better, more reliable source. Then he couldn't have gotten relief in this way; he no doubt would have had to sue. I hate the thought, but I have to concede that it is barely possible that the Citizendium could be sued for defamation. After all, the effect of defamation by a more credible source would be much more serious. Then the government might be called in, and this worries me.

As I said, my horror scenario is that the Citizendium grows up to be as influential as its potential implies, only to be overregulated by zealous governments with a weak notion of free speech. As I said at the beginning of this talk, I think cyber-polities can generally regulate themselves. But communities with poor internal governance may well incur some necessary correction by governments, if they violate copyright on a massive scale or if they permit, irresponsibly, a pattern of libel. Why should this be disturbing to me? Government intervention is perhaps all right when we are talking about child molesters on MySpace; but when we are talking about projects to sum up what is known, that is when more serious issues of free speech enter in.

You can think of government intervention in something like Wikipedia or the Citizendium as akin to government intervention in the content of academic lectures and the governance of universities. When this happens, what should be an unimpeded search for the truth risks becoming politicized and politically controlled.

But you can imagine, perhaps, a series of enormous scandals on Wikipedia that has government leaders calling for the project to be taken over by the Department of Education, or by some private entity that is nevertheless implicitly answerable to the government. Wikipedia is far from being in such a position now, but it is conceivable. The argument would go as follows:

Wikipedia is not like a university or a private club. It is open to everyone, and its content is visible around the globe, via the Internet. Therefore, it is a special kind of public trust. It is not unlike a public utility. Moreover, it has demonstrated its utter incapacity to manage itself responsibly, and this is of genuine public concern. The government is obligated, therefore, to take the management of Wikipedia into its own hands.

End of argument. Nationalization might seem hard to conceive, but it has happened quite a bit in the last century. Why couldn't it happen to something that is already a free, public trust?

As both an academic (or former academic, anyway) and an online project organizer, I am greatly bothered by the thought of this scenario, and in fact I must admit that I have given it no small amount of thought in the last few years. Fear of government intrusions on what should be a fully independent enterprise is one reason that I have spent so much time in the last year working on a sensible governance framework for the Citizendium. In short, the best protection against undue government interference in open content projects is good internal governance. So let me describe the Citizendium's current governance and its future plans.

The Citizendium works now under an explicit Statement of Fundamental Policies, which calls for the adoption of a Charter, not unlike a constitution, within the next few months. The Charter will no doubt solidify the governance system we are developing right now. This system involves an Editorial Council, which is responsible for content policy; a Constabulary, which gets new people on board and encourages good behavior; and a Judicial Board, which will handle conflict resolution and appeals. While editors will make up the bulk of our Editorial Council, both authors and editors may participate in each of these bodies. Each of these bodies will have mutually exclusive membership, to help ensure a separation of powers, and there will be some other checks and balances. In addition, I as Editor-in-Chief am head of an Executive Committee. But to set a positive precedent, even before launching the Citizendium I committed to stepping down within two to three years, so that we have an appropriate and regular succession of leadership.

Another perhaps interesting point concerns the Editorial Council. It has actually adopted a digitized version of Robert's Rules of Order, and we have passed five resolutions using e‑mail and the wiki exclusively. Recall that contributors must agree to uphold this system, as a condition of their participation. They must also be identified by their real-world identity if they wish to participate—although we will make exceptions in truly extraordinary cases.

I think you can recognize what we are trying to build: a traditional constitutional republic, but moved online. Only time will tell, but my hope is that this nascent governance structure will help us to avoid some of the problems that have beset not just Wikipedia, but a wide variety of Web communities.

I have covered a pretty wide variety of topics in my talk. I hope you have been able to follow the thread, at least a little; I doubt I have spent all the time I would need to make everything perfectly clear. But let me sum up my main argument anyway. Online communities, I say, are political entities. As such, they can govern their own "domestic" affairs, as well as have various "foreign" or external effects. And so they can be democratic insofar as their members have authority internally or externally. I've discussed mainly one kind of authority, namely epistemic authority, or the authority over what society takes to be knowledge.

Then I pointed out that the external authority a project has depends on its internal governance—and so, the more externally influential, the more important it is that we get the internal governance right. I pointed to Wikipedia as an example of a cyber-polity that is not particularly well-governed. I worried a fair bit about the fallout, in terms of government regulation, that this might incur. In part to help avoid such fallout, I have briefly sketched a governance system that the Citizendium uses, which is a traditional constitutional, representative republic—mapped online.