Is there a new geek anti-intellectualism?

Is there a new anti-intellectualism?  I mean one that is advocated by Internet geeks and some of the digerati.  I think so: more and more mavens of the Internet are coming out firmly against academic knowledge in all its forms.  This might sound outrageous to say, but it is sadly true.

Let's review the evidence.

1. The evidence

Programmers have been saying for years that it's unnecessary to get a college degree in order to be a great coder--and this has always been easy to concede.  I never would have accused them of being anti-intellectual, or even of being opposed to education, just for saying that.  It is just an interesting feature of programming as a profession--not evidence of anti-intellectualism.

In 2001, along came Wikipedia, which gave everyone equal rights to record knowledge.  This was only half of the project's original vision, as I explain in this memoir.  Originally, we were going to have some method of letting experts approve articles.  But the Slashdot geeks who came to dominate Wikipedia's early years, supported by Jimmy Wales, nixed this notion repeatedly.  The digerati cheered and said, implausibly, that experts were no longer needed, and that "crowds" were wiser than people who had devoted their lives to knowledge.  This ultimately led to a debate, now old hat, about experts versus amateurs in the mid-2000s.  There were certainly notes of anti-intellectualism in that debate.

Around the same time, some people began to criticize books as such, as an outmoded medium, and not merely because they are traditionally paper and not digital.  The Institute for the Future of the Book has been one locus of this criticism.

But nascent geek anti-intellectualism really began to come into focus around three years ago with the rise of Facebook and Twitter, when Nicholas Carr asked, "Is Google making us stupid?" in The Atlantic. More than by Carr's essay itself, I was struck by the reaction to it.  Altogether too many geeks seemed to assume that if information glut is sapping our ability to focus, this is largely out of our control and not necessarily a bad thing.  But of course it is a bad thing, and it is in our control, as I pointed out.  Moreover, focus is absolutely necessary if we are to gain knowledge.  We will be ignoramuses indeed if we merely flow along with the digital current and do not take the time to read extended, difficult texts.

Worse still was Clay Shirky's reaction in the Britannica Blog, where he opined, "no one reads War and Peace. It’s too long, and not so interesting," and borrowed a phrase from Richard Foreman in claiming that "the ‘complex, dense and “cathedral-like” structure of the highly educated and articulate personality’ is at risk."  As I observed at the time, Shirky's views entailed that Twitter-sized discourse was our historically determined fate and that, if he were right, the Great Books and civilization itself would be at risk.  But he was not right--I hope.

At the end of 2008, Don Tapscott, author of Wikinomics, got into the act, claiming that Google makes memorization passé.  "It is enough that they know about the Battle of Hastings," Tapscott boldly claimed, "without having to memorise that it was in 1066.  [Students] can look that up and position it in history with a click on Google."

In 2010, Edge took up the question, "Is the Internet changing the way you think?" and the answers were very sobering.  Here were some extremely prominent scientists, thinkers, and writers, and all too many of them were saying again, more boldly, that the Internet was making it hard to read long pieces of writing, that books were passé, and that the Internet was essentially becoming a mental prosthesis.  We were, as one writer put it, uploading our brains to the Internet.

As usual, I did not buy the boosterism.  I was opposed to the implicit techno-determinism as well as the notion that the Internet makes learning unnecessary.  Anyone who claims that we do not need to read and memorize some facts is saying that we do not need to learn those facts.  Reading and indeed memorizing are the first, necessary steps in learning anything.

This brings us to today.  Recently, Sir Ken Robinson has got a lot of attention by speaking out--inspiringly to some, outrageously to others--saying that K-12 education needs a sea change away from "boring" academics and toward collaborative methods that foster "creativity."  At the same time, PayPal co-founder Peter Thiel sparked much discussion by claiming that there is a "higher education bubble," that is, that the cost of higher education greatly exceeds its value.  This claim by itself is somewhat plausible.  But Thiel much less plausibly implies that college per se is now inadvisable for many, because it is "elitist."  With his Thiel Fellowship program he hopes to demonstrate that a college degree is not necessary for success in the field of technology.  Leave it to a 19-year-old recipient of one of these fellowships to shout boldly that "College is a waste of time."  Unsurprisingly, I disagree.

2. Geek anti-intellectualism

In the above, I have barely scratched the surface.  I haven't mentioned many of the other commentators, blogs, and books that have weighed in on such subjects.  But this is enough to clarify what I mean by "geek anti-intellectualism."  Let me step back and sum up the views mentioned above:

1. Experts do not deserve any special role in declaring what is known.  Knowledge is now democratically determined, as it should be.  (Cf. this essay of mine.)

2. Books are an outmoded medium because they involve a single person speaking from authority.  In the future, information will be developed and propagated collaboratively, something like what we already do with the combination of Twitter, Facebook, blogs, Wikipedia, and various other websites.

3. The classics, being books, are also outmoded.  They are outmoded because they are often long and hard to read, so those of us raised around the distractions of technology can't be bothered to follow them; and besides, they concern foreign worlds, dominated by dead white guys with totally antiquated ideas and attitudes.  In short, they are boring and irrelevant.

4. The digitization of information means that we don't have to memorize nearly as much.  We can upload our memories to our devices and to Internet communities.  We can answer most general questions with a quick search.

5. The paragon of success is a popular website or well-used software, and for that, you just have to be a bright, creative geek.  You don't have to go to college, which is overpriced and so reserved for the elite anyway.

If you are the sort of geek who loves all things Internet uncritically, then you're probably nodding your head to these.  If so, I submit this as a new epistemological manifesto that might well sum up your views:

You don't really care about knowledge; it's not a priority.  For you, the books containing knowledge, the classics and old-fashioned scholarship summing up the best of our knowledge, the people and institutions whose purpose is to pass on knowledge--all are hopelessly antiquated.  Even your own knowledge, the contents of your mind, can be outsourced to databases built by collaborative digital communities, and the more the better.  After all, academics are boring.  A new world is coming, and you are in the vanguard.  In this world, the people who have and who value individual knowledge, especially theoretical and factual knowledge, are objects of your derision.  You have contempt for the sort of people who read books and talk about them--especially classics, the long and difficult works that were created alone by people who, once upon a time, were hailed as brilliant.  You have no special respect for anyone who is supposed to be "brilliant" or even "knowledgeable."  What you respect are those who have created stuff that many people find useful today.  Nobody cares about some Luddite scholar's ability to write a book or get an article past review by one of his peers.  This is why no decent school requires reading many classics, or books generally, anymore--books are all tl;dr for today's students.  In our new world, insofar as we individually need to know anything at all, our knowledge is practical, and best gained through projects and experience.  Practical knowledge does not come from books or hard study or any traditional school or college.  People who spend years of their lives filling up their individual minds with theoretical or factual knowledge are chumps who will probably end up working for those who skipped college to focus on more important things.

Do you find your views misrepresented?  I'm being a bit provocative, sure, but haven't I merely repeated some remarks and made a few simple extrapolations?  Of course, most geeks, even most Internet boosters, will not admit to believing all of this manifesto.  But I submit that geekdom is on a slippery slope to the anti-intellectualism this manifesto represents.

So that there is no mistake, let me describe the bottom of this slippery slope more forthrightly.  You are opposed to knowledge as such.  You contemptuously dismiss experts who have it; you claim that books are outmoded, including the classics, which contain the most significant knowledge generated by humankind thus far; you want to memorize as little as possible, and you want to upload what you have memorized to the net as soon as possible; you don't want schools to make students memorize anything; and you discourage most people from going to college.

In short, at the bottom of the slippery slope, you seem to be opposed to knowledge wherever it occurs, in books, in experts, in institutions, even in your own mind.

But, you might say, what about Internet communities?  Isn't that a significant exception?  You might think so.  After all, how can people who love Wikipedia so much be "opposed to knowledge as such"?  Well, there is an answer to that.

It's because there is a very big difference between a statement occurring in a database and someone having, or learning, a piece of knowledge.  If all human beings died out, there would be no knowledge left even if all libraries and the whole Internet survived.  Knowledge exists only inside people's heads.  It is created not by being accessed in a database search, but by being learned and mastered.  A collection of Wikipedia articles about physics contains text; the mind of a physicist contains knowledge.

3. How big of a problem is geek anti-intellectualism?

Once upon a time, anti-intellectualism was said to be the mark of knuckle-dragging conservatives, and especially American Protestants.  Remarkably, that seems to be changing.

How serious am I in the above analysis?  And is this really a problem, or merely a quirk of geek life in the 21st century?

It's important to bear in mind what I do and do not mean when I say that some Internet geeks are anti-intellectuals.  I do not mean that they would admit that they hate knowledge or are somehow opposed to knowledge.  Almost no one can admit such a thing to himself, let alone to others.  And, of course, I doubt I could find many geeks who would say that students should be able to graduate from high school without learning a significant amount of math, science, and some other subjects as well.  Moreover, however they might posture when at work on Wikipedia articles, most geeks have significant respect for the knowledge of people like Stephen Hawking or Richard Dawkins.  Many geeks, too, are planning on college, are in college, or have been to college.  And so forth--as for the various claims (1)-(5), while many geeks would endorse them, they can also be found contradicting them regularly.  So is there really anything to worry about here?

Well, yes, there is.  Attitudes are rarely all or nothing.  The more that people have these various attitudes, the more bad stuff is going to result, I think.  The more that a person really takes seriously that there is no point in reading the classics, the less likely he'll actually take a class in Greek history or early modern philosophy.  Repeat that on a mass scale, and the world becomes--no doubt already has become--a significantly poorer place, as a result of the widespread lack of analytical tools and conceptual understanding.  We can imagine a world in which the humanities are studied by only a small handful of people, because we already live in that world; just imagine the number of people getting smaller.

But isn't this just a problem for geekdom?  Does it really matter that much if geeks are anti-intellectuals?

Well, the question is whether the trend will move on to the population at large.  One does not speak of "geek chic" these days for nothing.  The digital world is now on the cutting edge of societal evolution, and attitudes and behaviors that were once found mostly among geeks back in the 1980s and 1990s are now mainstream.  Geek anti-intellectualism can already be seen as another example.  Most of the people I've mentioned in this essay are not geeks per se, but the digerati, who are frequently non-geeks or ex-geeks who have their finger on the pulse of social movements online.  Via these digerati, we can find evidence of geek attitudes making their way into mainstream culture.  One now regularly encounters geek-inspired sentiments from business writers like Don Tapscott and education theorists like Ken Robinson--and even from the likes of Barack Obama (but not anti-intellectualism, of course).

Let's just put it this way.  If, in the next five years, some prominent person comes out with a book or high-profile essay openly attacking education or expertise or individual knowledge as such, because the Internet makes such things outmoded, and if it receives a positive reception not just from writers at CNET and Wired and the usual suspects in the blogosphere, but also serious, thoughtful consideration from Establishment sources like The New York Review of Books or Time, I'll say that geek anti-intellectualism is in full flower and has entered the mainstream.

UPDATE: I've posted a very long set of replies.

UPDATE 2: I've decided to reply below as well--very belatedly...


The value of knowledge--the anti-intellectualism problem versus the philosophers' problem

I need to complain about my fellow philosophers.  But maybe I'm confused.  Maybe some philosophers out there can set me straight, somehow.

In recent years, as my interests have turned away from encyclopedia-building and toward education, I have become increasingly interested in the whole social phenomenon of people appearing to devalue academic knowledge.  This is unfortunate enough in students, but it is disturbing among adults who shape the attitudes of children, and positively alarming among educators--precisely the people responsible for imparting knowledge.  This trend is part and parcel of anti-intellectualism--and, by the way, it has recently gotten a fresh shot in the arm from the rise of the Internet.  Let's call this the problem of anti-intellectualism.

Concern about this problem has led me to read, among other things, Susan Jacoby's pretty interesting book The Age of American Unreason.  I've been thinking of writing an essay on the topic, making a defense of knowledge as such and, in particular, of why it ought to be the centerpiece of our statements of the goals of education.  Education is, first and foremost, about the getting of knowledge, or improving our understanding.  Toying with this idea, I decided to look into what some of my fellow philosophers have said about it.  Philosophers frequently say that knowledge is an intrinsic good, something sought for its own sake.  But, of course, there is far more that can be said about the value of knowledge than that, even if it is an intrinsic good.

I was not too surprised to learn that a currently trendy topic in epistemology is the value of knowledge.  But when one looks at the Stanford Encyclopedia of Philosophy article on the subject, attractively titled "The Value of Knowledge," one discovers that there is very little indeed on the problem described above.  Instead, it is all about the relatively technical problem of why knowledge is more valuable than mere true belief.  I decided to search the page for the words "anti-intellectual" and "anti-intellectualism."  They do not occur in the article.  In fact, there is no significant discussion of "anti-intellectual" or "anti-intellectualism" anywhere in the Stanford Encyclopedia of Philosophy.

Well, I can't say I'm surprised.  This is how all too many philosophers water down what could be truly fascinating questions: they identify some vaguely related technical issue connected to the interesting question, and then compare technical theories on the technical issue.  Now, don't get me wrong; I studied with many analytical philosophers and I strongly prefer analytical philosophy to Continental philosophy.  Moreover, the philosophers' "problem of value" is actually interesting to me.  But, sadly, the "relevance" critique does have some purchase.

Here by the way is my own current view, the view I might want to expand in an essay.  Knowledge--or more precisely, amassing a large body of knowledge, and coming to understand many different aspects of our world, personal, social, and natural, abstract and applied, theoretical and practical, historical and current, mathematical and verbal--is valuable because it improves us.  Having good writing and speaking skills makes our communication more efficient and effective.  Being able to read texts accurately makes it possible to understand instructions, evaluate arguments, and make sense of explanations.  Acquaintance with literature and psychology makes us more worldly, or able to relate smoothly to a wider variety of personalities.  History and politics make us better citizens.  Math ability has not just obvious practical consumer uses, but also allows us to make sense of the more abstract aspects of the world, which are sometimes the only way to come to an accurate, nuanced understanding of why things are as they are.  Or in other words, science.  Science, especially at the more advanced levels in which we understand not just observable facts but begin to grasp the deeper reasons for things, ultimately forms the basis for engineering marvels as well as technocrats' policy decisions, which, in massive bureaucratic states such as we have now, are widespread.  Philosophy and logic can (or should) greatly improve the clarity with which we think about the world.  Mastering all of these subjects generally improves one's ability to understand and make oneself clear on various other subjects.  Education makes it possible for us to get stuff done in a complex world.  I could go on and on, of course.  I'm pretty sure that with more thought (or research) I will be able to pull together these various disparate advantages into a few general themes.  I'm sure eventually I'll sound themes of liberal education, that education in general broadens the mind, liberates us, and so forth.

The multi-faceted ways in which knowledge quite obviously improves us are precisely why schools were invented in the first place, and why people have continued to support the institution of education vigorously.  Indeed, I submit that without reference to the virtues imparted specifically by knowledge, one cannot begin to make sense of education as an institution.  This is why I say that the purpose or goal of education is, first and foremost--regardless of whatever other goals it might have--to cause students to have knowledge, or to improve their understanding.  This is the most basic, ur-explanation of the existence of education and hence schools.

Well, I'll leave it at that for now.  I'm not ready to write the essay just yet, if I ever will be.



On Robinson on Education

This very striking video has been circulating, and I'm inspired to reply to it.

First, let me say that the video design is very cool.  Moreover, Sir Ken Robinson is quite an excellent public speaker.  Finally, I agree with him entirely that standardization is the source of a lot of our educational difficulties.  But much of the rest of his message is irritatingly wrong.

The typical comment made about this video is that it represents a radical new proposal for what education should look like.  But there's very little that is new about it.  Indeed, many school teachers and education professors, I'd wager, find a lot to agree with here.  Many of the progressive "reform" proposals look like this.  The problem is that they endlessly run up against the facts of reality.  And I don't mean political reality, although that's fierce enough.  I mean the reality of what education really means and what it accomplishes.

So let's try to understand a few things that Robinson is trying to argue.  He basically makes the point that the education system was designed in the 19th century, and its methodology is stuck in the 19th century.  It needs to be updated, he says.  This, by itself, is a rhetorically powerful message, and an effective way to position his proposed reforms, especially for all those people out there who pride themselves on being cutting-edge in everything.

But what exactly, according to Robinson, is educationally backward and now wrong?  Several things, all dramatically denied (and quite amusingly illustrated):

1. Work hard, do well, get a college degree, and you will be rewarded with a good job.  (Our kids "don't believe that" and "they're right not to," says Sir Ken--why?  Because a college degree doesn't guarantee a good job.  I spy a fallacy.)
2. The "Enlightenment view of intelligence," that real intelligence consists in the ability to do deductive reasoning and knowledge of the classics, or what (he says) we think of as "academic ability."  (I think of academic ability as far more than this.  Also, I can't recall coming across either of these as strongly advocated for in my public school education, and these have if anything become even rarer in schools.)
3. There is not enough collaboration in schools.  (There sure was an annoyingly large amount of groupwork in the public schools I attended from 1973 to 1986, and now, I gather, such methods are still all the rage.  So I'm not convinced on this point.)
4. Schools are too standardized: organized on factory lines, scheduled, regimented, studying compartmentalized subjects, with people of the same ages graduating at the same time.  (Here is where I agree with him--except for his complaint about the separation into specialized subjects.)

There are three main points in the rest of his argument, as follows.  First, modern students are constantly being bombarded with stimulation, from computers, television, handhelds, and so forth.  This can be expected to reduce their level of attention.  But, second, this leads to a ridiculous over-diagnosis of and over-medication for ADHD.  This is supposed to be an epidemic, but it is really a fictitious one.  The problem at base is that kids are made to look at "boring stuff" (Sir Ken actually uses that phrase, to cheers from teenagers on YouTube), which they simply can't do unless they are "anesthetized" with ADHD drugs.  Third, an important element of intelligence is "divergent thinking," or the ability to think of different interpretations of questions and produce many different answers.  Schooling, for the reasons stated above, gradually kills off this ability, which is much stronger in kindergartners.  Our creativity is educated out of us.

What should we do instead?  At least in this speech, Robinson is annoyingly cryptic.  For instance, he says: "We should be waking them up to what is inside themselves" instead of "anesthetizing them."  (OK, so how do we do that?  What does this even mean?)  Also, we should get rid of the distinction between academic and non-academic, and between abstract, theoretical, and vocational subjects.  (But...these are reasonably coherent and useful distinctions.  You can't get rid of the distinction, in practice, without getting rid of one of the things distinguished.  I'm guessing Sir Ken is all for getting rid of the "boring stuff," which I suppose would include the allegedly soul-killing "academic" stuff.)  Also: "Most great learning happens in groups."  (Not in my experience.  I associate group learning with precisely the standardization and anti-creativity groupthink that Robinson was bemoaning earlier.  And supposing he's right and I'm wrong: how, exactly, should we harness groups to make "great learning" happen?)

Sir Ken is a charming character, but he is mostly wrong.  I think his views, far from being especially novel or radical, reflect the mainstream of educational theory.  This pattern of educational theorizing has been going on for generations now, and one of the things that people say again and again, ironically, is how innovative and cutting-edge they are when they reheat such stuff for the umpteenth time.

But, you might ask, if Sir Ken's theorizing is mostly old hat and mainstream among educational theorists, why aren't we living out an educational utopia of self-realizing, non-academic, collaborative kids who only go to college when they really want to?  Because, of course, the theory is impractical.  It is poetic justice that somebody who thinks that we should jettison the distinction between theory and practice would be impaled on that very distinction.  Another way to put it, however, is that it is incoherent--in some cases, with itself, and in some cases, with common but often unmentioned beliefs, also known as common sense.

I'm not sure that Sir Ken mentioned any actual academic subjects such as history or mathematics.  But if you are going to castigate academics as "boring stuff," then let's get clear: you are opposing history, mathematics, science, classical literature (OK, so that was mentioned), and various other subjects.  In the same vein, when clever would-be educational reformers say that we need to get rid of the orientation around memorizing facts, they rarely specify which facts they think students shouldn't learn.  As Sir Ken himself says in this talk, he doesn't want to lower standards--of course not, that's just obvious.  But if, in the limited amount of time we have to teach our children before they're all grown up, we start emphasizing vocational subjects, then we're talking about teaching less history, less mathematics, less science, etc.  De facto, standards regarding the amount of such learning are lowered.  You can't really argue with this; it's a hard, cold fact.  The practical consequence of less emphasis on academics, on "boring stuff," is to de-emphasize teaching knowledge that, it so happens, society in general naturally prizes.  You set yourself up in opposition to school boards and parents who understandably want to raise standards so that U.S. schools remain competitive with other countries.  But, you say, what's wrong with that?  They are simply mistaken about what our educational goals should be and so, sure, you do oppose them.  Perhaps; but, again, let's get clear: are you really in favor of reducing the amount of math and history that is learned in schools?  I'm sure there are some people who follow the consequences and say "yes" to this.  But most people are like Sir Ken, who says, smugly and cracking a joke, that he, too, is in favor of raising standards.  He, like so many educational theorists, wants to have his cake and eat it too: he doesn't want to teach so much "boring stuff" in school.  But he also doesn't want to lower standards.  He no doubt wants our kids to do just as well in math and science...just without all that studying, which unrealistically requires ADHD kids to pay attention.

Similarly, just as the U.S. is in the process of adopting national education standards--i.e., taking a bold leap toward ever-greater standardization--he states that he firmly opposes standardization.  Well, I do too, which is why I'm homeschooling my boys.  But in the same speech he says that we learn best by learning in groups, collaboratively.  It is hard (not impossible, but hard) to do that very much apart from a school system.  And what is the politically practical way to create a school system without the sort of standardization Robinson dislikes?  I doubt there is any.  The government cannot and should not do anything without being accountable to the people; and how can it be accountable without adopting some reasonable rules and standards against which its performance is measured?  Besides, quite famously, the U.S. educational system still (as of this writing) lacks a national educational curriculum, and in that respect is remarkably less standardized than other countries.  The point is that as long as government is in charge of education, there are natural pressures toward the standardization that Robinson--and so many, many other staunch supporters of public education and collaborative learning--bemoans.  Again, we can't have our cake and eat it too.  If we want public schools in modern democracies, we must face up to the fact that the quite proper requirements of democratic accountability will make our public school systems greatly standardized.

Not all students should get on the academic track and go to college--so opine both Professor Robinson, who earned his Ph.D. from the University of London, and a passel of other highly-degreed academic theorists.  Well, of course this is true, in general.  There are still many jobs that do not (and should not) require a college degree, and there will always be people who, for whatever reasons, won't be competitive enough, either as students or in the job market, to get jobs that do require college degrees.  It would simply be cruel, and economically illiterate, to advise everyone to try to get a college degree.  This should be obvious to anybody who has been on the "front lines" of teaching the sort of college freshmen who quickly drop out because they should never have been admitted in the first place.  So, given that this is a truism (at least under present circumstances), why does Robinson, like so many others, feel it necessary to attack a culture in which many people are getting college degrees?  What, exactly, is the point of doing that?

If I were being very charitable, I'd say that Sir Ken simply hated the thought of people making poor life choices, being overambitious, and paying for it in the form of high debt and dashed hopes.  But, having heard his speech, I think another explanation is more likely.  His contempt for the ladder to college comes in the context of a complaint that pushing education on children "alienates" them.  He says that he was taught as a school boy that by working hard, doing well, and going to college, he'd get a good job.  (It worked out that way for him, now didn't it?)  But "our kids don't believe that," he says.  And yet "our kids" are still going to college in record numbers, so if they don't believe it, they're acting irrationally.  Anyway, he seems to be saying that the reason you shouldn't go to college is simply that the academic track features "boring stuff" which will snuff out your creativity.  Yes, as amazing as it might sound, that is what he says in his speech.  He doesn't put it in so many words, but that's essentially what he says.

While Sir Ken and much of his head-nodding audience no doubt think that he, and they, are being wonderfully egalitarian and inclusive when they say and believe such things, really the opposite is true.

In the 21st century, just as much as in the 19th, a solid academic education, a liberal education, which features training in critical thinking and classical literature and all the rest of it, gives us an opportunity to improve our minds.  If you come out against academic education in the sense of liberal education, you really have to explain why you aren't also coming out in favor of keeping a lot of people relatively stupid.  Sir Ken seems to have forgotten that a good, indeed, academic education changes minds; it liberates them, which is where we get the phrase "liberal education" from.  It needn't kill creativity; it can just as easily channel it and strengthen it.  But more importantly--because understanding is more important than creativity, I will be so bold as to say--it develops our understanding of ourselves, our society, and the universe we live in.  Having such an understanding does not merely make us much more employable, which it certainly does; and of course being more equal in this respect was indeed the reason for the egalitarian ideal of universal public education.  But it also tends to make our minds and our lives, so to speak, broader or larger.  To pretend that liberal education does not have this effect, to dismiss academic education as an artifact of the 19th century, is to ignore precisely the sort of training that made Sir Ken the speaker and writer that he is today.

Robinson would, I think, have a reply to this.  In his speech he says it is wrong to equate "smart" with "academic" and "non-smart" with "non-academic."  So I seem to be trading on that outdated equation.  This sounds very egalitarian, and especially nice when he says that many people who are brilliant are convinced they are not, merely because they are not "book smart"--a lovely, gracious sentiment.  After all, everybody knows smart and wise people who have relatively little book learning--and people full of book learning who lack wisdom or good sense.  So, sure, that's true; education has its failures, like any institution, and sometimes it isn't really necessary at all. But whoever denied these things?  It hardly follows that academic education doesn't tend to make people smart.  Of course it does; if it didn't, people wouldn't value such education.  When people go to school for a long time, and work hard and conscientiously, they tend to become better readers, better writers, better at math, and in general, possessed of better minds, than they had before, or than they would have in the absence of their education.  And this is, of course, ultimately the reason why people get an academic education.  I know it's rather obvious to say this, but it is, after all, an important bit of common sense that Robinson is ignoring.


25 Replies to Maria Bustillos

In a recent essay in The Awl ("Wikipedia and the Death of the Expert"), Maria Bustillos commits a whole series of fallacies or plain mistakes and, unsurprisingly, comes to some quite wrong conclusions.  I don't have time to write anything like an essay in response, but I will offer up the following clues for Ms. Bustillos and those who are inclined to nod approvingly along with her essay:

1. First, may I point out that not everybody buys that Marshall McLuhan was all that.

2. The fact that Nature stood by its research report (which was not a peer-reviewed study) means nothing whatsoever.  If you'll actually read it and apply some scholarly or scientific standards, Britannica's response was devastating, and Nature's reply thereto was quite lame.

3. There has not yet been anything approaching a credible survey of the quality of Wikipedia's articles (at least, not to my knowledge).  Nobody has shown, in studies taken individually or in aggregate, that Wikipedia's articles are even nearly as reliable as a decent encyclopedia.

4. If you ask pretty much anybody in the humanities, you will learn that the general impression that people have about Wikipedia articles on these subjects is that they are appalling and not getting any better.

5. The "bogglingly complex and well-staffed system for dealing with errors and disputes on Wikipedia" is a pretentious yet brain-dead mob best likened to the boys of The Lord of the Flies.

6. It is trivial and glib to say that "Wikipedia is not perfect, but then no encyclopedia is perfect."  You might as well say that the Sistine Chapel is not perfect.  Yeah, that's true.

7. It is not, in fact, terribly significant that users can "look under the hood" of Wikipedia.  Except for Wikipedia's denizens and those unfortunate enough to be caught in the crosshairs of some zealous Wikipedians using the system to commit libel without repercussion, nobody really cares what goes on on Wikipedia's talk pages.

8. When it comes to actually controversial material, the only time that there is an "attempt to strike a fair balance of views" in Wikipedia-land is when two camps with approximately equal pull in the system line up on either side of an issue.  Otherwise, the Wikipedians with the greatest pull declare their view "the neutral point of view."  It wasn't always this way, but it has become that way all too often.

9. I too am opposed to experts exercising unwarranted authority.  But there is an enormous number of possibilities between a world dominated by unaccountable whimsical expert opinion and a world without any experts at all.  Failing to acknowledge this is just sloppiness.

10. If you thought that Wikipedia somehow meant the end of expertise, you'd be quite wrong.  I wrote an essay about that in Episteme.  (Moreover, in writing this, I was criticized for proving something obvious.)

11. The fact that Marshall McLuhan said stuff that presciently supported Wikipedia's more questionable epistemic underpinnings is not actually very impressive.

12. Jaron Lanier has a lot of very solid insight, and it is merely puzzling to dismiss him as a "snob" who believes in "individual genius and creativity."  There's quite a bit more to Lanier and "Digital Maoism" than that.  Besides, are individual genius and creativity now passé?  Hardly.

13. Clay Shirky isn't all that, either.

14. Being "post-linear" and "post-fact" is not "thrilling" or profound.  It's merely annoying and tiresome.

15. Since when did the Britannica somehow stand for guarantees of truth?  Whoever thought so?

16. There are, of course, vast realms between the extremes of "knowledge handed down by divine inspiration" and some dodgy "post-fact society."

17. The same society can't both be "post-fact" and thrive on "knowledge [that] is produced and constructed by argument," Shirky notwithstanding.  Arguments aim at truth, i.e., to be fact-stating, and truth is a requirement of knowledge.  You can't make sense of the virtues of dialectical knowledge-production without a robust notion of truth.

18. Anybody who talks glowingly about the elimination of facts, or any such thing, simply wants the world to be safe for the propagation of his ideology by familiar, manipulable, but ultimately irrational social forces.  No true liberal can be in favor of a society in which there are no generally-accepted, objective standards of truth, because then only illiberal forces will dominate discourse.

19. Expert opinion is devalued on Wikipedia, granted--and maybe also on talk radio and its TV spin-offs, and in some Internet conversations.  But where else in society has it been significantly devalued?

20. What does being a realist about expertise--i.e., one who believes it does exist, who believes that an expert's opinion is, on balance, more likely to be true than mine in areas of his expertise--have to do with individualism?  Surely it's more the romantic individualists who want to be unfettered by the requirements of reason, including the scientific methods and careful reasoning of experts, who are naturally inclined to devalue expertise per se.

21. Wikipedia does not in any plausible way stand for a brave new world in which competing arguments hold sway over some (fictional) monolithic expert opinion.  There have always been competing expert views; Wikipedia merely, sometimes, expresses those competing expert views when, from some professors, you might hear only one side.  Sometimes, Wikipedia doesn't even do that, because the easy politicization of collaborative text written without enforceable rules makes neutrality an elusive ideal.

22. Um, we have had the Internet for more than 20 years.

23. The writing down of knowledge is more participatory now, and that's a fine thing (or can be).  But knowledge itself is, always has been, and always will be an individual affair.  The recording of things that people take themselves to know, in Wikipedia or elsewhere, and findable via Google, does not magically transfer the epistemic fact of knowledge from the recorder even to those who happen to find the text, much less to all readers online.  Knowledge itself is considerably more difficult than that.

24. Ours is an individualistic age?  Don't make me laugh.  People who actually think for themselves--you know, real individualists--appear to me to be as rare as they ever have been.  It is a delight to meet the few who are out there, and one of the better features of the Internet is that it makes it easier to find them.  The West might be largely capitalist, but that doesn't stop us from being conformist, as any high school student could tell you.

25. The real world is considerably more complex than your narrative.


Looong interview with me by Dan Schneider in Cosmoetica

Off and on, for the last 2.5 years, I have been answering questions from poet and critic Dan Schneider, who has conducted a series of long, interesting interviews.  My interview, posted a few hours ago, is #27 in the series; Schneider himself gives the interview four stars (out of five).  That should tell you something about Schneider: he's the kind of guy who asks questions that take hours and hours to answer, and then has the audacity to rate the answers.  The questions cover my life, Wikipedia, Citizendium, philosophy, and my reactions to various idiosyncratic puzzles that Schneider has come up with.  If you were to ask why I agreed to do an interview that ended up being 40,000 words long, without any compensation or anything, I'd say that I didn't know it was going to be that long, and that Dan Schneider was very persistent.  And maybe this reveals just how vain I really am.


A common error of school lessons, or, why I'm homeschooling

Here is one reason why I'm homeschooling, and why I would probably never send my children to a school--even most private schools.

I was looking over some instructional material recently (something I do often these days).  It was a sample curriculum for teachers, explaining how, in one lesson, they should teach Kindergartners the principle that we say aloud one word for each word we see written down.  Immediately I had the thought that this would be a pointless waste of time for most children.  Many children would have already gotten that lesson, and it would be boring to go over it; and if any child hadn't gotten it, any amount of time spent trying to teach it explicitly would likely be wasted, because the principle in question is highly abstract.

Indeed, because the principle is so abstract, both categories of children--those who understood the principle implicitly, and those who hadn't--would probably be puzzled by the attempt to explain something so abstract explicitly, and then during lesson time, they would instead focus on other aspects of the words and sentences discussed.  In other words, they would simply take what was supposed to be a careful, by-the-hand explanation of some features of letters, words, and sentences, and instead use it as fodder for whatever random ruminations they have about letters, words, and sentences.  The result will be, on the one hand, a combination of dull head-nodding and robotic participation, and on the other hand, puzzlement about this or that aspect of the language on display which a student happens to notice but which is not explained.  The smarter (or luckier) students will learn much from the examples, regardless of what the ostensible lesson of the day is; the duller (or unluckier) students will not glean so much, and will simply find the whole exercise boring.

Evidently, the curriculum designers had carefully analyzed, conceptually, the steps that a child must have gone through in order to learn how to read.  The idea is that each step is, then, to be explicitly taught to children.  "After all," the designers must be reasoning to themselves, "what better way to guarantee that a child understands a principle than to try, creatively of course, to teach the principle?  Once a child has been exposed to all the different principles needed to learn language, they'll be fluent readers!"  The designers even evidently prise out principles that are used, but probably never grasped explicitly, by children--such as that there is one spoken word for each written word--and attempt to teach those explicitly.

You might think that I am criticizing the curriculum designers because they are having the teachers teach explicitly, that they are being "instructivists" instead of "constructivists."  But that would be wrong; my criticism has nothing to do with instructivism versus constructivism.  It has to do with the order in which things are taught and the folly of standardizing what can't effectively be standardized.

There is a similar and well-discussed problem with the now-old movement called the "New Math," in which very abstract principles of mathematics, some of which were heretofore not discussed until high school or college, were taught to young children.  The suggestion was that it would make children deep thinkers by teaching them about set theory and variables and other extremely abstract stuff in the early elementary grades.  The geniuses behind this movement evidently looked at the mathematics curriculum, noticed that, conceptually, it can be analyzed as Russell and Whitehead did in Principia Mathematica, and then had the brilliant idea that by teaching such principles to young children, one would give them a deeper understanding of mathematics.  A more boneheaded pedagogical notion can scarcely be conceived.  The entire movement, like that reading exercise I saw, is based on a very simple-minded error:

It is most efficient to teach children according to the order in which we, abstract-thinking adults, break down and analyze things logically.  Doing so ensures that children understand the matter deeply and critically, as we adults do; they cannot fail to comprehend if simple but powerful principles are introduced explicitly.

That, I'm saying, is wrong, but a lot of educationists seem to believe it and design our children's schooling based on it.

This is also what phonics workbooks and curricula often do--thereby giving phonics a bad name, when in fact as a method it is the best available.  You just don't have to get children to learn the abstract theory of phonics, of course, nor do you have to expect every child to learn the same phonics rules at the same time.

Anyway, in the grip of this widespread error, curriculum designers proceed to lay out scripts, in textbooks, workbooks, and lesson plans, that teachers and their charges are supposed to follow.  Students thereby systematically absorb the knowledge that the designers have broken into convenient, bite-sized chunks, presented in creative, fun, engaging ways--or that's how it's supposed to work.  But it doesn't work that way.  This kind of pedagogy obviously can work at the high school and college level, when the students are capable of abstract thought and gleaning abstract principles efficiently, but it obviously does not work for younger children.

As everyone (who has not been confused by college professors) knows perfectly well, children learn abstract principles gradually, by inferring from many instances.  Exactly when any given child happens to grasp a principle--when the light goes on--is completely unpredictable.  You simply cannot guarantee, for a classroom of students, that all of the lights will go on at once.  Now, I don't doubt that this can happen, and probably has happened, but only occasionally and with a really brilliant teacher and under highly contrived circumstances.  But if I am correct and children do learn different abstract principles at different times and under different circumstances, mainly by reflecting on many instances, then attempting to lead a whole class through by the hand, getting them all to grasp the principles at the same time and in the same order, is a fool's errand.

This is true not just of learning to read and mathematics, subjects which can be, after all, highly abstract.  Something similar is also quite true of more concrete subjects such as history and literature.  Different children fail to understand different pieces of vocabulary, all of which are essential to understanding a narrative.  Moreover, some children are ready to read a certain book, or are highly interested in it, while others aren't prepared (they don't have some basic concepts) or will never be interested in it.  What they need, of course, is individualized attention to the vocabulary of texts and individualized choices of texts.  The error (similar to the one identified above) seems to be:

We have a rough-and-ready idea of what children should read, and what topics they should study; we've got the book list and standards all mapped out.  The way to guarantee that students learn these texts and topics is quite simply to prescribe them and lead the children through them all at once, teaching the things they need to know.

Wrong.  As an advocate of liberal education, I of course agree that it's important to read certain books; I have nothing against book lists or even standards, per se.  But when books are best to read, and when certain standards are addressed, is, like it or not, a highly individual affair.  I'm merely pointing out a fact about the minds of children: they are ready to absorb things at different times, and the best way to teach them those books and topics differs greatly because abilities and proclivities develop differently.

When you get down to it, the problem really lies in a system that attempts to prescribe, centralize, or standardize the development of the human mind, which is necessarily an individual affair.  This, ultimately, is why we're homeschooling.  Most schools operate on the notion that the learning process can be scripted and applied to all equally, and that the script is best written by replicating some theoretician's abstract analysis of subject matters and skills, and then requiring all students to build up their mental contents by following the script.  Homeschooling allows the parent and child to work together to determine what the next best thing to learn is, and what the best way to learn it is.  It is grounded in the reality of what an individual boy or girl understands and appreciates right now, and builds logically on that.  Indeed, as a philosopher, I am very much a fan of system-building and abstract analysis, and as my own son's knowledge grows, I find myself thinking constantly about which part of the "edifice" should be constructed next.  In this way, a wholly individualized, ad hoc approach to education can still be fairly systematic.  But the thing that should be systematic is not the curriculum, but the child's mental development.


Should Science Communication Be Collaborative?

Plenary address at PCST-10 (10th conference of the International Network on Public Communication of Science and Technology), Malmö University, Malmö, Sweden, June 25, 2008.  A slightly abbreviated version of this was delivered.

I. The question, and some distinctions

Should science communication be collaborative?  There are two ways to understand this question, and so also two very different reactions to it.  One reaction is that science writing already is very collaborative.  Scientific articles are typically co-written by labs or by other collections of colleagues, because most experiments cannot be done by just one person; scientific discoveries are now typically made by several or many people cooperating.  So, of course science communication should be collaborative.

The other reaction understands me to be talking about collaboration in the wiki sense, or what I call radical collaboration.  And to that question there are typically mixed reactions.  On the one hand, what Wikipedia has done is very exciting, and if scientists can tap into the same sort of collaboration, perhaps great things will result.  On the other hand, scientists and scholars in general are very suspicious of the notion that anybody can edit our words.  Many scholars scoff at Wikipedia's motto—"you can edit this page"—as incontrovertible evidence that it cannot be very reliable.

The question I am interested in is actually the latter one: should science communication be radically collaborative?  So let me define this piece of jargon.  Collaboration is radical if it goes beyond two or more people merely working together.  In addition, the collaborators are self-selecting; they determine what they are going to do, and are not assigned their roles.  Finally, there is equal ownership or equal rights over the resulting work, or in other words, there is no "lead author."

So, should science communication be radically collaborative?  I cannot give you any simple answer to this question, but I do want to say that radical collaboration is part of our future, and will probably result in some amazing new scientific resources.  I'll be asking how big a part of our future it should be—as well as what we should not expect radical collaboration to do.

But first, it will be useful to draw a distinction between two kinds of scientific communication: original and derivative.  Original communication is aimed at advancing knowledge in the field with never-before-published findings, discoveries, first-hand accounts, survey data, theories, arguments, proofs, and so forth.  Typically, such communication takes the form of papers in peer-reviewed journals and online pre-print services, as well as conference presentations, posters, and some other things.  By contrast, derivative communication merely sums up what is already known, and takes the form of news and encyclopedia articles, textbooks, and popular science books and magazines.

I don't pretend that the distinction between original and derivative communication, if one examines it carefully, is easy to make.  One reason that it is difficult is that, whenever one reports scientific and other scholarly findings, analysis almost inevitably occurs; and sometimes, an analysis can be as interesting, challenging, and pathbreaking as the findings reported on.  So I imagine that such interesting analysis can be a borderline case between original and derivative communication.

There is another reason the distinction is difficult.  Frequently, we want to criticize certain published papers, which purport to present original findings, as being almost wholly derivative—they do not really advance the field at all.  I am told that this happens much more than it should, in scientific publishing.  So I admit that sometimes, purportedly original communication is actually derivative.

In fact, I will admit something more: it is far from clear what constitutes an advance in any given field.  If someone merely deduces something from previously published experimental findings, is that an advance?  Sometimes, sometimes not.  If someone does an experiment that is only trivially different from any of many already-published experiments, and obtains similar results, is that an advance?  Not necessarily, it seems to me.  If someone merely applies an established paradigm to a domain of knowledge for the first time in a published article, is that an advance?  Perhaps; but perhaps not, if the application was simply obvious.

So there are, I realize, several reasons to be critical of the distinction between original and derivative communication.  That admitted, I do think there are many perfectly clear cases of both original and derivative communication; in fact, I think most scientists and scholars would not have trouble classifying most communication in their fields as either original or derivative.  When Watson and Crick originally described the double helix, that was definitely original.  When Wikipedia, or a biology textbook, describes the double helix, that is definitely derivative.  And where we are uncertain, on philosophical grounds, about whether some finding really is original, at least we can tell whether the author is treating it as original.

I draw this distinction because I think that we might actually wish to give different answers to the question, "Should science communication be collaborative?" based on what type of science communication we're talking about.  In particular, I think it is very plausible that derivative science communication, like encyclopedia articles and science news reporting, is much more amenable to collaboration than original science communication.  I think, moreover, that in explaining this we will uncover some very interesting insights, or at least questions, about collaboration and perhaps even about science communication itself.

II. Derivative science communication

Let me begin with derivative science communication—again, things like encyclopedias, science news reporting, and textbooks.

Over the last few years, I have conversed with dozens of scholars and scientists about how to set up wikis or other collaborative knowledge communities.  There is a fascinating pattern to these conversations.  They go like this.  The scientist, impressed by the vast quantities of information in Wikipedia, tells me: "It is amazing what can be accomplished when many people come together, from around the world, to sum up what is known.  What would happen if we tried this in our field?  The resulting resource could be a central, authoritative clearing-house of information for everyone in the field, as well as for the general public.  So, what is the best way to set up 'a Wikipedia' in our field?"

This is an interesting question, but it is not the question that they end up answering.  Instead, the scientist goes off and consults with his colleagues, and then I hear this: "We have a couple of concerns.  First, we are concerned about lack of credit in the Wikipedia system.  The careers of scientists depend on names being on their publications.  So we want to make sure that authors are properly named and identified on articles.  Second, we are a little nervous about the idea that just anybody can edit anybody's articles.  We understand that it's important to be collaborative, but we think it is reasonable to nominate a lead author or lead reviewer for each article, and restrict participation to experts.  So, what do you think of that?"

I think that the scientist and his colleagues are confused in a fascinating way.  I try to be diplomatic when I say this, of course.  But the scientist seems not to realize two facts:

  1. If you name authors, you award lead authorship or editorship for articles, and you carefully restrict who may participate, then you are not building a collaborative community in anything like the radical sense.  You are merely using a wiki to replicate an older sort of collaboration, common in scientific writing.
  2. It is precisely the newer, more radical sort of collaboration that explains Wikipedia's success.  Wikipedia is successful in large part precisely because everyone feels empowered to edit any article.  If you disempower people, they won't show up.

As a result, there is no reason to think that the scientist's group will enjoy success anything like Wikipedia's, because they have actually rejected the Wikipedia model.

I am not saying that using wiki software to replicate old-fashioned systems won't work at all. In fact, in 2005, I helped set up such a system myself, called the Encyclopedia of Earth, and it seems to be working reasonably well so far—but, as far as I know, not much actual collaboration goes on, and a large proportion of the few thousand articles the project does have was imported from other sources.  Another scientist-run encyclopedia, Scholarpedia, has a somewhat similar set of policies, and has produced even fewer articles.  To be sure, the quality of the articles produced by these projects is good.  But it seems to me that the articles have little chance of ever fulfilling the original, high hopes of the project designers.  Many of them will never be incredibly detailed, balanced, authoritative, and a pleasure to read, which is what one might hope to get from a large group of experts coming together to work on a piece of text. Nor do such projects have any chance of achieving the depth of coverage that Wikipedia has.  In short, as far as I can tell, the most that projects like the Encyclopedia of Earth and Scholarpedia can hope to achieve is to produce a free version of old-fashioned sorts of encyclopedias.  I do not mean to say that there is something wrong with that.  I merely claim that they will not enjoy the advantages and potential that a radically collaborative project has—the very advantages and potential that made them imitate the Wikipedia model in the first place.

This, then, raises a question.  Do those scientists, who have rejected the Wikipedia model, have a legitimate complaint about it?  Or have they made a mistake in rejecting it?  I think they are partly right in rejecting the Wikipedia model, but also partly mistaken.  Let me clarify, first by explaining what they have gotten right.

Essentially, the scientists I've advised are quite right to reject the wide-open Wikipedia model, according to which anyone can alter any article regardless even of whether the person has logged into the system or is using his or her real name.  Wikipedia's rock-solid commitment to anonymous contribution explains many of its problems, in my opinion.  It explains why Wikipedia has so much vandalism and so many people editing abusively and in bad faith; it also explains why the Wikipedians have never been able to enforce some of their own basic principles, such as neutrality and politeness.  Scientists and scholars generally are very well justified in rejecting Wikipedia's anonymity policy.  I have argued for this thesis elsewhere,[1] and can't spend the time to rehearse the arguments now.

So that's why my scientist colleagues were right to reject the Wikipedia model.  But they are also mistaken to believe that articles must be signed by their authors, that they must have lead authors, and that participation should be restricted to experts.  They believe they must adopt these policies because, otherwise, the result will be unreliable or of poor quality.  They appear to think that, since all trustworthy encyclopedias in the past had signed articles, lead authors, and participation restricted to experts, there is no way to design an encyclopedia project that changes these features.

Now, I don't have time in this paper to argue for this point in detail, but I simply want to point to the example of the Citizendium, which is a wiki encyclopedia project I started a year and a half ago.  We do not sign articles; we do not have lead authors; and we open participation up to anyone who can make a positive contribution to the project.  But we do reserve a role for experts.  Despite the fact that we reject so much of the traditional model of content production, the quality of our articles is remarkably good, especially for such a young project.  The articles that have been approved by our expert editors, in particular, are extremely readable, as well as authoritative.  My point, then, is that it is possible to have a radically collaborative system that produces high-quality, credible content.  So if my scientist colleagues rejected radical collaboration because they thought the results would necessarily be of substandard quality, they were simply mistaken, as our experience with the Citizendium shows.  Moreover, I should point out that we are far more productive than Scholarpedia or the Encyclopedia of Earth; we have over 7,000 articles and are growing daily.

I can imagine a reply to this, however.  One might concede that the Citizendium's articles are, or will be, of reasonably good quality.  But will they be better than articles written by small groups of experts?  Not necessarily, of course.  Still, I would like to give you some general reasons to think that they could be better.  More precisely, I want to answer this question: is there something about radical collaboration per se that improves the quality of articles?  I think so.

Given enough time, an article that is written by a large and diverse set of authors—particularly if it is under the gentle guidance of experts—can be expected to be lengthier, broader in its coverage, and fairer in its presentation of issues than an article written by a single author or a few hand-chosen authors.  It will be longer, because many collaborators will compete with each other to expand the article.  It will be broader in its coverage, because the collaborators can often fill in gaps in exposition that others leave.  It will be fairer in its presentation of issues, because self-selecting collaborators in a very open project will tend to have a diversity of views, and they must compromise in order to work together at all.

In short, radical collaboration naturally pushes articles in the direction of being longer, more detailed, and fairer.  When the collaboration is gently guided—not led and controlled—by experts, and when the collaborators respect the experts and are willing to defer to them from time to time, when necessary, the resulting articles can be outstanding.  A number of the Citizendium's approved articles are outstanding for these very reasons.  We have quite a few outstanding unapproved articles as well.

So far, I have spoken only about one kind of derivative communication: encyclopedias.  But there are other kinds, as I said: journalism, textbooks, and popular science writing, for instance.  I could discuss each of these, but again I lack the time.  Instead, I want to make a general point about all of them.

Often, in expository writing, and even more in fiction writing, we derive value from the text precisely because it is personal, because it presents a single, unique point of view that we find compelling.  We find the writing interesting because we find an individual mind interesting.  Why are we fascinated by the minds of Stephen Hawking, Richard Feynman, Stephen Jay Gould, or Steven Pinker?  (And for that matter, why are so many famous scientists named Stephen?)  Well, it seems that, in works by these authors, the addition of another author might subtract from the value of the text.  Why is that?  Why is it that we find individual minds interesting?  It is not because their thoughts are more accurate or more exhaustive.  Rather, a text with a single author, especially one who is expressing his personality, is a window into another mind, and so it represents how we, each of us individually, might also want to think.  Only an individual seems able to serve as a credible model of how to think about the world; and, for whatever reason, we do take other thinkers as models.  Collective productions can convey useful information, of course, but by their nature they do not express the views of any one person.  They are largely useless as complex, full-bodied, human models after which we can pattern our own thinking.

But almost all encyclopedia articles,[2] most news articles, and some textbooks are used just to get information, not to serve as an entrée into an interesting perspective on the world.  Idiosyncrasy and personality are annoying when we merely want information.  When it's bare information we want, we don't care about persons—only facts.  The point, then, is that radical collaboration is suitable for gathering impersonal information.  That, we might say, is its proper function.

III. Original science communication

Up to this point, I've been talking about whether derivative science communication should be collaborative; the answer, in short, is yes and no.  So now let me talk about whether original science communication should be collaborative.  But first, I think we need to examine whether, and in what sense, original science communication, such as papers that express new research findings, can be radically collaborative.  Maybe a better question is this: to what extent can original science writing be collaborative?  We already know that scientific research can be collaborative in the old-fashioned sense, because it so often is, in fact.  What is the feasibility of making it more collaborative?

Applying certain aspects of the Wikipedia model—and even the Citizendium model—to original science communication strikes me as simply impossible.  For instance, if research papers were not signed, but instead were attributed to a nameless collective, the traditional motive of scholarship—personal glory, the honor of one's peers and of history—would disappear.  In short, I very much doubt scientists would participate at all in a research collective without definite personal credit.  We may not need prominent personal credit to create derivative works collaboratively, but original works are another matter entirely.  Indeed, the economics of the two kinds of communication are different, because our motives are different.  Many scholars and scientists will not write an encyclopedia article, news article, textbook, or popular science book without some compensation.  But the same people routinely publish much more difficult research papers and monographs with no monetary compensation.  The glory and honor of discovery is the motivation for such work.  Wiki work is just not that glorious, or at least, not in the same way.

Another aspect of radical collaboration is open authorship: the authors select themselves.  This again seems impossible, or very difficult at best, for original science communication.  For one thing, original communication expresses original thoughts, and such thoughts tend to be controversial and difficult.  To open up authorship of original work very wide would hence permit the participation of persons who disagree with the conclusions or who don't even understand them.  But if participation is limited to like-minded scholars who understand the research, the collaboration can no longer be called "radical."  It's just a variant on old-fashioned collaboration.

In fact, beyond issues of feasibility or difficulty, I detect an incoherence in the very idea that original research might be radically collaborative.  The act of publishing a research paper does more than merely convey some findings; it also stakes a claim, that is, it has the force or effect of attaching some definite name or names to the findings.  To make original science communication radically collaborative would be to nullify the act of taking credit.  If we were to list as co-authors people who are not responsible for the research, the author list would no longer be honoring those people actually responsible for the finding.  It would just be a list of people who happened to work on the paper that summed up the research, even if some of the people listed had none of the thoughts or conclusions contained in the paper.

One might say that open collaboration on communication of original research would help to elaborate the full range of arguments and analysis related to the research.  But that already happens, I suppose, in the give-and-take of scientific and scholarly conversation that occurs before and after a paper is published.  Indeed, it has often been observed that science and scholarship generally are massively collaborative in the sense that researchers build on each other's work; it was Newton who pointed this out when he said that he saw farther only because he stood on the shoulders of giants.  I have no doubt that new Internet methods can and already do facilitate this very old sort of scientific collaboration.  But I see no need, in addition, to permit others, who had nothing to do with some research, to participate in the writing itself of original research findings.

That said, there is at least one way that original science communication might be amenable to radical collaboration: I mean what has been called "open research" and "open science."  As I understand it, this involves inviting others to participate actively in a study—not merely collaborating on the writing, but actually helping to design and perform the experiments, surveys, and so forth.  This is something I know very little about, and I will not embarrass myself by pretending to know more than I do.  An example of such research, perhaps, was the lightning-fast investigation in multiple labs that identified the avian flu virus.  Such research can be somewhat open and self-selecting.  So perhaps that is one sense, and a very interesting sense, in which original science communication can be radically collaborative.  I'm afraid I can't presume to say anything else about that, though.

IV. Conclusion

So, to sum up, should scientific communication be collaborative?  I've made it clear, I hope, that it depends on the type of communication.  Derivative communication that merely aims to express impersonal information can, and in some cases perhaps should, become radically collaborative; the Citizendium system shows how.  But when a specific personality, or point of view, forms an important part of the value of the communication, collaboration is denaturing and devaluing.[3] And original scientific communication should be collaborative only to the extent that the research it reports has been collaborative.

In the interests of keeping this paper short and provocative, I have not answered many important questions.  Perhaps the most important unanswered question is: what constitutes a contribution to knowledge?  Also, I said that some derivative communication should not be collaborative, because its value depends on its coming from an individual mind; I said that the productions of individual minds sometimes have some special value because they "model" how to think about the world.  What do I mean by that, and what is valuable about it?  I also asserted that scientists would not participate in research programs without the expectation of credit.  That seems obvious, but perhaps I should have explained why not; that is really a core issue.  Finally, I only barely glanced at the prospects of open research, or open science.  What is such research, really?  Is it radically collaborative in anything like the wiki sense, or is it merely the practice of making our research available to others for free, and talking a lot?

Without having given clearer answers to these fundamental questions, I can't say I have adequately discussed whether science communication should be collaborative.  Clearly, this is a big question, with many ramifications.  But I do hope I have at least introduced a few of the salient issues and given you something interesting to think and talk about.


[1] "A Defense of Modest Real Names Requirements," delivered at the Harvard Journal of Law & Technology 13th Annual Symposium: Altered Identities, Harvard University, Cambridge, Massachusetts, March 13, 2008.  Available at http://www.larrysanger.org/realnames.html

[2] Diderot's Encyclopédie and the 11th edition of the Encyclopaedia Britannica could be notable exceptions.  Those encyclopedias, perhaps the most celebrated of all encyclopedia editions, both featured articles by famous contemporary thinkers who expressed their own idiosyncratic views.  To be sure, some people reject all notions of objectivity and neutrality and prefer the openly personal and idiosyncratic, even in encyclopedias.  But this is not the norm, or at least not the ideal, for reference works today.

[3] Lawrence Lessig's attempt to make a wiki out of the second version of his book Code (called Code 2.0) demonstrates the danger of watering down the ideas and voice of an interesting person.


A Defense of Modest Real Name Requirements

Lunchtime speech at the Harvard Journal of Law & Technology 13th Annual Symposium: Altered Identities, Harvard University, Cambridge, Massachusetts, March 13, 2008.

I. Introduction

Let me say up front, for the benefit of privacy advocates, that I agree entirely that it is possible to have an interesting discussion and productive collaborative effort among anonymous contributors, and I support the right to anonymity online, as a general rule. But, as I'm going to argue, such a right need not entail a right to be anonymous in every community online. After all, surely people also have the right to participate in communities in which real-world identities are required of all participants—that is, they have a right to join voluntary organizations in which everyone knows who everyone else really is. There are actually quite a few such communities online, although they tend to be academic communities.

Before I introduce my thesis, I want to distinguish two claims regarding anonymity: first, there is the claim that personal information should be available to the administrators of a website, but not necessarily publicly; and second, there's the claim that real names should appear publicly on one's contributions. I will be arguing for the latter claim, that real names should appear publicly.

But actually, I would like to put my thesis not in terms of how real names should appear, but instead in terms of what online communities are justified in requiring. Specifically in online knowledge communities—that is, Internet groups that are working to create publicly-accessible compendia of knowledge—organizers are justified in requiring that contributors use their own names, not pseudonyms. I maintain that if you want to log in and contribute to the world’s knowledge as part of an open, community project, it’s very reasonable to require that you use your real name. I don't want, right now, to make the more dramatic claim that we should require real names in online knowledge communities—I am saying merely that it is justified or warranted to do so.

Many Internet types would not give even this modest thesis a serious hearing. Most people who spend any time in online communities regard anonymity, or pseudonymity, as a right with very few exceptions. To these people, my love of real names makes me anathema. It is extremely unhip of me to suggest that people be required to use their real names in any online community. But since I have never been or aspired to be hip, that’s no great loss to me.

What I want to do in this talk is first to introduce the notion of an Internet knowledge community, and discuss how different types handle anonymity as a matter of policy. Then I will address some of the main arguments in favor of online anonymity. Finally, I will offer two arguments that it is justified to require real names for membership in online knowledge communities.

II. Some current practices in online knowledge communities

First, let me give you a definition for a phrase I'll be using throughout this talk. By online knowledge community I mean any group of people that gets organized via the Internet to create together what at least purports to be reliable information, or knowledge. And I distinguish a community that purports to create reliable information from a community that is merely engaging in conversation or mutual entertainment. So this excludes social networking sites like MySpace and Facebook, as well as most blogs, forums, and mailing lists. Digg.com might be a borderline case; calling that link-rating website a “knowledge community” strains the definition, because I’m not sure that many people really purport to be passing out knowledge when they vote for a Web link. They’re merely stating their opinion about what they find interesting; that’s something different from offering up knowledge, it seems to me.

I want to give you a lot of examples of online knowledge communities, because I want to make a point. The first example that comes to mind, I suppose, would be Wikipedia, but there are also many other online encyclopedia projects, such as the Citizendium, Scholarpedia, and Conservapedia, among many others (and these are only in English, of course). Then there are many single-subject encyclopedia projects: in philosophy, the Stanford Encyclopedia of Philosophy and the Internet Encyclopedia of Philosophy; in biology, there is now the Encyclopedia of Life; in mathematics, there is MathWorld; in the Earth sciences, there is the Encyclopedia of Earth; and these are only a few examples.

But that’s just the encyclopedia projects. There are many other kinds of online knowledge communities. Another sort would be the Peer to Patent Project, started by NYU law professor Beth Noveck. Perhaps you could also consider the various pre-print, or e-print, services to be online knowledge communities—most notably arXiv, which has hundreds of thousands of papers in various scientific disciplines. This might be straining the definition, however. If you consider a pre-print service an online knowledge community, then perhaps you should consider any electronic journal such a community; indeed, perhaps we should, but I won’t argue the point. Anyway, I could go on multiplying examples, but I think it would get tedious, so I’ll stop there.

The examples I've given so far have been mostly academic and professional communities. And here I finally come to my point: out of all the projects named, the only ones in which real names are not required, or at least not strongly encouraged, are Wikipedia and Conservapedia. This, of course, proves only that when academics and professionals get online, they tend to use their real names, which shouldn’t be surprising to anyone.

But there are actually quite a few other online knowledge communities that don’t require the use of real names. I have contributed a fair bit to one of them, a very useful database of Irish traditional music—it has information about tunes and recordings—called TheSession.org. There are many other hobbyist communities that don’t require real names; just think of all the communities about games and fan fiction. And then there are all the communities that support open source software projects. I doubt a single one of those requires the use of real names.

I haven't had time to do (or even find) a formal study of this, but I suspect that, as a general rule, academic projects either require or strongly encourage real names, while most other online knowledge communities do not. This should be no great surprise. Academics are used to publishing under their real names, mostly for professional reasons; with the advent of the Internet, many other people are contributing to the world's knowledge, in various Internet projects, but they have no professional motivation to use their real names. For some people--for example, a lot of Wikipedians--privacy concerns far outweigh any personal benefit they might get from putting their names on their contributions.

So, how should we think about this? Is it justifiable to demand anonymity in every online community, on grounds of privacy, or any other grounds? I don't think so.

III. Some arguments for anonymity

Next, let's consider some arguments for anonymity as a policy, and briefly outline some replies to them. By no means, of course, do I claim to have the last word here. I know I am going very quickly over some very complex issues.

A. The argument from the right to privacy. The most important, and I think most persuasive, argument that anonymous or pseudonymous contribution should be permitted in online communities is that this protects our right to privacy. The use of identities different from one’s real-world identity helps protect us against the harvesting of data by governments and corporations. Especially in open Internet projects, a sufficiently sophisticated search can produce vast amounts of data about what topics people are interested in, and much other information potentially of interest to one's employers, corporate competitors, criminals, government investigators, and marketers. This is a major and, I think, growing concern about Google, as well as about many online communities like MySpace and Facebook. Like many people, I share those concerns, even though personally my life is an open book online--maybe too open. Still, I think privacy is an important right.

But I want to draw a crucial distinction here. There is a difference between, on the one hand, using a search engine, or sharing messages, pictures, music, and video with one's friends and family, and, on the other hand, adding to a database that is specifically intended to be consulted by the world as a knowledge reference. The difference is very obvious if you think about it: there is simply no need to make your name or other information publicly available in order to do the former activities. When you are contributing to YouTube, for example, you can achieve your aims, and others can enjoy your productions, regardless of the connection or lack thereof between your online persona and your real-world identity. So, in those contexts, the connection between your persona and your identity should be strictly up to you. Likewise, whether you let a certain other person, or a marketer, see your Facebook profile should be strictly up to you. These online services have become extensions of our real lives, the details of which have been, and generally should remain, private, if we want them to be.

We have a clear interest in controlling information about our private lives; we have that interest, of course, because it can be so easily abused, but also because we want to maintain our own reputations without having the harsh glare of public knowledge shone on everything we do. Lack of privacy changes how we behave, and indeed we might behave more authentically, and we might have more to offer our friends and family, if we can be sure that our behavior is not on display to the entire world.

I've tried to explain why I support online privacy rights in most contexts. But I say that there is a large difference between social networking communities like MySpace and Facebook, on the one hand, and online knowledge communities like Wikipedia and the Citizendium, on the other. When you contribute to the latter sort of community, the public does have a strong interest in knowing your name and identity. This is something I will come back to in the next part of this talk, when I give some positive arguments for real names requirements.

B. The argument from the freedom of speech. But back to the arguments for anonymity. A second argument has it that not having to reveal who you are strengthens the freedom of speech. If you can speak out against the government, or your employer, or other powerful or potentially threatening entities, without fear of repercussions, that allows you to reveal the full truth in all its ugliness. This is, of course, the classic libertarian argument for anonymous speech.

The most effective reply to this is to observe that, in general, there is no reason that online collaborative communities should serve as a platform for people who want to publish without personal repercussions. There are and will be many other platforms available for that. Indeed, specific online services, such as WikiLeaks, have been set up for anonymous free speech. Long may they flourish. Moreover, part of the beauty of the classical right to freedom of speech is that it provides maximum transparency. Anyone can say anything—but then, anyone else can put the first person’s remarks in context by (correctly) characterizing that person. Maximum transparency is the best way to secure the benefits of free speech.

I suspect it is a little disingenuous to suggest that anonymous speech is generally conducive to the truth in online knowledge communities. The WikiScanner, and the various mini-scandals it unearthed, actually help to illustrate this point. They confirmed something that was perfectly obvious to anyone familiar with the Wikipedia system: that persons with a vested interest in a topic can and do make anonymous edits to information about that topic on Wikipedia. They are not telling truth to power under the cover of anonymity. Rather, they are using the cover of anonymity to obscure the truth. They would behave differently, and would be held to much more rigorous standards, if their identities were known. I want to suggest, as I'll elaborate later, that full transparency--including knowledge of contributor identities--is actually more truth-conducive than a policy permitting anonymity.

IV. Two reasons for real name requirements

Now I am going to shift gears, and advance two positive arguments for requiring real names in online knowledge communities. One argument is political: it is that communities are better governed if their members are identified by name. The other argument is epistemological: it is that the content created by an "identified" community will be more reliable than content created by an "anonymous" community.

A. The argument from enforcement. The first argument is one that I think you legal theorists might be able to sink your teeth into. Let me present it in a very abstract way first, and then give an example. Consider first that if you cannot identify a person who breaks a rule, it is impossible to punish that person, or enforce the rule in that case. Forgive me for getting metaphysical on you, but the sort of entity that is punished is a person. If you can't identify a specific person to punish, you obviously can't carry out the punishment. This is the case not just if you can't capture the perpetrator, but also if you have captured him but you can't prove that he really is the perpetrator. That's all obvious. But it's also the case that you can't carry out the punishment if the perpetrator is clearly identifiable in one disguise, but then changes to another disguise.

So far so good, I hope. Next, consider a principle that I understand is sometimes advanced in jurisprudence, which is that there is no law, in fact, unless it is effectively enforced. A law or rule on the books that is constantly broken and never enforced is not really, in some full-blooded, important sense, a law. For example, the 55-mile-per-hour speed limit might not be a full-blooded rule, since you can drive 56 miles per hour in a 55-mile-per-hour zone and never get a ticket. Obviously I am not denying that the rule is on the books; obviously it is. I am merely saying that the words on the books lack the force of law.

Now suppose, if you will, that in your community, your worst offenders can only rarely be effectively identified. You have to go to superhuman lengths to be able to identify them. In that case, you've got no way to enforce your rules: your hands are tied by your failure to identify your perpetrators effectively. But then, if you cannot enforce your rules, your rules lack the force of law. In a real sense, your community lacks rules.

I want to suggest that the situation I've just described abstractly is pretty close to the situation that Wikipedia and some other online communities are in. On Wikipedia, you don't have to sign in to make any edits. Or, if you want to sign in, you can make up whatever sort of nonsense name you like; you don't have to supply a working e-mail address, and you can make as many Wikipedia usernames as your twisted heart desires. Of course, no one ever asks what your real name is. In fact, Wikipedia has a rule according to which you can be punished for revealing the real identity behind a pseudonym.

This all means that there is no effective way to identify many rulebreakers. Now, there is, of course, a way to identify what IP address a rulebreaker uses, but as anyone who knows about IP addresses knows, you can't match an IP address uniquely to a person. Sometimes, many people are using the same address; sometimes, one person is constantly bouncing around a range of addresses, and sharing that range with other people. So there is often collateral damage when you block the IP address, or a range of addresses, of a perpetrator. Besides, anyone with the slightest bit of Internet sophistication can quickly find out how to get around this problem, by using an anonymizer or proxy.
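To make the collateral-damage point concrete, here is a minimal sketch—in Python, with invented addresses and names, not Wikipedia's actual blocking machinery—of why IP blocking is such a blunt instrument: one address can stand for many people, and one person can appear under many addresses.

```python
# Hypothetical illustration only: the IP-to-person relation is many-to-many.
# All addresses and names below are invented for the example.

# Several people share one (NAT or institutional) address, while one
# vandal keeps reappearing from different addresses or proxies.
sessions = [
    ("198.51.100.7", "good-faith user A"),
    ("198.51.100.7", "good-faith user B"),
    ("198.51.100.7", "vandal"),
    ("203.0.113.20", "vandal"),
    ("203.0.113.21", "vandal"),
]

# Block the one address at which the vandal was first caught.
blocked = {"198.51.100.7"}

for ip, person in sessions:
    status = "BLOCKED" if ip in blocked else "allowed"
    print(f"{ip:>15}  {person:<18}  {status}")

# The output shows the problem: two innocent users are blocked as
# collateral damage, while the same vandal edits freely from new addresses.
```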

That there is no effective way to identify some rulebreakers is a significant practical problem on Wikipedia, in fact. Wikipedians complain often and bitterly about anonymous, long-term, motivated trouble-makers who use what are called "sockpuppets"--that is, several accounts controlled by the same person. Indeed, this is Wikipedia's most serious problem, from the point of view of true-believer Wikipedians.

In this way, Wikipedia lacks enforceable rules because it permits anonymity. I think it's a serious problem that it lacks enforceable rules. Here's one way to explain why. Suppose that we say that polities are defined by their rules. If that is the case, then Wikipedia is not a true polity. In fact, no online community can be a polity if it permits anonymous participation. But why care about being a polity? For one thing, Wikipedia and other online communities, which typically permit anonymity, are sometimes characterized as a sort of democratic revolution. On my view, this is an abuse of the term "democratic." How can something be democratic if it isn't even a polity?

There is another, shorter argument that anonymous communities cannot be democratic. First, observe that if it is not necessary to confirm a person’s identity, the person may vote multiple times in a system in which voting takes place. Moreover, if the identities of persons engaged in community deliberation need not be known, one person may create the appearance of a groundswell of support for a view simply by posting a lot of comments using different identities. But, for voting and deliberation to be fair and democratic, each person’s vote, and voice, must count for just one. Therefore, a system that does not take cognizance of identities is inherently unfair and undemocratic; in short, anonymous communities cannot be fair and democratic.
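The multiple-voting point can be illustrated with another small sketch (Python again; the accounts, people, and votes are all invented). Tallying by account lets one person manufacture a majority; tallying by verified person restores one vote per voice.

```python
# Hypothetical illustration: counting votes by account vs. by person.
# Each row is (account, actual person behind it, vote).
votes = [
    ("alice_real",  "Alice", "yes"),
    ("bob_real",    "Bob",   "no"),
    # One person, three pseudonymous accounts ("sockpuppets"):
    ("truthseeker", "Carol", "no"),
    ("fairminded1", "Carol", "no"),
    ("neutral_obs", "Carol", "no"),
]

def tally(key_index):
    """Count each distinct key (account or person) exactly once."""
    seen, counts = set(), {"yes": 0, "no": 0}
    for row in votes:
        key, choice = row[key_index], row[2]
        if key not in seen:
            seen.add(key)
            counts[choice] += 1
    return counts

print("By account:", tally(0))  # {'yes': 1, 'no': 4} -- Carol counts three times
print("By person: ", tally(1))  # {'yes': 1, 'no': 2} -- one voice, one vote
```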

But why should we care about our online communities being fair, democratic polities? Perhaps their governance is relatively unimportant. When it comes to whether a link is placed on the front page of Digg.com, or what videos are highly rated on YouTube, does it really matter if it's not all quite on the up-and-up?

Maybe not. I am not going to argue about that now. But matters are very different, I want to maintain, with online knowledge communities, which is the subject of this paper. Knowledge communities, I think, must be operated as fair, democratic, and mature polities, if they are open to all sorts of contributors and they purport to contain reliable information that can be used as reference material for the world. It makes a difference, I claim, if an online community purports to collect knowledge, and not just talk and share media among friends and family.

Why does it matter if a community collects knowledge? First, it's because knowledge is important; we use information to make important decisions, so it is important that our information be reliable. If you are not convinced, consider that many people now believe that false information caused the United States to go to war in Iraq. Consider how many innocent people are in prison because of bad information. These days, two top issues for scientists are also political issues: global warming and teaching evolution in the schools. Scientists are very concerned that persons in politically-powerful positions do not have sufficient regard for well-established knowledge. Whatever you think of these specific cases, all of which are politically charged, it seems clear enough that there is no shortage of examples that demonstrate that we do, as a society, care very much that our information be reliable--that we do not merely have random unjustified beliefs, but that we know.

The trouble, of course, is that as a society--especially as a global Internet society--we do not all agree on what we know. Therefore, when we come together online from across the globe to create collections of what we call knowledge, we need fair, sensible ways to settle our disputes. That means we must have rules; so we must have a mature polity that can successfully enforce rules. And, to come back to the point, that means we must identify the members of these polities; we are well justified in disallowing anonymous membership.

B. The epistemological argument. Finally, I want to introduce briefly an epistemological argument for real names requirements, which is distinguishable from the argument I just gave, even though that argument had epistemological elements too. Now I want to argue that using our real identities not only makes a polity possible but also improves the reliability of the information that the community outputs.

Perhaps this is not obvious. As I said earlier, some people maintain that knowledge is improved when people are free to "speak truth to power" from a position of anonymity. But, as I said, I suspect that in online communities like Wikipedia, a position of anonymity is used to obscure the truth more than to reveal it. Now, in all honesty, I have to admit that this might be rather too glib. After all, most anonymous contributors to Wikipedia aren't trying to reveal controversial truths, or to cover them up; they are simply adding information, which is more or less correct. Their anonymity doesn't shield wrongdoing; it merely protects their privacy. So why not say that the vast quantity of information found in Wikipedia--which is very useful to a lot of people--is directly the result of Wikipedia's policy of anonymity? In that case, anonymity actually increases our knowledge--at least the sheer quantity of our knowledge.

Can I refute that argument? I'm not sure I can, nor would I want to if it is correct. The point being made is empirical, and I don't know what the facts are. If anonymity does in fact have that effect, hooray for anonymity. I merely want to make a few relevant points.

I think that in the next five to ten years, we will see whether huge numbers of people are also willing to come together to work under their own real names. I don't pretend to be unbiased on this point, but I think they will be. I don't think that anonymity is badly wanted or needed by the majority of the potential contributors to online knowledge communities in general. Having observed these communities for about fifteen years, my impression is that people get involved because they love the sense of excitement they get from being part of a growing, productive community. My guess is that anonymity is pretty much irrelevant to that excitement.

Regardless of the role of anonymity in the growth of online resources, a real names policy has a whole list of specific epistemological benefits that a policy of anonymity cannot secure. Consider a few such benefits.

First, the author of a piece of work will be more careful if she puts her real name on it than if she does not: her real-world reputation is on the line. And, I suppose, being more careful will lead to more reliable information. This is quickly stated, and very plausible, but it is a very important benefit.

Second, a community all of whose members use their real names will, as a whole, have a better reputation than one that is dominated by pseudonymous people. We naturally trust those who are willing to tell us who they are. As a result, the community naturally has a reputation to live up to. There are no similar expectations of good quality from an anonymous community, and hence no high expectations to live up to.

Third, it is much harder for partisans, PR people, and others to use the system to cover up unpleasant facts, or to present a one-sided view of a complex situation. When real names are used, the community can require the subjects of biographies and the principals of organizations to act as informants. The Citizendium does this. Wikipedia can't, because this would require that people identify themselves.

V. Conclusion

I'm going to wrap up now. I've covered a lot of ground and I went over some things rather fast, so here is a summary.

I began by defining "online knowledge community," and showing with a number of examples that online academic communities tend to use (or strongly emphasize the use of) real names. Other sorts of online communities generally permit or encourage anonymity, because there is no career benefit to being identified, while there is a definite interest in privacy. I considered two main arguments (though I know there are others) for permitting anonymity as a matter of policy. One argument starts from the premise that we have an interest in keeping our personal lives private; I admit that premise, but I say that, when it comes to knowledge communities in particular, society has an overriding interest in knowing your identity. Another argument is a version of the classical libertarian argument for anonymous speech. I grant that society needs venues in which anonymous speech can take place; I simply deny that all online knowledge communities need play that role. Besides, anonymity is probably used more as a way to burnish public images than it is to "speak truth to power."

In the second half of the paper, I considered two main arguments (though again, there are others) for requiring real names as a matter of policy in online knowledge communities. In the first, I argued that rules cannot be effectively enforced when rule-breakers cannot be identified. This is a problem, because we would like online knowledge communities to be fair and democratic polities; but when community members cannot be uniquely identified, the principle of one person, one voice, one vote is violated. Then I argued that the requirement of real names actually increases the reliability of a community's output. Since we want the output of knowledge communities, in particular, to be maximally reliable, we are well justified in requiring real names in such communities.


A compromise position that I favor would involve requiring users’ real names to be visible to other contributors; allowing them to mask their real names to non-contributors; and legally forbidding the use of the community's database to mine personal information. This compromise does not settle the theoretical issue discussed in the arguments above, of course.


How the Internet Is Changing What We (Think We) Know

A speech for "the locals"--Upper Arlington Public Library, January 23, 2008.  This is a more general discussion; the Citizendium is not mentioned once.

Information is easy, knowledge is difficult

There is a mind-boggling amount of information online. And this is a wonderful thing. I’m serious about that. A good search engine is like an oracle: you can ask it any question you like and be sure to get an answer. The answer might be exactly what you’re looking for, or it might be, well, oracular—difficult to interpret and possibly incorrect. I draw the usual distinction between knowledge and information. You can find information online very easily. Knowledge is another matter altogether.

Now, this is not something new about the Internet. It’s a basic feature of human life that while information is easy, knowledge is difficult. There has never been a shortage of mere data and opinion in human life. It’s a very old observation that the most ignorant people are usually full of opinions, while many of the most knowledgeable people are full of doubt. Other people are certainly sources of knowledge, but they are also sources of half-truths, confusion, misinformation, and lies. If we simply want information from others, it is easy to get; if we want knowledge in any strong sense of the word, it is very difficult. Besides that, long before the Internet, there was far more to read, far more television shows and movies to watch, than anyone could ever absorb in many lifetimes. Before the Internet, we were already awash in information. Wading through all that information in search of some hard knowledge was very difficult indeed.

Too Much Information

The Internet is making this old and difficult problem even worse. If we had an abundance of information in, say, the 1970s, the Internet has created a superabundance of information today. Out of curiosity, I looked up some numbers. According to one estimate, there are now over 1.2 billion people online; Netcraft estimated that there are over 100 million websites, and about half of those are active. And those estimates come from over a year ago.

With that many people, and that many active websites, clearly there is, as I say, a superabundance of information. Nielsen ratings of Internet search showed some six billion searches performed in December 2007 alone—at that rate, about 72 billion in a year! Google, by the way, was responsible for two thirds of those searches. Now, you might have heard these numbers before; I don’t mean to be telling you news. But I want to worry out loud about a consequence of this situation.

My worry is that the superabundance of information is devaluing knowledge. The more that information piles up on Internet servers around the world, and the easier it becomes to find, the less distinctive and attractive knowledge will appear by comparison. I fear that the Internet has already greatly weakened our sense of what is distinctive about knowledge, and why it is worth seeking. I know this might seem rather abstract, and not something worth getting worked up about. Why, really, should you care?

It used to be that in order to learn some specific fact, like the population of France, you had to crack open a big thick paper encyclopedia or other reference book. One of the great things about the Internet is that that sort of searching—for very specific, commonly-sought-after facts—has become dead simple. Even more, there are many facts one can now find online that, in the past, would have taken a trip to the local library to find. The point is that the superabundance of information has actually made it remarkably easy to get information. Today, it’s easy not just to get some information about something or other, it’s easy to get boatloads of information about very specific questions and topics we’re interested in.

For all that, knowledge is, I’m afraid, not getting much easier. To be quite sure of an answer still requires comparing multiple sources, critical thinking, sometimes a knowledge of statistics and mathematics, and a careful attention to detail when it comes to understanding texts. In short, knowledge still requires hard thought. Sure, technology is a great time-saver in various ways; it has certainly made research easier, and it will become only more so. But the actual mental work that results in knowledge of a topic cannot be made much easier, simply because no one else can do your thinking for you. So while information becomes nearly instantaneous and dead simple, knowledge is looking like a doddering old uncle.

What do I mean by that? Well, you can find tons of opinions online, ready-made, but there is an interesting feature of a lot of the information and opinion you find online: not only is it easy to find, it is easy to digest. Just think of the different types of pages that a typical Web search turns up: news articles, which summarize events for the average person; blogs, which are usually very brief; Web forums, which only rarely go into depth; and encyclopedia articles and other mere summaries of topics. Of course, there are also very good websites, as well as the “Deep Web,” which contains things like books and journal articles and white papers; but most people do not use those other resources. The point is that most of the stuff that you typically find on the Internet is pretty lightweight. It’s Info Lite.

“Right,” you say, “what’s wrong with that? Great taste, less filling!” Sure, I like easy, entertaining information as much as the next guy. But what’s wrong with it is that it makes the hard work of knowledge much less appealing by comparison. For example, if you are coming to grips with what we should do about global warming, or illegal immigration, or some other very complex issue, you must escape the allure of all the dramatic and entertaining news articles and blog posts on these subjects. Instead, you must be motivated to wade through a lot of far drier material. The sources that are more likely to help you in your quest for knowledge look very boring by comparison. My point here is that the superabundance of information devalues knowledge, because the means of solid knowledge are decidedly more difficult and less sexy than the Info Lite that it is so easy to find online.

There is another way that the superabundance of information makes knowledge more difficult. It is that, for all the terabytes upon terabytes of information on the Internet, society does not employ many more (and possibly employs fewer) editors than it did before the advent of the Internet. When you go to post something on a blog or a Web forum, there isn’t someone called an editor who decides to “publish” your comment. The Internet is less a publishing operation than a giant conversation. But most of us still take in most of what we read fairly passively. Now, there’s no doubt that what has been called the “read-write Web” encourages active engagement with others online, and helps us overcome our passivity. This is one of the decidedly positive things about the Internet, I think: it gets people to understand that they can actively engage with what they read. We understand now more than ever that we can and should read critically. The problem, however, is that, without the services of editors, our critical faculties need to be engaged and very finely tuned. So, while the Internet conversation has instilled in us a tendency to read critically, there is still far more garbage out there than our critical faculties can handle. We do end up absorbing a lot of nonsense passively: we can’t help it.

In short, we are reading reams of content written by amateurs, without the benefit of editors, which means we must, as it were, be our own editors. But many of us, I’m afraid, do not seem to be prepared for the job. In my own long experience interacting with Internet users, I find heaps of skepticism and little respect for what others write, regardless of whether it is edited or not. Now, skepticism is all well and good. But at the same time, I find hardly anything in the way of real critical thinking. The very opinionated people I encounter online rarely demonstrate that they have thought things through as they should, given the strength of their convictions. I have even encountered college professors who cite easy-to-find news articles in the commission of the most elementary of logical fallacies. So it isn’t necessarily just a lack of education that accounts for the problem I’m describing. Having “information at our fingertips,” clearly, sometimes makes us skip the hard thinking that knowledge requires. Even those of us who ought to know better are too often content to be impressed by the sheer quantity and instant availability of information, and to let it substitute for our own difficult thought.

The nature and value of knowledge

Easy information devalues hard knowledge, I say. But so far I have merely been appealing to your understanding of the nature and value of knowledge. Someone might ask me: well, what do you mean by knowledge, anyway, that it is so different from mere information? And why does it matter?

Philosophers since Plato have been saying that knowledge is actually a special kind of belief. It must be true, first of all, and it must also be justified, or have good reasons or evidence to support it. For example, let’s suppose I read something for the first time on some random blog, such as that Heath Ledger died. Suppose I just uncritically believe this. Well, even if it’s true, I don’t know that it is true, because random blogs make up stuff all the time. A blog saying something really isn’t a good enough reason to believe it. But if I then read the news in a few other, more credible sources, then my belief becomes much better justified, and then I can be said to know.

Now, I don’t want to go into a lot of unnecessary details and qualifications, which I could, at this point. So let me get right to my point. I say knowledge is, roughly speaking, justified, true belief. Well then, I want to add that knowledge is difficult not because getting truth is difficult, but because justifying our beliefs is. In other words, it’s really easy to get truth. Google is a veritable oracle of truth. The problem is recognizing truth, and distinguishing it from falsehood. The ocean of information online contains a huge amount of truth. The difficulty comes in knowing when you’ve got it.

Well, that’s what justification is for. We use reasons, or evidence, to determine that, indeed, if we accept a piece of information, we will have knowledge, not error. But producing a good justification for our beliefs is extremely difficult. It requires, as I said before, good sources, critical thinking, sometimes a knowledge of statistics and mathematics, and a careful attention to detail when it comes to understanding texts. This all takes time and energy, and while others can help, it is something that one must do for oneself.

Here you might wonder: if justification, and therefore knowledge, is really so difficult, then why go to all the trouble? Besides, justification is not an all-or-nothing matter. How much evidence is needed before we can be said to know something? After all, if a blogger says that Heath Ledger is dead, that is at least some weak evidence that Heath Ledger is in fact dead. Do I really need stronger evidence? Why?

These are very difficult questions. The best brief answer is, “It depends.” Sometimes, if someone is just telling an entertaining story, it doesn’t matter at all whether it’s true or not. So it doesn’t matter whether you really know the details of the story; if the story entertains, it has done its job. I am sure that celebrity trivia is similar: it doesn’t matter whether the latest gossip in the Weekly World News about Britney Spears is true; it’s just entertaining to read. But there are many other subjects that matter a lot more. Here are two: global warming and immigration reform. Well, I certainly can’t presume to tell you how much evidence you need for your positions on these issues, before you can claim to have knowledge. Being a skeptic, I would actually say that we can’t have knowledge about such complex issues, or at least, not very certain knowledge. But I would say that it is still important to get as much knowledge as possible about these issues. Why? Quite simply because a lot is riding on our getting the correct answers, and the more we study the issues and justify our beliefs, the more likely our beliefs are to be correct.

To passively absorb information from the Internet, without caring about whether we have good reasons for what we believe, is really to roll the dice. Like all gambling, this is pleasant and self-indulgent. But if the luck doesn’t go your way, it can come back to bite you.

Knowledge matters, and as wonderful a tool for knowledge as the Internet can be, it can also devalue knowledge. It does so, I’ve said, by making passive absorption of information seem more pleasant than the hard work of justifying beliefs, and also by presenting us with so much unedited, low-quality information that we cannot absorb it as carefully as we would like. But there is another way that the Internet devalues knowledge: by encouraging anonymity. So here’s a bit about that.

Knowledge and anonymity

We get much of our knowledge from other people. Of course, we pick some things up directly from conversation, or speeches like this one. We also read books, newspapers, and magazines; we watch informational television programs; and we watch films. In short, we get knowledge either directly from other people, or indirectly, through various media.

Now, the Internet is a different sort of knowledge source: very different, importantly different, from both face-to-face conversation and from the traditional media. Let’s talk about that.

The Internet has been called, again, a giant conversation. But it’s a very unusual conversation, if so. For one thing, it’s not a face-to-face conversation. We virtually never have the sort of “video telephone” conversations that the old science fiction stories described. In fact, on many online knowledge websites, we often have no names, pictures, or any information at all about the people we converse or work with online. As the dog in the famous New Yorker cartoon put it, “On the Internet, nobody knows you’re a dog.”

In the three-dimensional online virtual world, Second Life, there is an elaborate system in which you can choose the precise physical characteristics for the person you are online—your “avatar.” Not surprisingly, in Second Life, there are a lot more beautiful and striking-looking people than there are in “First Life”—real life. This practice of make-believe is very self-conscious, and many academic papers have been written about how “identity” is “constructed” online in general.

When I went to make an avatar for myself for Second Life a few years ago, I was pretty uncomfortable representing myself as anything other than what I am. So I actually made an avatar that looks like me. (I didn’t really get it right.) I’ve always been personally uncomfortable representing myself online in any other way than how I really am. But I realize that I am unusual in this regard. Obviously, privacy matters.

Now, think of this. People who care very much about getting their facts right generally consult authoritative sources; they don’t usually get their knowledge from casual conversation with friends and relatives. But at least, when we do get knowledge from a friend or relative, we have some idea of how reliable they are. Maybe you have an eccentric acquaintance, for instance, who is a conspiracy theorist, and he doesn’t spend a lot of time considering the merits of his sources, or the plausibility of their claims. Let’s say you also know that he barely got through high school and basically doesn’t care what the mainstream media or college professors say. Your acquaintance may have many fascinating factoids and interesting stories, but probably, you aren’t going to take what he says very seriously.

But imagine if you were chatting online about politics or UFOs, or other weird stuff, with someone you didn’t know was actually your acquaintance. You might take his bizarre claims somewhat more seriously in that case. I don’t mean that you would simply believe them—of course you wouldn’t—but you would not have any specific reasons to discount them, as you would if you knew you were talking to your acquaintance. Your only positive reason to discount the claims would be: I don’t know this person; this person is anonymous. But you know that there can be brilliant and reliable people who are anonymous online, as well as thoroughly unreliable people.

Well, I think many of us would actually trust an anonymous person more than we would trust our more eccentric acquaintances. Now, don’t get me wrong: I don’t mean to accuse anyone of being a dupe. Of course, we are able to spot really daft stuff no matter who it comes from. But without knowing who a person is, we are operating without a basic bit of information that we are used to having in evaluating what people tell us face-to-face. If we lack any information at all about how reliable a source is, we will not simply conclude that the source is completely unreliable; we will often give the person the benefit of the doubt. And that is sometimes more respect than we would give the person if we knew a few basic facts about him or her.

More generally, there is a common attitude online that it is not supposed to matter, in fact, who you are. We are all perfectly equal in many online communities, except for what we say or do in those communities. Who we are offline is not supposed to matter. But it does matter, when it comes to evaluating what people say about offline topics, like science and politics. The more time we spend in the Internet’s egalitarian communities, the more contempt we might ultimately have for information about a person’s real-world credibility. The very notion of personal credibility, or reliability, is ultimately under attack, I think. On a certain utopian view, no one should be held up as an expert, and no one should be dismissed as a crackpot. All views, from all people, about all subjects, should be considered with equal respect.

Danger, Will Robinson! Personal credibility is a universal notion; it can be found in all societies and throughout recorded history. There is a good reason that it is universal, as well: knowledge of a person’s credibility, or lack thereof, is a great time-saver. If you know that someone knows a lot about a subject, then that person is, in fact, more likely to be correct than some random person. Now, the expert’s opinion cannot take the place of thought on your part; usually, you probably should not simply adopt the expert’s opinion. It is rarely that simple. But that doesn’t mean the information about personal credibility is irrelevant or useless.

Two ideas for a solution

So far, I have mainly been criticizing the Internet, which you might find odd for me to do. After all, I work online.

I don’t think that the Internet is an unmitigated bad influence. I won’t bore you by listing all the great things there are about the Internet, like being able to get detailed information about every episode of Star Trek, without leaving home, at 3 AM. Besides, I have only focused on a small number of problems, and I don’t think they are necessarily Earth-shatteringly huge problems, either. But they are problems, and I think we can do a little bit to help solve them, or at least mitigate them.

First, we can make a role for experts in Internet communities. Of course, the role should be designed so that it does not conflict with what makes the community work. Don’t simply put all the reins of authority in the hands of your experts; doing that would ensure that the project remains a project by and for experts, of relatively little broader impact. But give them the authority to approve content, for example, or to post reviews, or to perform other modest but useful tasks.

My hope is that, when the general public works under the “bottom up” guidance of experts, this will have some good effects. I think the content such a community might produce would be more reliable than the run of the mill on the Internet. I would also hope that the content itself would be more conducive to seeking knowledge instead of mere information, simply by modelling good reasoning and research.

I do worry, though, that if expert-reviewed information online were to become the norm, then people might be more likely to turn off their critical faculties.

Second, we can create new communities in which real names and identities are expected, and we can reward people in existing communities for using their real names and identities. This is something that Amazon.com has done, for example, with its “real name” feature on product reviews. If contributors are identified, we could use the same sorts of methods to evaluate what they say online that we would use if we ran into them on the street.

I began by laying out a general problem: the superabundance of information online is devaluing knowledge. I don’t know if we can really solve this problem, but the two suggestions I just made might go a little way toward mitigating it. If we include a modest role for experts in more of our Internet communities, we’ll have better information to begin with, and better role models. Moreover, if we identify the sources of our information, we will be in a better position to evaluate it.


The New Politics of Knowledge

Speech delivered at the Jefferson Society, University of Virginia, Charlottesville, Virginia, November 9, 2007, and at the Institute of European Affairs, Dublin, Ireland, September 28, 2007, as the inaugural talk for the IEA's "Our Digital Futures" program.

I want to begin by asking a question that might strike you as perhaps a little absurd. The question is, "Why haven't governments tried to regulate online communities more?" To be sure, there have been instances where governments have stepped in. For instance, in January of last year in Germany, the father of a deceased computer hacker used the German court system to try to have an article about his son removed from the German Wikipedia. As a result, wikipedia.de actually went offline for a brief period. It's come back online, of course, and in fact the article in question is still up.

Here's another example. In May of last year, attorneys general from eight U.S. states demanded that MySpace turn over the names of registered sex offenders lurking on the website, which, as you probably know, is heavily frequented by teenagers. The website deleted the pages of some 7,000 registered sex offenders. And the following July, MySpace said that in fact some 29,000 registered sex offenders had accounts, which were subsequently deleted.

Those are just a few examples. But we can make some generalizations. The Internet is famously full of outrageously false, defamatory, and offensive information, and is said to be a haven for criminal activity. This leads back to the question I asked earlier: why haven't governments tried to regulate online communities even more than they have?

We might well find this question a little absurd, especially if we champion the liberal ideals that form the foundation of Western civil society. Indeed, no doubt one reason is our widespread commitment to freedom of speech. But consider another possible reason—one that, I think, is very interesting.

Governments, and everyone else, implicitly recognize that social groups, however new and different, have their own interests and are usually capable of regulating themselves. It is a truly striking thing that people come together from across the globe and, out of their freely donated labor and strings of electrons, form a powerful new corporate body. When they do so—as I have repeatedly observed—they develop a sense of themselves as a group, in which they invest some time and can take some pride, and which they govern by rules.

In fact, these groups are a new kind of political entity, the birth of which our generation has been privileged to witness. Such groups are not like supra-national organizations, like the United Nations; nor are they like international aid organizations, like Doctors Without Borders; nor are they quite like international scientific groups, like the Intergovernmental Panel on Climate Change. These communities exist, and carry on their primary activity, entirely online. Their membership is self-selecting, international, and connected online in real time. This makes it possible for enormous numbers and varieties of groups to arise, of arbitrary size and arbitrary nationality, to achieve arbitrary purposes. They essentially make up a new kind of political community, a cyber-polity if you will, and so there is a presumption that they can regulate themselves. Government steps in, as in the case of MySpace, only when they cannot regulate themselves responsibly.

The idea that online communities are a kind of polity is, I think, very suggestive and fruitful. I want to talk in particular about how online communities, considered as polities, are engaged in a certain new kind of politics—a politics of knowledge. Let me explain what I mean by this.

Speaking of a "politics of knowledge," I assume that what passes for knowledge, or what we in some sense take ourselves to know as a society, is determined by those who have authority or power of a sort. You don't of course have to like this situation, and you might disagree with the authorities, or scoff at their authority in some cases. Nevertheless, when for example professors at the University of Virginia say that something is well known and not seriously doubted by anyone who knows about the subject, those professors are in effect establishing what "we all know," or what we as a society take ourselves to know. Since those professors, and many others, speak from a position of authority about knowledge—a powerful force in society—surely it makes some sense to speak of a politics of knowledge. I just hope you won't understand me to be saying that what really is known, in fact, is determined by whoever happens to be in authority. I'm no relativist, and I think the authorities can be, and frequently are, wrong.

If we talk about a politics of knowledge, and we take the analogy with politics seriously, then we assume that there is a sort of hierarchy of authority, with authority in matters of knowledge emanating from some agency that is "sovereign." In short, if we put stock in the notion of the politics of knowledge, then we're saying that, when it comes to knowing stuff, some people are at the top of the heap.

Our new online communities—our cyber-polities—are increasingly influential forces when it comes to the politics of knowledge. When Wikipedia speaks, like it or not, people listen. So in this talk I want to discuss in particular something I call the new politics of knowledge. Any talk of a new politics of knowledge raises questions about what agency is sovereign. Well, it is often said that in the brave new world of online communities, everyone is in charge. Time Magazine's "Person of the Year" is, by tradition, usually some influential political figure. When last year's "Person of the Year" was "You," Time didn't break with that tradition; it was rightly claiming that, through Internet communities, we are all newly empowered. In the new politics of knowledge, we can all, through blogs, wikis, and many other venues, compete with real experts for epistemic authority—for power over what is considered to be known.

If this sounds like a political revolution, that's because it is. It is frequently described as a democratic revolution. So what I'm going to do in the rest of this talk is examine exactly the sense in which the new cyber-polities, like Wikipedia, do indeed represent a sort of democratic revolution. This discussion will have the interesting result that we should be more concerned than we might already be about the internal governance of Internet communities—because that internal governance has real-world effects. And I will conclude by making some recommendations for how cyber-polities should be internally governed.

As a philosopher, I find myself impelled to ask: what exactly is democratic about the so-called Internet revolution?

Democracy in one very basic sense means that sovereignty rests ultimately with the people, that is, with all of us. Bearing that in mind, the new Internet revolution might be democratic, I think, both in a narrow sense and in a broad sense. The narrow sense concerns project governance: the new content production systems are themselves governed ultimately by the participants, and for that reason can be called democratic. In the broad sense, the Internet revolution gives everyone "a voice" which formerly many did not have, a stake in determining "what is known" not just for a narrow website or Internet practice, but for society as a whole. To draw the distinction by analogy, we might say that each online community has a domestic policy, about its own internal affairs, and a foreign policy, through which it manages its influence on the world at large.

Now, I'd like to point something out that you might not immediately notice. It is that the broad sense depends in a certain way on the narrow sense. The contributors are ultimately sovereign in various Internet projects, and that is precisely why they are able to have their newfound broader influence over society. Let's take Digg.com as an example. This is a website that allows people to post any link, and then others vote, a simple up or down, on whether they "digg" the link. It's one person, one vote. Of course, no one checks anybody's credentials on Digg. The highest-voted links are placed most prominently on the website. So the importance of a Web article, and presumably whatever the article has to say, is determined democratically, at least as far as the Digg community goes. But Digg's influence goes beyond its own community. A relatively obscure story can become important by being highly rated on Digg. In this way, all those people voting on Digg—and these can be as expert as you hope, or as uneducated, ignorant, biased, immature, and foolish as you fear—can wield the power to highlight different news stories, a power hitherto usually reserved for professional journalists.
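
To make the mechanism concrete, here is a minimal sketch in Python of the kind of one-person-one-vote ranking just described. The names and structure are my own illustration, not Digg's actual code:

    from collections import defaultdict

    class VoteBoard:
        """One-person-one-vote link ranking, Digg-style."""

        def __init__(self):
            # url -> {username: +1 or -1}
            self.votes = defaultdict(dict)

        def submit(self, user, url):
            # Anyone may post a link; no credentials are checked.
            self.votes[url].setdefault(user, 1)  # submitting counts as an up-vote

        def vote(self, user, url, up=True):
            # One vote per person per link; a later vote replaces an earlier one.
            self.votes[url][user] = 1 if up else -1

        def front_page(self, n=10):
            # The highest-voted links are displayed most prominently.
            totals = {url: sum(v.values()) for url, v in self.votes.items()}
            return sorted(totals, key=totals.get, reverse=True)[:n]

The point of the sketch is how little stands between a vote and prominence: whatever the crowd totals up is, as far as readers are concerned, what matters today.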

Similarly, Wikipedia articles are now well known for being the #1 Google search result for many popular searches. Any website with that much reach is, like it or not, very influential. That is, in effect, practical epistemic authority. That is real authority, given to anyone who has the time and patience to work on Wikipedia and do the hand-to-hand battle necessary to get their edits to "stick" in Wikipedia articles. That power, to define what is known about a general topic, was formerly reserved for the professional intellectuals who wrote and edited encyclopedias, and more broadly for experts generally. And again, of course, no one checks anybody's credentials before they get on Wikipedia. So amateurs are to some extent displacing experts, in the new politics of knowledge.

So that's why we call the Internet revolution democratic. But this needs some qualification. There is one fundamental reason we describe such websites as Digg, Wikipedia, MySpace, YouTube, and all the rest as "democratic," and that is that anyone can, virtually without restriction, go to the website and get involved. This, however, is only to say that they have a certain benchmark level of "user empowerment," which we might call the "right to contribute." But frequently, a large variety of governance structures are superimposed upon this basic "right to contribute." While the content is generally determined by largely self-governing contributors, some policies and decisions are left in the hands of the website owners, as at Slashdot and YouTube, who are officially answerable to no one else within the project. Granted, if these privileged persons anger their contributors, the contributors can vote with their feet—and this has happened on numerous occasions. And in some cases, such as Wikipedia, the community is almost completely self-governing. Still, we probably should qualify claims about the democratic nature of cyber-polities: just because there is a basic right to contribute, it does not follow that there will also be an equal right to determine the project's internal governance.

So, as I said before, the Internet revolution is democratic in the broad sense because it is democratic, however qualifiedly, in the narrow sense. In other words, internal Web project governance bears directly on real-world political influence. But how closely connected are Web community politics and real-world influence?

Consider Wikipedia again—and I think this is particularly interesting. If you've followed the news about Wikipedia at all in the last few years, you might have noticed that when Wikipedia makes larger changes to its policy, it is no longer of interest just to its contributors. It is of interest to the rest of the world, too. It gets reported on. Two recent news items illustrate this very well.

First item. A few months ago, a student posted a website, called the WikiScanner, that allows people to look up government agencies and corporations to see just who has been editing which Wikipedia articles. This was fairly big news—all around the world. I was asked to comment on the story by reporters in Canada and Australia. Journalists think it's absolutely fascinating that someone from a politician's office made a certain edit to an article about that politician, or that a corporation's computers were used to remove criticisms of the corporation. At the same time, reporters and others observe that Wikipedia's anonymity has allowed people to engage in such PR fiddling with impunity. And that is the interesting internal policy point: anyone can contribute to Wikipedia without identifying him- or herself. You can even mask your IP address, which those political aides and corporation employees should have done; all they had to do was make up some random username, which one can still do without giving Wikipedia an e-mail address, and then the WikiScanner couldn't track the IP address. Nobody who was signed in was caught by the WikiScanner. Anyway, it was an internal policy that has had some very interesting external ramifications.
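
The principle behind such a tool is easy to sketch. Wikipedia's edit history logs anonymous edits by IP address, and many organizations' address ranges are public; cross-referencing the two takes only a few lines. This is an illustration with made-up data, not the actual WikiScanner code:

    import ipaddress

    # Published address ranges for organizations (illustrative values only).
    ORG_RANGES = {
        "Example Corp":   ipaddress.ip_network("192.0.2.0/24"),
        "Example Agency": ipaddress.ip_network("198.51.100.0/24"),
    }

    # Anonymous edits from the public history: (IP address, article edited).
    EDITS = [
        ("192.0.2.17",  "Example Corp"),
        ("203.0.113.5", "Weather"),
    ]

    def attribute_edits(edits, org_ranges):
        """Report which organization's machines made which anonymous edits."""
        for ip, article in edits:
            addr = ipaddress.ip_address(ip)
            for org, network in org_ranges.items():
                if addr in network:
                    print(f"A {org} machine edited the article '{article}'")

    attribute_edits(EDITS, ORG_RANGES)

Note what the sketch also shows: the moment an edit comes from a registered account instead of a bare IP address, there is nothing for the cross-reference to match, which is exactly why nobody who was signed in was caught.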

Second item. It was reported recently by the London Times that the German Wikipedia would be changing its editing system. In the future, all edits by unregistered and newer contributors will have to be approved by the older contributors before they can appear on the website. In fact, this was old news—the system described has been under development for well over a year, and it still hasn't been put into use. Nevertheless, it has been touted as a very big concession on the part of Wikipedia. It's said now that Wikipedia has a role for "trusted editors" on the website, but this is incorrect; it has a role only for people who have been in the system for a while, and these can be very untrustworthy indeed. However unlikely this is to have any significant effect, it was still touted as important news. And again, what was touted as big news was a change in internal policy, the policy about how the wiki can be edited by newer and anonymous contributors. This is supposed to be important, because it might help make Wikipedia a more responsible global citizen.
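
In outline, the editing system described amounts to a review queue gated by account age. Here is a minimal sketch, with a threshold and names of my own invention rather than MediaWiki's actual implementation:

    from datetime import datetime, timedelta

    TRUSTED_AGE = timedelta(days=60)  # illustrative threshold, not Wikipedia's

    def is_established(registered_on):
        # "Older" contributors are simply those registered long enough ago;
        # unregistered contributors (registered_on is None) never qualify.
        return registered_on is not None and datetime.now() - registered_on >= TRUSTED_AGE

    class Article:
        def __init__(self, text=""):
            self.live_text = text   # the version readers see
            self.pending = []       # edits awaiting approval

        def edit(self, registered_on, new_text):
            if is_established(registered_on):
                self.live_text = new_text      # publishes immediately
            else:
                self.pending.append(new_text)  # held for review

        def approve_next(self, approver_registered_on):
            # Only an established contributor may publish a pending edit.
            if self.pending and is_established(approver_registered_on):
                self.live_text = self.pending.pop(0)

Notice that the gate is mere time in the system, not demonstrated trustworthiness; the sketch makes plain why calling such contributors "trusted editors" overstates the case.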

In general, it is becoming increasingly clear that the "domestic policy," so to speak, of cyber-polities is closely connected with their real-world impact. Wikipedia isn't the only example I might give. Here's another—although in this case, the effect is economic, not epistemic. There is an amazingly huge website, called craigslist, which lists, they say, over 12 million new classified ads every month. This website has proven to be a real thorn in the side of local newspapers, which depend on revenue from ads. Increasingly, people are posting their classified ads in craigslist instead of in their local newspapers. This is the effect of a policy, an internal policy, that anyone can post an ad for free, except for employment ads in certain markets. What might have originally seemed to be an optional feature of a small Web community has turned out, in fact, to cost jobs at newspapers.

But let's get back to the politics of knowledge. In the intellectual sphere, I think the full power of collaboration and aggregation has yet to be demonstrated. Try to imagine Wikipedia done right—not just enormous, but credible and well-written. If this sounds impossible to believe, consider that just a few years ago, Wikipedia itself, a reasonably useful general encyclopedia with over two million articles in English, would have sounded equally impossible to believe. I can tell you that, when Wikipedia was first starting out, there were many people who sneered that we didn't have a chance.

Let me describe briefly my new project, which is relevant here. It is called the Citizendium, or the Citizens' Compendium. It is a non-profit, free wiki encyclopedia that invites contributions from the general public—and to that extent it's like Wikipedia. There are three very important differences, however. First, we require the use of real names and do not allow anonymous contribution; we also require contributors to submit at least a brief biography. So we all know who we're actually working with. Second, we distinguish between rank-and-file authors, who do not need any special qualifications, and editors, who must demonstrate expertise in a field; our editors may approve articles, and they may make decisions about content in their areas of expertise. Still, they work side-by-side with authors on the wiki. Nobody assigns anybody any work; it's still very much a bottom-up process. Third, we are a rather more mature community. All contributors must sign onto a sort of social contract, which states the rules of the community; we expect people to behave professionally; and we have people called "constables" who are actually willing to enforce our rules by kicking out troublemakers.
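
For concreteness, the author/editor distinction might be modeled as follows. The class and field names are mine for illustration; this is not the Citizendium's software:

    class Contributor:
        """A Citizendium-style participant: real name and biography required."""

        def __init__(self, real_name, biography, expertise=()):
            if not (real_name and biography):
                raise ValueError("anonymous contribution is not allowed")
            self.real_name = real_name
            self.biography = biography
            # Empty for rank-and-file authors; editors list the fields
            # in which they have demonstrated expertise.
            self.expertise = set(expertise)

        @property
        def is_editor(self):
            return bool(self.expertise)

        def can_approve(self, article_field):
            # Editors may approve articles, and make content decisions,
            # only in their own areas of expertise.
            return article_field in self.expertise

So a chemist-editor could approve an article filed under chemistry but would work as an ordinary author everywhere else; authorship itself requires nothing beyond the identity check in the constructor.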

So how is the project going? We started a pilot project just over a year ago, and in that time we have created 3,500 articles, with over 2,000 authors and well over 200 expert editors on board. We also have more words than Wikipedia did after its first year; our average article is six times as long as Wikipedia's was at the same point. Our pace of article production has accelerated: it has doubled in the past 100 days or so and tripled since last January. We are pretty much free of vandalism, and I think our articles are of pretty high quality for such a wide-open project. The project is doing rather well, and I think that we are probably, with continued development, poised to replicate Wikipedia's sort of growth. We too could have a million articles in under ten years.

Well, imagine that the Citizendium had a million articles, together with loads of ancillary reference material such as images, tables, tutorials, and so forth—all free, credible, and managed by experts. The sort of influence that such a website would wield would, I think, far outweigh Wikipedia's. The one thing that really holds Wikipedia back, from the end user's perspective, is its reliability. So suppose there were a similar website that solved that problem.

If you ask me, this is something of a frightening prospect. After all, far too many students and even members of the general public already treat Wikipedia as if it were reliable. Already, for far too many students, Wikipedia is their only source of reference information. If humanity were to produce a similarly giant encyclopedia that were really reliable, you can just imagine how it would probably be received by the general public. It would become, essentially, the world's textbook and omnipresent reference library. There would be a general presumption that what it says is correct, and anyone who asserted something in contradiction to it would have to explain themselves in as much detail as if they contradicted the Encyclopedia Britannica today. Sure, a good encyclopedia can be wrong; but it usually isn't. Unlike Wikipedia, it's innocent until proven guilty.

This is frightening, I say, precisely because of how powerful such a resource would be. Imagine the article about, for example, the Iraq War, after it had been written and rewritten, and checked and rechecked, by hundreds of real experts. It would no doubt be a thing of beauty, as I think the Citizendium's best articles are. But it would also be taken as the starting-point for serious conversation. What claims it makes could have real-world political ramifications, as much as, if not more than, any U.N. report. So you can easily imagine the attention given to major changes of policy, or to internal rulings on controversial cases in the project. Again: the internal policymaking for a truly successful collaborative reference project would have major external consequences.

We don't want governments to take over or closely regulate collaborative projects, but if they continue to act as irresponsibly as Wikipedia has, I fear that they might attempt to do so. That is, for me, a disturbing scenario, because in a civilized, modern, liberal society—one that deeply values the freedom of speech—the authority to say what we know is one power that should not be in the hands of the government. Every government regulation of online collaborative communities is a direct threat to the sovereignty of that community, and an implicit threat to the free speech of its members.

It is, therefore, extremely important that online projects, ones with any influence, be well-governed. We want to remove every excuse governments might have for exerting their own political authority. At this point I might argue that Wikipedia's governance has failed in various ways, but the root problem is that Wikipedia is absolutely committed to anonymous contribution; this ultimately makes it impossible to enforce many rules effectively. However much oppressive bureaucracy Wikipedia layers on, it will always be possible for people to sidestep rules, simply by creating a new identity. The unreliability of Wikipedia's enforcement of its own rules, in turn, provides a deep explanation of the unreliability of its information. The pretentious mediocrities and ideologues, as well as the powerful vested interests—generally, anyone with a strong motive to make Wikipedia articles read their way—can always create new accounts if they are ousted. Wikipedia's content will remain unreliable, and it will continue to have various public scandals, because its governance is unreliable. And this, I'm afraid, opens Wikipedia up to the threat of government regulation. I wouldn't wish that on them, of course, and I don't mean to give anyone ideas.

After all, if the Citizendium's more sensible system succeeds, it will have the power to do far more damage than Wikipedia can. To get an idea of the damage Wikipedia can do, consider another example. In late 2005, John Seigenthaler, Sr., long-time editor of the American newspaper The Tennessean, was accused in a Wikipedia article of being complicit in the assassination of John F. Kennedy. Well, it was rather easy for him to protect his reputation by pointing out publicly how unreliable Wikipedia is. He simply shamed Wikipedia, and he came off looking quite good.

But imagine that Seigenthaler were accused by some better, more reliable source. Then he couldn't have gotten relief in this way; he no doubt would have had to sue. I hate the thought, but I have to concede that it is barely possible that the Citizendium could be sued for defamation. After all, the effect of defamation by a more credible source would be much more serious. Then the government might be called in, and this worries me.

As I said, my horror scenario is that the Citizendium grows up to be as influential as its potential implies, only to be overregulated by zealous governments with a weak notion of free speech. As I said at the beginning of this talk, I think cyber-polities can generally regulate themselves. But communities with poor internal governance may well incur some necessary correction by governments, if they violate copyright on a massive scale or if they permit, irresponsibly, a pattern of libel. Why should this be disturbing to me? Government intervention is perhaps all right when we are talking about child molesters on MySpace; but when we are talking about projects to sum up what is known, that is when more serious issues of free speech enter in.

You can think of government intervention in something like Wikipedia or the Citizendium as akin to government intervention in the content of academic lectures and the governance of universities. When this happens, what should be an unimpeded search for the truth risks becoming politicized and politically controlled.

But you can imagine, perhaps, a series of enormous scandals on Wikipedia that has government leaders calling for the project to be taken over by the Department of Education, or by some private entity that is nevertheless implicitly answerable to the government. Wikipedia is far from being in such a position now, but it is conceivable. The argument would go as follows:

Wikipedia is not like a university or a private club. It is open to everyone, and its content is visible around the globe, via the Internet. Therefore, it is a special kind of public trust. It is not unlike a public utility. Moreover, it has demonstrated its utter incapacity to manage itself responsibly, and this is of genuine public concern. The government is obligated, therefore, to place the management of Wikipedia in the care of the government.

End of argument. Nationalization might seem hard to conceive, but it has happened quite a bit in the last century. Why couldn't it happen to something that is already a free, public trust?

As both an academic (or a former academic, anyway) and an online project organizer, I find the thought of this scenario deeply troubling, and in fact I must admit that I have given it no small amount of thought in the last few years. Fear of government intrusions on what should be a fully independent enterprise is one reason that I have spent so much time in the last year working on a sensible governance framework for the Citizendium. In short, the best protection against undue government interference in open content projects is good internal governance. So let me describe the Citizendium's current governance and its future plans.

The Citizendium works now under an explicit Statement of Fundamental Policies, which calls for the adoption of a Charter, not unlike a constitution, within the next few months. The Charter will no doubt solidify the governance system we are developing right now. This system involves an Editorial Council which is responsible for content policy; a Constabulary which gets new people on board and encourages good behavior; and a Judicial Board which will handle conflict resolution and appeals. While editors will make up the bulk of our Editorial Council, both authors and editors may participate in each of these bodies. Each of these bodies will have mutually exclusive membership, to help ensure a separation of powers, and there will be some other checks and balances. In addition, I as Editor-in-Chief am head of an Executive Committee. But to set a positive precedent, before even launching the Citizendium I have committed to stepping down within two to three years, so that we have an appropriate and regular succession of leadership.

Another perhaps interesting point concerns the Editorial Council. It has actually adopted a digitized version of Robert's Rules of Order, and we have passed five resolutions using e‑mail and the wiki exclusively. Recall that contributors must agree to uphold this system, as a condition of their participation. They must also be identified by their real-world identity if they wish to participate—although we will make exceptions in truly extraordinary cases.

I think you can recognize what we are trying to build: a traditional constitutional republic, but moved online. Only time will tell, but my hope is that this nascent governance structure will help us to avoid some of the problems that have beset not just Wikipedia, but a wide variety of Web communities.

I have covered a pretty wide variety of topics in my talk. I hope you have been able to follow the thread, at least a little; I doubt I have spent all the time I would need to make everything perfectly clear. But let me sum up my main argument anyway. Online communities, I say, are political entities. As such, they can govern their own "domestic" affairs, as well as have various "foreign" or external effects. And so they can be democratic insofar as their members have authority internally or externally. I've discussed mainly one kind of authority, namely epistemic authority, or the authority over what society takes to be knowledge.

Then I pointed out that the external authority a project has depends on its internal governance—and so, the more externally influential a project is, the more important it is that we get the internal governance right. I pointed to Wikipedia as an example of a cyber-polity that is not particularly well-governed. I worried a fair bit about the fallout, in terms of government regulation, that this might incur. In part to help avoid such fallout, I briefly sketched the governance system that the Citizendium uses: a traditional constitutional, representative republic, mapped online.