How the Internet Is Changing What We (Think We) Know

A speech for "the locals"--Upper Arlington Public Library, January 23, 2008.  This is a more general discussion; the Citizendium is not mentioned once.

Information is easy, knowledge is difficult

There is a mind-boggling amount of information online. And this is a wonderful thing. I’m serious about that. A good search engine is like an oracle: you can ask it any question you like and be sure to get an answer. The answer might be exactly what you’re looking for, or it might be, well, oracular—difficult to interpret and possibly incorrect. I draw the usual distinction between knowledge and information. You can find information online very easily. Knowledge is another matter altogether.

Now, this is not something new about the Internet. It’s a basic feature of human life that while information is easy, knowledge is difficult. There has never been a shortage of mere data and opinion in human life. It’s a very old observation that the most ignorant people are usually full of opinions, while many of the most knowledgeable people are full of doubt. Other people are certainly sources of knowledge, but they are also sources of half-truths, confusion, misinformation, and lies. If we simply want information from others, it is easy to get; if we want knowledge in any strong sense of the word, it is very difficult. Besides that, long before the Internet, there was far more to read, far more television shows and movies to watch, than anyone could ever absorb in many lifetimes. Before the Internet, we were already awash in information. Wading through all that information in search of some hard knowledge was very difficult indeed.

Too Much Information

The Internet is making this old and difficult problem even worse. If we had an abundance of information in, say, the 1970s, the Internet has created a superabundance of information today. Out of curiosity, I looked up some numbers. According to one estimate, there are now over 1.2 billion people online; Netcraft estimated that there are over 100 million websites, and about half of those are active. And those estimates come from over a year ago.

With that many people, and that many active websites, clearly there is, as I say, a superabundance of information. Nielsen ratings of Internet search showed that some six billion searches were performed in December 2007 alone—that’s about 72 billion in a year! Google, by the way, was responsible for two thirds of those searches. Now, you might have heard these numbers before; I don’t mean to be telling you news. But I want to worry out loud about a consequence of this situation.

My worry is that the superabundance of information is devaluing knowledge. The more that information piles up on Internet servers around the world, and the easier it is for that information to be found, the less distinctive and attractive that knowledge will appear by comparison. I fear that the Internet has already greatly weakened our sense of what is distinctive about knowledge, and why it is worth seeking. I know this might seem rather abstract, and not something worth getting worked up about. Why, really, should you care?

It used to be that in order to learn some specific fact, like the population of France, you had to crack open a big thick paper encyclopedia or other reference book. One of the great things about the Internet is that that sort of searching—for very specific, commonly-sought-after facts—has become dead simple. Even more, there are many facts one can now find online that, in the past, would have taken a trip to the local library to find. The point is that the superabundance of information has actually made it remarkably easy to get information. Today, it’s easy not just to get some information about something or other, it’s easy to get boatloads of information about very specific questions and topics we’re interested in.

For all that, knowledge is, I’m afraid, not getting much easier. To be quite sure of an answer still requires comparing multiple sources, critical thinking, sometimes a knowledge of statistics and mathematics, and a careful attention to detail when it comes to understanding texts. In short, knowledge still requires hard thought. Sure, technology is a great time-saver in various ways; it has certainly made research easier, and it will become only more so. But the actual mental work that results in knowledge of a topic cannot be made much easier, simply because no one else can do your thinking for you. So while information becomes nearly instantaneous and dead simple, knowledge is looking like a doddering old uncle.

What do I mean by that? Well, you can find tons of opinions online, ready-made, but there is an interesting feature of a lot of the information and opinion you find online: not only is it easy to find, it is easy to digest. Just think of the different types of pages that a typical Web search turns up: news articles, which summarize events for the average person; blogs, which are usually very brief; Web forums, which only rarely go into depth; and encyclopedia articles and other mere summaries of topics. Of course, there are also very good websites, as well as the “Deep Web,” which contains things like books and journal articles and white papers; but most people do not use those other resources. The point is that most of the stuff that you typically find on the Internet is pretty lightweight. It’s Info Lite.

“Right,” you say, “what’s wrong with that? Great taste, less filling!” Sure, I like easy, entertaining information as much as the next guy. But what’s wrong with it is that it makes the hard work of knowledge much less appealing by comparison. For example, if you are coming to grips with what we should do about global warming, or illegal immigration, or some other very complex issue, you must escape the allure of all the dramatic and entertaining news articles and blog posts on these subjects. Instead, you must be motivated to wade through a lot of far drier material. The sources that are more likely to help you in your quest for knowledge look very boring by comparison. My point here is that the superabundance of information devalues knowledge, because the means of solid knowledge are decidedly more difficult and less sexy than the Info Lite that it is so easy to find online.

There is another way that the superabundance of information makes knowledge more difficult. It is that, for all the terabytes upon terabytes of information on the Internet, society employs hardly any more editors (and possibly fewer) than it did before the advent of the Internet. When you go to post something on a blog or a Web forum, there isn’t someone called an editor who decides to “publish” your comment. The Internet is less a publishing operation than a giant conversation. But most of us still take in most of what we read fairly passively. Now, there’s no doubt that what has been called the “read-write Web” encourages active engagement with others online, and helps us overcome our passivity. This is one of the decidedly positive things about the Internet, I think: it gets people to understand that they can actively engage with what they read. We understand now more than ever that we can and should read critically. The problem, however, is that, without the services of editors, our critical faculties need to be engaged and very finely tuned. So, while the Internet conversation has instilled in us a tendency to read critically, there is still far more garbage out there than our critical faculties can handle. We end up absorbing a lot of nonsense passively: we can’t help it.

In short, we are reading reams of content written by amateurs, without the benefit of editors, which means we must, as it were, be our own editors. But many of us, I’m afraid, do not seem to be prepared for the job. In my own long experience interacting with Internet users, I find heaps of skepticism and little respect for what others write, regardless of whether it is edited or not. Now, skepticism is all well and good. But at the same time, I find hardly anything in the way of real critical thinking. The very opinionated people I encounter online rarely demonstrate that they have thought things through as they should have, given the strength of their convictions. I have even encountered college professors who cite easy-to-find news articles while committing the most elementary of logical fallacies. So it isn’t necessarily just a lack of education that accounts for the problem I’m describing. Having “information at our fingertips,” clearly, sometimes makes us skip the hard thinking that knowledge requires. Even those of us who ought to know better are too often content to be impressed by the sheer quantity and instant availability of information, and to let it substitute for our own difficult thought.

The nature and value of knowledge

Easy information devalues hard knowledge, I say. But so far I have merely been appealing to your understanding of the nature and value of knowledge. Someone might ask me: well, what do you mean by knowledge, anyway, that it is so different from mere information? And why does it matter?

Philosophers since Plato have been saying that knowledge is actually a special kind of belief. It must be true, first of all, and it must also be justified, or have good reasons or evidence to support it. For example, let’s suppose I read something for the first time on some random blog, such as that Heath Ledger died. Suppose I just uncritically believe this. Well, even if it’s true, I don’t know that it is true, because random blogs make up stuff all the time. A blog saying something really isn’t a good enough reason to believe it. But if I then read the news in a few other, more credible sources, then my belief becomes much better justified, and then I can be said to know.

Now, I don’t want to go into a lot of unnecessary details and qualifications, which I could, at this point. So let me get right to my point. I say knowledge is, roughly speaking, justified, true belief. Well then, I want to add that knowledge is difficult not because getting truth is difficult, but because justifying our beliefs is. In other words, it’s really easy to get truth. Google is a veritable oracle of truth. The problem is recognizing truth, and distinguishing it from falsehood. The ocean of information online contains a huge amount of truth. The difficulty comes in knowing when you’ve got it.

Well, that’s what justification is for. We use reasons, or evidence, to determine that, indeed, if we accept a piece of information, we will have knowledge, not error. But producing a good justification for our beliefs is extremely difficult. It requires, as I said before, good sources, critical thinking, sometimes a knowledge of statistics and mathematics, and a careful attention to detail when it comes to understanding texts. This all takes time and energy, and while others can help, it is something that one must do for oneself.

Here you might wonder: if justification, and therefore knowledge, is really so difficult, then why go to all the trouble? Besides, justification is not an all-or-nothing matter. How much evidence is needed before we can be said to know something? After all, if a blogger says that Heath Ledger is dead, that is at least some weak evidence that Heath Ledger is in fact dead. Do I really need stronger evidence? Why?

These are very difficult questions. The best brief answer is, “It depends.” Sometimes, if someone is just telling an entertaining story, it doesn’t matter at all whether it’s true or not. So it doesn’t matter that you know the details of the story; if the story entertains, it has done its job. I am sure that celebrity trivia is similar: it doesn’t matter whether the latest gossip in the Weekly World News about Britney Spears is true, it’s just entertaining to read. But there are many other subjects that matter a lot more. Here are two: global warming and immigration reform. Well, I certainly can’t presume to tell you how much evidence you need for your positions on these issues, before you can claim to have knowledge. Being a skeptic, I would actually say that we can’t have knowledge about such complex issues, or at least, not very certain knowledge. But I would say that it is still important to get as much knowledge as possible about these issues. Why? Quite simply because a lot is riding on our getting the correct answers, and the more that we study issues, and justify our beliefs, the more likely our beliefs are to be correct.

To passively absorb information from the Internet, without caring about whether we have good reasons for what we believe, is really to roll the dice. Like all gambling, this is pleasant and self-indulgent. But if the luck doesn’t go your way, it can come back to bite you.

Knowledge matters, and as wonderful a tool for knowledge as the Internet can be, it can also devalue knowledge. It does so, I’ve said, by making passive absorption of information seem more pleasant than the hard work of justifying beliefs, and also by presenting us with so much unedited, low-quality information that we cannot absorb it as carefully as we would like. But there is another way that the Internet devalues knowledge: by encouraging anonymity. So here’s a bit about that.

Knowledge and anonymity

We get much of our knowledge from other people. Of course, we pick some things up directly from conversation, or speeches like this one. We also read books, newspapers, and magazines; we watch informational television programs; and we watch films. In short, we get knowledge either directly from other people, or indirectly, through various media.

Now, the Internet is a different sort of knowledge source. The Internet is very different, importantly different, from both face-to-face conversation and from the traditional media. Let’s talk about that.

The Internet has been called, again, a giant conversation. But it’s a very unusual conversation, if so. For one thing, it’s not a face-to-face conversation. We virtually never have the sort of “video telephone” conversations that the old science fiction stories described. In fact, on many online knowledge websites, we often have no names, pictures, or any information at all, about the people that we converse or work with online. Like the dog in the famous New Yorker cartoon said, “On the Internet, nobody knows you’re a dog.”

In the three-dimensional online virtual world, Second Life, there is an elaborate system in which you can choose the precise physical characteristics for the person you are online—your “avatar.” Not surprisingly, in Second Life, there are a lot more beautiful and striking-looking people than there are in “First Life”—real life. This practice of make-believe is very self-conscious, and many academic papers have been written about how “identity” is “constructed” online in general.

When I went to make an avatar for myself for Second Life a few years ago, I was pretty uncomfortable representing myself as anything other than what I am. So I actually made an avatar that looks like me. (I didn’t really get it right.) I’ve always been personally uncomfortable representing myself online in any other way than how I really am. But I realize that I am unusual in this regard. Obviously, privacy matters.

Now, think of this. People who care very much about getting their facts right generally consult authoritative sources; they don’t usually get their knowledge from casual conversation with friends and relatives. But at least, when we do get knowledge from a friend or relative, we have some idea of how reliable they are. Maybe you have an eccentric acquaintance, for instance, who is a conspiracy theorist, and he doesn’t spend a lot of time considering the merits of his sources, or the plausibility of their claims. Let’s say you also know that he barely got through high school and basically doesn’t care what the mainstream media or college professors say. Your acquaintance may have many fascinating factoids and interesting stories, but probably, you aren’t going to take what he says very seriously.

But imagine if you were chatting online about politics or UFOs, or other weird stuff, with someone you didn’t know was actually your acquaintance. You might actually take his bizarre claims somewhat more seriously in that case. I don’t mean that you would simply believe them—of course you wouldn’t—but you would not have any specific reasons to discount them, as you would if you knew you were talking to your acquaintance. Your only positive reason to discount the claims would be: I don’t know this person, this person is anonymous. But you know that there can be brilliant and reliable people online anonymously, as well as thoroughly unreliable ones.

Well, I think many of us would actually trust an anonymous person more than we would trust our more eccentric acquaintances. Now don’t get me wrong, I don’t mean to accuse anyone of being a dupe. Of course, we are able to spot really daft stuff no matter who it comes from. But without knowing who a person is, we are operating without a basic bit of information that we are used to having, in evaluating what people tell us face-to-face. If we lack any information at all about how reliable a source is, we will not simply conclude that the source is completely unreliable; we will often give the person the benefit of the doubt. And that is sometimes more respect than we would give the person if we knew a few basic facts about him or her.

More generally, there is a common attitude online that it is not supposed to matter, in fact, who you are. We are all perfectly equal in many online communities, except for what we say or do in those communities. Who we are offline is not supposed to matter. But it does matter, when it comes to evaluating what people say about offline topics, like science and politics. The more time we spend in the Internet’s egalitarian communities, the more contempt we might ultimately have for information about a person’s real-world credibility. The very notion of personal credibility, or reliability, is ultimately under attack, I think. On a certain utopian view, no one should be held up as an expert, and no one should be dismissed as a crackpot. All views, from all people, about all subjects, should be considered with equal respect.

Danger, Will Robinson! Personal credibility is a universal notion; it can be found in all societies and throughout recorded history. There is a good reason that it is universal, as well: knowledge of a person’s credibility, or lack thereof, is a great time-saver. If you know that someone knows a lot about a subject, then that person is, in fact, more likely to be correct than some random person. Now, the expert’s opinion cannot take the place of thought on your part; usually, you probably should not simply adopt the expert’s opinion. It is rarely that simple. But that doesn’t mean the information about personal credibility is irrelevant or useless.

Two ideas for a solution

So far, I have mainly been criticizing the Internet, which you might find odd for me to do. After all, I work online.

I don’t think that the Internet is an unmitigated bad influence. I won’t bore you by listing all the great things there are about the Internet, like being able to get detailed information about every episode of Star Trek, without leaving home, at 3 AM. Besides, I have only focused on a small number of problems, and I don’t think they are necessarily Earth-shatteringly huge problems, either. But they are problems, and I think we can do a little bit to help solve them, or at least mitigate them.

First, we can make a role for experts in Internet communities. Of course, the role should be designed so that it does not conflict with what makes the community work. Don’t simply put all the reins of authority in the hands of your experts; doing that would ensure that the project remains a project by and for experts, of relatively little broader impact. But give them the authority to approve content, for example, or to post reviews, or to perform other modest but useful tasks.

My hope is that, when the general public works under the “bottom up” guidance of experts, this will have some good effects. I think the content such a community produces would be more reliable than the run of the mill on the Internet. I would also hope that the content itself would be more conducive to seeking knowledge instead of mere information, simply by modeling good reasoning and research.

I do worry, though, that if expert-reviewed information online were to become the norm, then people might be more likely to turn off their critical faculties.

Second, we can create new communities in which real names and identities are expected, and we can reward people in old communities for using their real names and identities. This is something that Amazon has done, for example, with its “real name” feature on product reviews. If contributors are identified, we could use the same sort of methods to evaluate what they say online that we would use if we were to run into them on the street.

I began by laying out a general problem: superabundance of information online is devaluing knowledge. I don’t know if we can really solve this problem, but the two suggestions I just made might go a little way to making it a little better. If we include a modest role for experts in more of our Internet communities, we’ll have better information to begin with, and better role models. Moreover, if we identify the sources of our information, we will be in a better position to evaluate it.

The New Politics of Knowledge

Speech delivered at the Jefferson Society, University of Virginia, Charlottesville, Virginia, November 9, 2007, and at the Institute of European Affairs, Dublin, Ireland, September 28, 2007, as the inaugural talk for the IEA's "Our Digital Futures" program.

I want to begin by asking a question that might strike you as perhaps a little absurd. The question is, "Why haven't governments tried to regulate online communities more?" To be sure, there have been instances where governments have stepped in. For instance, in January of last year in Germany, the father of a deceased computer hacker used the German court system to try to have an article about his son removed from the German Wikipedia. As a result, the German Wikipedia site actually went offline for a brief period. It's come back online, of course, and in fact the article in question is still up.

Here's another example. In May of last year, attorneys general from eight U.S. states demanded that MySpace turn over the names of registered sex offenders lurking on the website, which, as you probably know, is heavily frequented by teenagers. The website deleted the pages of some 7,000 registered sex offenders. And the following July, MySpace said that in fact some 29,000 registered sex offenders had accounts, which were subsequently deleted.

Those are just a few examples. But we can make some generalizations. The Internet is famously full of outrageously false, defamatory, and offensive information, and is said to be a haven for criminal activity. This leads back to the question I asked earlier: why haven't governments tried to regulate online communities even more than they have?

We might well find this question a little absurd, especially if we champion the liberal ideals that form the foundation of Western civil society. Indeed, no doubt one reason is our widespread commitment to freedom of speech. But consider another possible reason—one that, I think, is very interesting.

Governments, and everyone else, implicitly recognize that social groups, however new and different, have their own interests and are usually capable of regulating themselves. It is a truly striking thing that people come together from across the globe and, out of their freely donated labor and strings of electrons, form a powerful new corporate body. When they do so—as I have repeatedly observed—they develop a sense of themselves as a group, in which they invest some time and can take some pride, and which they govern by rules.

In fact, these groups are a new kind of political entity, the birth of which our generation has been privileged to witness. Such groups are not like supra-national organizations, like the United Nations; nor are they like international aid organizations, like Doctors Without Borders; nor are they quite like international scientific groups, like the Intergovernmental Panel on Climate Change. These groups exist, and carry on their primary activity, entirely online. Their membership is self-selecting, international, and connected online in real time. This makes it possible for enormous numbers and varieties of groups to arise, of arbitrary size and arbitrary nationality, to achieve arbitrary purposes. They essentially make up a new kind of political community, a cyber-polity if you will, and so there is a presumption that they can regulate themselves. Government steps in, as in the case of MySpace, only when they cannot regulate themselves responsibly.

The idea that online communities are a kind of polity is, I think, very suggestive and fruitful. I want to talk in particular about how online communities, considered as polities, are engaged in a certain new kind of politics—a politics of knowledge. Let me explain what I mean by this.

Speaking of a "politics of knowledge," I assume that what passes for knowledge, or what we in some sense take ourselves to know as a society, is determined by those who have authority or power of a sort. You don't of course have to like this situation, and you might disagree with the authorities, or scoff at their authority in some cases. Nevertheless, when for example professors at the University of Virginia say that something is well known and not seriously doubted by anyone who knows about the subject, those professors are in effect establishing what "we all know," or what we as a society take ourselves to know. Since those professors, and many others, speak from a position of authority about knowledge—a powerful force in society—surely it makes some sense to speak of a politics of knowledge. I just hope you won't understand me to be saying that what really is known, in fact, is determined by whoever happens to be in authority. I'm no relativist, and I think the authorities can be, and frequently are, wrong.

If we talk about a politics of knowledge, and we take the analogy with politics seriously, then we assume that there is a sort of hierarchy of authority, with authority in matters of knowledge emanating from some agency that is "sovereign." In short, if we put stock in the notion of the politics of knowledge, then we're saying that, when it comes to knowing stuff, some people are at the top of the heap.

Our new online communities—our cyber-polities—are increasingly influential forces, when it comes to the politics of knowledge. When Wikipedia speaks, like it or not, people listen. So in this talk I want to discuss in particular something I call the new politics of knowledge. Any talk of a new politics of knowledge raises questions about what agency is sovereign. Well, it is often said that in the brave new world of online communities, everyone is in charge. Time Magazine's "Person of the Year" is, by practice, usually some influential political figure. When it named "You" the Person of the Year last year, Time wasn't breaking with that practice: it was rightly claiming that, through Internet communities, we are all newly empowered. In the new politics of knowledge, we can all, through blogs, wikis, and many other venues, compete with real experts for epistemic authority—for power over what is considered to be known.

If this sounds like a political revolution, that's because it is. It is frequently described as a democratic revolution. So what I'm going to do in the rest of this talk is examine in exactly what sense the new cyber-polities, like Wikipedia, do indeed represent a sort of democratic revolution. This discussion will have the interesting result that we should be more concerned than we might already be about the internal governance of Internet communities—because that internal governance has real-world effects. And I will conclude by making some recommendations for how cyber-polities should be internally governed.

As a philosopher, I find myself impelled to ask: what exactly is democratic about the so-called Internet revolution?

Democracy in one very basic sense means that sovereignty rests ultimately with the people, that is, with all of us. Bearing that in mind, the new Internet revolution might be democratic, I think, both in a narrow sense and in a broad sense. The narrow sense concerns project governance: the new content production systems are themselves governed ultimately by the participants, and for that reason can be called democratic. In the broad sense, the Internet revolution gives everyone "a voice" which formerly many did not have, a stake in determining "what is known" not just for a narrow website or Internet practice, but for society as a whole. To draw the distinction by analogy, we might say that each online community has a domestic policy, about its own internal affairs, and a foreign policy, through which it manages its influence on the world at large.

Now, I'd like to point something out that you might not immediately notice. It is that the broad sense depends in a certain way on the narrow sense. The contributors are ultimately sovereign in various Internet projects, and that is precisely why they are able to have their newfound broader influence over society. Let's take Digg as an example. This is a website that allows people to post any link, and then others vote, a simple up or down, on whether they "digg" the link. It's one person, one vote. Of course, no one checks anybody's credentials on Digg. The highest-voted links are placed most prominently on the website. So the importance of a Web article, and presumably whatever the article has to say, is determined democratically, at least as far as the Digg community goes. But Digg's influence goes beyond its own community. A relatively obscure story can become important by being highly rated on Digg. In this way, all those people voting on Digg—and they can be as expert as you hope, or as uneducated, ignorant, biased, immature, and foolish as you fear—can wield a power to highlight different news stories, a power hitherto usually reserved to professional journalists.

Similarly, Wikipedia articles are now well known for being the #1 Google search result for many popular searches. Any website with that much reach is, like it or not, very influential. That is, in effect, practical epistemic authority. That is real authority, given to anyone who has the time and patience to work on Wikipedia and do the hand-to-hand battle necessary to get his or her edits to "stick" in Wikipedia articles. That power, to define what is known about a general topic, was formerly reserved to the professional intellectuals who wrote and edited encyclopedias, and more broadly to experts generally speaking. And again, of course, no one checks anybody's credentials before they get on Wikipedia. So amateurs are to some extent displacing experts in the new politics of knowledge.

So that's why we call the Internet revolution democratic. But this needs some qualification. There is one fundamental reason that we describe as "democratic" such websites as Digg, Wikipedia, MySpace, YouTube, and all the rest, and that is that anyone can, virtually without restriction, go to the website and get involved. This, however, is only to say that they have a certain benchmark level of "user empowerment," which we might call the "right to contribute." But frequently, a large variety of governance structures are superimposed upon this basic "right to contribute." While the content is generally determined by largely self-governing contributors, some policies and decisions are left in the hands of the website owners—as at Slashdot and YouTube—who are officially answerable to no one else within the project. Granted, if these privileged persons anger their contributors, the contributors can vote with their feet—and this has happened on numerous occasions. And in some cases, such as Wikipedia, the community is almost completely self-governing. Still, we probably should qualify claims about the democratic nature of cyber-polities: just because there is a basic right to contribute, it does not follow that there will also be an equal right to determine the project's internal governance.

So, as I said before, the Internet revolution is democratic in the broad sense because it is democratic, however qualifiedly, in the narrow sense. In other words, internal Web project governance bears directly on real-world political influence. But just how closely connected are Web community politics and real-world influence?

Consider Wikipedia again—and I think this is particularly interesting. If you've followed the news about Wikipedia at all in the last few years, you might have noticed that when larger changes are made to its policies, they are no longer of interest just to its contributors. They are of interest to the rest of the world, too. They get reported on. Two recent news items illustrate this very well.

First item. A few months ago, a student posted a website, called the WikiScanner, that allows people to look up government agencies and corporations to see just who has been editing which Wikipedia articles. This was fairly big news—all around the world. I was asked to comment on the story by reporters in Canada and Australia. Journalists think it's absolutely fascinating that someone from a politician's office made a certain edit to an article about that politician, or that a corporation's computers were used to remove criticisms of the corporation. At the same time, reporters and others observe that Wikipedia's anonymity has allowed people to engage in such PR fiddling with impunity. And that is the interesting internal policy point: anyone can contribute to Wikipedia without identifying him- or herself. You can even mask your IP address, which those political aides and corporate employees should have done; all they had to do was make up some random username, which one can still do without giving Wikipedia an e-mail address, and then the WikiScanner couldn't have tracked their IP addresses. Nobody who was signed in was caught by the WikiScanner. Anyway, it was an internal policy that has had some very interesting external ramifications.

Second item. It was reported recently by the London Times that the German Wikipedia would be changing its editing system: in the future, all edits by unregistered and newer contributors will have to be approved by more established contributors before they can appear on the website. In fact, this was old news—the system described has been under development for well over a year, and it still hasn't been put into use. Nevertheless, it has been touted as a very big concession on the part of Wikipedia. It is said now that Wikipedia has a role for "trusted editors" on the website, but this is incorrect; it has a role only for people who have been in the system for a while, and some of them can be very untrustworthy indeed. However unlikely the change is to have any significant effect, it was still presented as important news. And again, what was presented as big news was a change in internal policy, the policy about how the wiki can be edited by newer and anonymous contributors. This is supposed to be important because it might help make Wikipedia a more responsible global citizen.

In general, it is becoming increasingly clear that the "domestic policy," so to speak, of cyber-polities is closely connected with their real-world impact. Wikipedia isn't the only example I might give. Here's another—although in this case, the effect is economic, not epistemic. There is an amazingly huge website, called craigslist, which lists, they say, over 12 million new classified ads every month. This website has proven to be a real thorn in the side of local newspapers, which depend on revenue from ads. Increasingly, people are posting their classified ads on craigslist instead of in their local newspapers. This is the effect of an internal policy: anyone can post an ad for free, except for employment ads in certain markets. What might have originally seemed to be an optional feature of a small Web community has turned out, in fact, to cost jobs at newspapers.

But let's get back to the politics of knowledge. In the intellectual sphere, I think the full power of collaboration and aggregation has yet to be demonstrated. Try to imagine Wikipedia done right—not just enormous, but credible and well-written. If this sounds impossible to believe, consider that just a few years ago, Wikipedia itself, a reasonably useful general encyclopedia with over two million articles in English, would have sounded equally impossible. I can tell you that, when Wikipedia was first starting out, there were many people who sneered that we didn't have a chance.

Let me describe briefly my new project, which is relevant here. It is called the Citizendium, or the Citizens' Compendium. It is a non-profit, free wiki encyclopedia that invites contributions from the general public—and to that extent it's like Wikipedia. There are three very important differences, however. First, we require the use of real names and do not allow anonymous contribution; we also require contributors to submit at least a brief biography. So we all know who we're actually working with. Second, we distinguish between rank-and-file authors, who need no special qualifications, and editors, who must demonstrate expertise in a field; our editors may approve articles, and they may make decisions about content in their areas of expertise. Still, they work side-by-side with authors on the wiki. Nobody assigns anybody any work; it's still very much a bottom-up process. Third, we are a rather more mature community. All contributors must sign on to a sort of social contract, which states the rules of the community; we expect people to behave professionally; and we have people called "constables" who are actually willing to enforce our rules by kicking out troublemakers.

So how is the project going? We started a pilot project just over a year ago, and in that time we have created 3,500 articles, and we have over 2,000 authors and well over 200 expert editors on board. We also have more words than Wikipedia did after its first year—our average article is six times as long as the average Wikipedia article was after its first year. Our pace of article production has accelerated—it has doubled in the past 100 days or so and tripled since last January. We are pretty much free of vandalism, and I think our articles are of high quality for such a wide-open project. The project is doing rather well, and I think that we are probably, with continued development, poised to replicate Wikipedia's sort of growth. We too could have a million articles in under ten years.

Well, imagine that the Citizendium had a million articles, together with loads of ancillary reference material such as images, tables, tutorials, and so forth—all free, credible, and managed by experts. The sort of influence that such a website would wield would, I think, far outweigh Wikipedia's. The one thing that really holds Wikipedia back, from the end user's perspective, is its unreliability. So suppose there were a similar website that solved that problem.

If you ask me, this is something of a frightening prospect. After all, far too many students and even members of the general public already treat Wikipedia as if it were reliable. Already, for far too many students, Wikipedia is their only source of reference information. If humanity were to produce a similarly giant encyclopedia that were really reliable, you can just imagine how it would probably be received by the general public. It would become, essentially, the world's textbook and omnipresent reference library. There would be a general presumption that what it says is correct, and that anyone who asserts something in contradiction to it would have to explain themselves in as much detail as if they had contradicted the Encyclopedia Britannica today. Sure, a good encyclopedia can be wrong; but it usually isn't. Unlike Wikipedia, it's innocent until proven guilty.

This is frightening, I say, precisely because of how powerful such a resource would be. Imagine the article about, for example, the Iraq War, after it had been written and rewritten, and checked and rechecked, by hundreds of real experts. It would no doubt be a thing of beauty, as I think the Citizendium's best articles are. But it would also be taken as the starting-point for serious conversation. The claims it makes could have real-world political ramifications, as much as, if not more than, any U.N. report. So you can easily imagine the attention given to major changes of policy, or to internal rulings on controversial cases in the project. Again: the internal policymaking of a truly successful collaborative reference project would have major external consequences.

We don't want governments to take over or closely regulate collaborative projects, but if such projects continue to act as irresponsibly as Wikipedia has, I fear that governments might attempt to do so. That is, for me, a disturbing scenario, because in a civilized, modern, liberal society—one that deeply values the freedom of speech—the authority to say what we know is one power that should not be in the hands of the government. Every government regulation of an online collaborative community is a direct threat to the sovereignty of that community, and an implicit threat to the free speech of its members.

It is, therefore, extremely important that online projects with any influence be well governed. We want to remove every excuse governments might have for exerting their own political authority. At this point I might argue that Wikipedia's governance has failed in various ways, but the root problem is that Wikipedia is absolutely committed to anonymous contribution; this ultimately makes it impossible to enforce many rules effectively. However much oppressive bureaucracy Wikipedia layers on, it will always be possible for people to sidestep rules, simply by creating a new identity. The unreliability of Wikipedia's enforcement of its own rules, in turn, provides a deep explanation of the unreliability of its information. The pretentious mediocrities and ideologues, as well as the powerful vested interests—generally, anyone with a strong motive to make Wikipedia articles read their way—can always create new accounts if they are ousted. Wikipedia's content will remain unreliable, and it will continue to have various public scandals, because its governance is unreliable. And this, I'm afraid, opens Wikipedia up to the threat of government regulation. I wouldn't wish that on them, of course, and I don't mean to give anyone ideas.

After all, if the Citizendium's more sensible system succeeds, it will have the power to do far more damage than Wikipedia can. To get an idea of the damage Wikipedia can do, consider another example. In late 2005, John Seigenthaler, Sr., long-time editor of the American newspaper The Tennessean, was accused in a Wikipedia article of being complicit in the assassination of John F. Kennedy. Well, it was rather easy for him to protect his reputation by pointing out publicly how unreliable Wikipedia is. He simply shamed Wikipedia, and he came off looking quite good.

But imagine that Seigenthaler were accused by some better, more reliable source. Then he couldn't have gotten relief in this way; he no doubt would have had to sue. I hate the thought, but I have to concede that it is barely possible that the Citizendium could be sued for defamation. After all, the effect of defamation by a more credible source would be much more serious. Then the government might be called in, and this worries me.

As I said, my horror scenario is that the Citizendium grows up to be as influential as its potential implies, only to be overregulated by zealous governments with a weak notion of free speech. As I said at the beginning of this talk, I think cyber-polities can generally regulate themselves. But communities with poor internal governance may well incur some necessary correction by governments, if they violate copyright on a massive scale or if they permit, irresponsibly, a pattern of libel. Why should this be disturbing to me? Government intervention is perhaps all right when we are talking about child molesters on MySpace; but when we are talking about projects to sum up what is known, that is when more serious issues of free speech enter in.

You can think of government intervention in something like Wikipedia or the Citizendium as akin to government intervention in the content of academic lectures and the governance of universities. When this happens, what should be an unimpeded search for the truth risks becoming politicized and politically controlled.

But you can imagine, perhaps, a series of enormous scandals on Wikipedia that has government leaders calling for the project to be taken over by the Department of Education, or by some private entity that is nevertheless implicitly answerable to the government. Wikipedia is far from being in such a position now, but it is conceivable. The argument would go as follows:

Wikipedia is not like a university or a private club. It is open to everyone, and its content is visible around the globe, via the Internet. Therefore, it is a special kind of public trust, not unlike a public utility. Moreover, it has demonstrated its utter incapacity to manage itself responsibly, and this is of genuine public concern. The government is obligated, therefore, to take the management of Wikipedia into its own hands.

End of argument. Nationalization might seem hard to conceive, but it has happened quite a bit in the last century. Why couldn't it happen to something that is already a free, public trust?

As both an academic (or former academic, anyway) and as an online project organizer, the thought of this scenario bothers me greatly, and in fact I must admit that I have given it no small amount of thought in the last few years. Fear of government intrusions on what should be a fully independent enterprise is one reason that I have spent so much time in the last year working on a sensible governance framework for the Citizendium. In short, the best protection against undue government interference in open content projects is good internal governance. So let me describe the Citizendium's current governance and its future plans.

The Citizendium works now under an explicit Statement of Fundamental Policies, which calls for the adoption of a Charter, not unlike a constitution, within the next few months. The Charter will no doubt solidify the governance system we are developing right now. This system involves an Editorial Council which is responsible for content policy; a Constabulary which gets new people on board and encourages good behavior; and a Judicial Board which will handle conflict resolution and appeals. While editors will make up the bulk of our Editorial Council, both authors and editors may participate in each of these bodies. Each of these bodies will have mutually exclusive membership, to help ensure a separation of powers, and there will be some other checks and balances. In addition, I as Editor-in-Chief am head of an Executive Committee. But to set a positive precedent, before even launching the Citizendium I have committed to stepping down within two to three years, so that we have an appropriate and regular succession of leadership.

Another point of interest concerns the Editorial Council. It has adopted a digitized version of Robert's Rules of Order, and we have passed five resolutions using e-mail and the wiki exclusively. Recall that contributors must agree to uphold this system as a condition of their participation. They must also be identified by their real-world identities if they wish to participate—although we will make exceptions in truly extraordinary cases.

I think you can recognize what we are trying to build: a traditional constitutional republic, but moved online. Only time will tell, but my hope is that this nascent governance structure will help us to avoid some of the problems that have beset not just Wikipedia, but a wide variety of Web communities.

I have covered a pretty wide variety of topics in my talk. I hope you have been able to follow the thread, at least a little; I doubt I have spent all the time I would need to make everything perfectly clear. But let me sum up my main argument anyway. Online communities, I say, are political entities. As such, they can govern their own "domestic" affairs, as well as have various "foreign" or external effects. And so they can be democratic insofar as their members have authority internally or externally. I've discussed mainly one kind of authority, namely epistemic authority: the authority over what society takes to be knowledge.

Then I pointed out that the external authority a project has depends on its internal governance—and so, the more externally influential, the more important it is that we get the internal governance right. I pointed to Wikipedia as an example of a cyber-polity that is not particularly well-governed. I worried a fair bit about the fallout, in terms of government regulation, that this might incur. In part to help avoid such fallout, I have briefly sketched a governance system that the Citizendium uses, which is a traditional constitutional, representative republic—mapped online.