Why is consciousness mysterious?

So why, precisely, is consciousness mysterious? What is it, anyway? My view on this, in short, is that the weirdness, the mysteriousness, of consciousness lies primarily in the fact that it is an event, an activity, which is a kind of property of the brain. Much of the ontological weirdness of consciousness stems from the fact that properties (and events) have the same sort of weirdness we puzzle over when we think about the problem of universals. Just as properties aren’t things with spatial dimensions, so various mental events and properties aren’t things with dimensions.

When people open up heads and examine brains, they should no more expect to see thoughts bouncing around than, when holding a ball in hand, they should expect to see the abstract properties of roundness or redness. You can see only the object, which is round and red. Like the abstract roundness and redness that the ball exemplifies, thoughts aren’t things. They are properties of a certain kind, i.e., they’re events: happenings or goings-on.

Ah, you say, that’s too quick. We can see instances of some properties and instances of some events, to be sure; we can see this ball’s redness and that it’s rolling. But, you cleverly add, there is no way anyone will ever perceive an instance of someone else’s consciousness in the way the conscious person is aware of it. So consciousness is an unusual sort of property (or event), to be sure. I readily admit that. An outside observer can’t observe consciousness going on in the way that the person who is conscious can. But that’s because mental or conscious properties, unlike every other kind of property, are ones we are familiar with through introspection. Introspection is part of our equipment. A ball can’t (as far as we know) introspect and reflect on anything about itself. I can introspect and infer that you have thoughts, perceptions, and pleasures and pains similar to mine; but never will I, through introspecting, become aware of your thoughts, perceptions, and pleasures and pains. (That is, unless such a thing as “mind-reading” exists, which I doubt.)

Do we need to posit the existence of another ontological category (the irreducibly mental) in order to account for the “raw feels” or “qualia” of introspected consciousness? Well, no, we don’t actually. We know through research into the brain that certain thoughts, perceptions, and pleasures and pains—and here it’s hard to know what words to use—“are mapped onto” or “are caused by” or “have the underlying substrate of” certain sorts of brain events. If no perceptible brain events (of a certain sort), then no thoughts (of a certain kind); and if no thoughts (of that kind), then no perceptible brain events (of that sort). So when an MRI shows a certain area of the brain lighting up, you aren’t seeing a memory, because a memory is known and understood, irreducibly, by introspection. You can see evidence that a memory is taking place, though. Sufficiently advanced brain science might even indicate what the memory is of. But our perception or apprehension of the brain event will still be different from the introspective experience of the memory; it will not be the same as its raw feel or quale.

If you insist that this means I’m a dualist, because I’m saying something is irreducibly introspective (or mental) after all, then I’ll say that the irreducibility is similar to the irreducibility, again, of properties or events. It makes no more sense to say that a thought is some physical thing than that a property is a physical thing. A thought isn’t a thing at all. It belongs to a different ontological category, yes, but not because it’s mental; rather, because it’s an event (or a property).

Some part of the difficulty that some philosophers have with the mind-body problem, I’m convinced, is owing to a rather simple materialistic model of the universe: everything that exists is some physical object. But when you point out that there are, after all, physical properties, relations, events, sets or groups, etc., then they say, “Oh, well, that’s a different problem. At least they’re all physical.” Sure, but what makes them physical? That they are reducible to fundamental particles? Well, no. The color or weight or density of a rock is not reducible to fundamental particles, because properties can never be reducible to things. Properties are ontologically basic.

Once you start taking seriously the notion that there are a fair few (not an enormous number of) irreducibly basic concepts, concepts that cannot be semantically reduced, analyzed, or defined in terms of other things, then it becomes quite easy to say, “Well, mental properties are properties of bodies, because it’s bodies that have such properties, but we (the havers of those bodies) are acquainted with such properties only via introspection.”

If you have your wits about you, you will see another opening now. You will press me then to distinguish between the properties known by introspection and those that aren’t, or to define “introspection” without reference to some irreducibly mental feature. Maybe we could, armed with such a definition, invent a self-aware AI, or decide whether some AI really was self-aware.

To that I answer: that’s a scientific, not a philosophical, question. It’s a question about the brain, or about systems that share whatever feature brains have that makes them (sometimes) exhibit consciousness. I suppose brain science is getting closer and closer to an answer all the time. All a person can tell you is when he is conscious and of what he is conscious (and notice, if he’s telling you that, then not only is he conscious of something, he is introspecting that he is conscious of it). Then a scientist, wielding these reports, can gather the MRI (or whatever) evidence that is needed to see what distinguishes the brain events that are accompanied by consciousness (and introspection) from those that aren’t.

So when someone like Daniel Dennett (a philosopher I read before he was famous and cool) declares that consciousness doesn’t exist, my reaction is to say that it’s an overreaction to a hard problem that is poorly understood.


Please do dive in (politely). I want your reactions!

7 responses to “Why is consciousness mysterious?”

  1. Thank you for this interesting reflection on consciousness.

    The analysis of consciousness you’ve offered is very reminiscent, however, of Descartes: I think he talks in precisely those terms in the “Meditations” to distinguish extended reality (body) from those other objects of consciousness (mind) to which only introspection has access. But, not to worry: I’m convinced most well-known theorists today (and a great many of them are neuroscientists and neurophysiologists) are still beholden to Descartes.

    You might find Roger Scruton instructive here: https://www.thenewatlantis.com/publications/my-brain-and-i?fbclid=IwAR1ffrNNGwaM2ogOT8AkZJqyhHq7gG-_NKbwu5zRxC1r9swjc8LyaCtpiwk

    In any event, when it comes to this topic I’ve always considered myself a disciple of David Chalmers.

    1. Thanks for the comment, but my solution is nothing like Descartes’. Descartes thought the mind, like the body, was a substance, i.e., something in which properties inhere. I am saying, to the contrary, that mental events (like having a pain) and thoughts (like the cogito) happen within the gray matter. Descartes thought they happened within the mind and were themselves wholly mental.

      It is true that (like many philosophers) Descartes spoke of introspection (sometimes philosophers use the term “reflection”) as the way we can apprehend the contents of the mind, but he wasn’t unusual in this regard. I definitely fall into that tradition.

      My view is probably closest to (maybe even identical to; I just haven’t studied it recently enough to say, but I doubt it) the property dualism of Donald Davidson.

  2. If “mental events (like having a pain) and thoughts (like the cogito) happen within the gray matter,” then presumably MRI can capture them, leaving you with the typical “neurophilosopher’s reductionism”: thoughts, emotions, acts of introspection just are so many records of brain activity and nothing more. I think your best bet in this case is to say that human thoughts can be accessed only indirectly through brain-imaging techniques.

    Which, of course, raises the question, “What are the thoughts I’m having that the MRI captures as visible neuronal firing?”

    It’s sort of the way astrophysicists, for example, don’t rely only on visual inspection of the night sky (do scientists still look through those huge telescopes?) but on the computer programs that allow galaxy clusters the unaided eye can never detect to be digitized, recorded, and processed for study.

    Which raises the question, “What are those galaxy clusters the unaided eye can’t see and that only computer models and graphs can record?”

    Are the radiation, dark matter and vacuum of space the equivalent here of ideas?

  3. As a corollary to my previous post, I’d like to say that my position on the nature of consciousness is a “functionalist” one (after Ray Jackendoff). Let’s say that the mental and physical are “identical” and, after Searle, that the only way to talk about them will have to be through a language of descriptions. “Qualia” or direct accounts of experience (the evidence for them through brain-imaging techniques) and the reports of them (“I see a red circular ball”) really just amount to two ways of talking about the same experience.

    Experience of a red circular ball and brain-states (however recorded) are just different ways to access the same conscious experience.

    “Functionalism” takes us to a discussion of the function of brain states rather than metaphysical entities like thoughts, emotions, intuitions experienced in the brain; to the “computational mind” rather than any little Lockean person, sitting inside our heads, peering out at the world through a veil of primary and secondary qualities.

    We can, by extension, use AI models of intelligence as an analogy to the way humans think (without equating or making one a model for the other) and speak of minds as machines. The way the mind acts on, reports, and interacts with “qualia” or sensory data is machine-like, reduced to a pretty technical language of processes and outcomes.

    The conscious mind is aware only of mental processes that register reality in the way computers and high-level AI devices do. The mistake made by AI enthusiasts, however, is to equate AI with human intelligence precisely because of their functional similarities.

    This point was driven home lately when the IBM-made debating machine (“Project Debater”), with access to unlimited data and the most sophisticated information algorithms, lost to human champion debater Harish Natarajan. It’s reasonable to suppose that Project Debater couldn’t, as a champion debater could, tweak or nuance debate-topic-related information in a way that would win over the debating audience.

  4. Hi Larry,

    We don’t need to posit the existence of another ontological category; we just need to think clearly and qualitatively about the physics and “qualia” we already know about. Everyone only provides functional definitions of the qualitative word “red,” but this provides no qualitative meaning to such words. In order to know, qualitatively, what the word “red” means, you must indicate which physical properties or qualities it is a label for.

    Everything objective is merely an abstract description of physics. It is all devoid of qualitative meaning. For example, we know the name of the neurotransmitter glutamate, and everything about how glutamate behaves in a synapse, but what is that glutamate behavior qualitatively like, should we experience it directly, subjectively?

    The only thing that provides any qualitative meaning is the subjective, i.e., a redness quale we can experience. Is it not a hypothetical possibility that the objective label “glutamate” and the subjective label “redness” are qualitative labels for the same thing?

    There is an emerging expert consensus coalescing around these ideas over at Canonizer.com; it is now being called “Representational Qualia Theory” (see: https://canonizer.com/topic/88-Representational-Qualia/6 ). If experimentalists are able to verify this information and connect the subjective to the objective, it will enable us to falsify all but one of the competing sub-camps predicting the nature of qualia. We’ll be able to discover what it is that has a redness quality. This will enable us to objectively eff the ineffable nature of qualia with objectively justified statements like: “My redness is like your greenness.”
