Thursday, November 06, 2025

Consciousness, Box Beetles, and Zombies

The 'problem of consciousness' hardly needs introducing, but I want to outline some features of it that might help to clear it up.

(1) Phenomenology: I see the world in my 'sensorium' and you see it in yours. We can't directly compare these, but we can't dismiss them either. It is absurd to say that they are 'illusory' (Dennett) because our illusions occur within them.

They feel epistemologically relevant, but it is impossible to show how, or that, this is the case. They can 'ground' conviction, but not demonstration. My seeing is not your believing.

(2) Intentionality: If I regard you as an honest and competent interlocutor, I must attribute the capacity for intentional states to you. If you make a serious assertion, I must believe that you believe that it is true. This is why Moore's 'paradox' feels paradoxical...

I'm not sure how I do this without also attributing a vehicle for your beliefs to you - an internal world, a sensorium, a phenomenological space, a capacity for conscious cognition. 'Consciousness' is a word we use to refer to the capacity to be in an intentional state, and the (essentially private) experiences that accompany this.

(3) There is no kind of 'theory' - scientific, metaphysical, theological, philosophical - that can account for (1) and (2), although there are many theories that can illuminate the puzzles that they generate.

(4) A corollary of (3) is that there is no scientific, metaphysical, theological or philosophical 'solution' to the Zombie Problem. There is nothing in the world of 'theory' that can, if you like, distinguish between 'machines with phenomenological sensoria' and 'machines without phenomenological sensoria'.

However, we do, as a matter of fact, make this distinction - sometimes badly, often contentiously, frequently surreptitiously ...

---

Consciousness, then, turns out to be an essential cognitive category - but one we cannot give a cognitively viable account of. This should sound familiar. The 'Problem of Consciousness' looks like another projection of the open question paradox, where a cognitively essential normative category includes within its scope any account of how it can be applied.

But where is the 'normativity'? And why 'essential'?

The second question is the easier - and the answer is pretty much provided by (2) above. If I can talk to you, I must attribute consciousness to you as essential to the capacity to have intentional states.

This kind of answer suggests that consciousness - along with the 'reliability' of mathematics, and our capacity to theorise about the world - is partly a grammatical category (in Wittgenstein's sense). It is a ground of intelligibility which can only be understood recursively. We can't demonstrate it, but we can reveal the incongruity of trying to make sense without it. (This is what Moore's paradox and its isomorphs do.)

But what about epistemology and phenomenology? Where does my own visceral sensorium (and yours ...) fit into this?

The potentially disturbing answer to this is a generalisation of Wittgenstein's 'box beetle' objection to a sensory semantics. Just as our private experiences of the applicability of a particular word need not be 'commensurable' in order for the word to make public sense, so our private experiences of what it is like to be able to speak in general need not inhabit the universe that we describe to one another in our conversations. Indeed, we would not experience them as 'private' if they did.

And for the same reason that we cannot render meaning in terms of private references, we cannot deliver 'sensory' epistemological fundamentals. Every attempt we make to do this will degenerate into public incoherence.

We must attribute consciousness to serious interlocutors. We can only attempt to give an account of consciousness within a conversation with a serious interlocutor (between you and me, for instance). An account of consciousness must entail a test of consciousness which can be articulated within the conversation (if it is to be meaningful). If I apply this 'test' to you, and you 'fail' it, we are no longer having a conversation - and so the essential vehicle for the account, and for the test, has evaporated.

The only 'test' of consciousness that can be applied is 'am I willing to have a conversation with you?' I clearly can't ask you this question if I am not. If I am, then we cannot articulate a useful theory of consciousness within the conversation without presupposing that we would both 'pass' it. Without such a test, the theory is vacuous.

And (by the way) if you and I articulate a theory of consciousness that we might apply to others - to those we do not regard as honest and competent interlocutors - we can only use non-semantic criteria. We might point to behaviour, perhaps, or to other observations of them that we agree between ourselves. We can't ask them. We must, in other words, draw conclusions about their beetles without even the initial inspiration of a shared word.

When I say my cat is hungry, I must be expressing a belief about what she would say if she could speak to me. No investigation of her that avoids this step can completely demonstrate whether I am right or wrong. (Kripke/Goodman paradox).

Sunday, August 17, 2025

First and Second Person

You and I are having a (somewhat one-sided) conversation. And I have just articulated the hypothesis that this is the case.

If it were not the case, then I wouldn't be able to do this.

It is a fact about the world that we are having this conversation.

So we have the following:

  • We can make unequivocal statements about the world. Synthetic a priori statements, if you like.
  • Semantic consistency is possible - 'conversation' is a generic term. Without other instances of conversations, it is hard to know what my claim that we are having one might mean.
Neither of these claims implies any epistemically prior enabling mechanisms. In fact, no such enabling mechanism can be specified which is better grounded than the claims themselves, since similar claims are presupposed by any account of an enabling mechanism that we might give.


Thursday, May 15, 2025

Living in Detail

Alfred North Whitehead is meant to have said: “We think in generalities, but we live in details.”

And a cheap jibe would be: "'Detail' is a general term." What kind of thing, after all, is a 'detail'?

I can say to you 'I unlocked my front door', and you believe that you know what I mean. You don't need to know the colour of my door, or the particular awkwardness of my lock, or even the street that I live in, in order to understand this statement. But I have all these things in my mind when I make it. I don't strip them away to the bare sense that you make of it.

You might even imagine your own front door, which I have never seen ...

If I said to you "Please unlock my front door", you would, of course, need more information - information that might look a bit like (but not be identical to) the colour of the door, the name of the street etc. that I have in my mind when I think about doing it.

If I'm giving you more 'details' I am bringing things into the 'generality' realm that weren't there before (so far as our conversation is concerned).

This possibility of exploration - bringing the 'details' into the 'generalities' - may look like it can be completed, but it obviously cannot. At the very least, it would take a very long time, and be very boring and pointless, to try to make a disciplined task of this. And even where novelists or poets pretend to this kind of project, they will still select and present - generally in a way that has, itself, semantic content.

Also, our direct experience is that something is always left behind, and something new is represented, in each telling. It feels like an 'open' process, not one that can be completed ...

The activity of conversation has, of course, a 'detail' aspect - it appears to be (but cannot be reduced to) a game of noises, postures, gestures, marks ... or the absence of these when they are 'expected'. Small particular neurological 'surprises' that capture our attention and prompt a prediction, a response.

This also could be mapped in increasing detail without producing a 'final' representation. Our semantic networks are also, necessarily, 'open'. This isn't to say that we 'don't know what we're saying' - it's just that we may discover implications after the fact, or even discover that we need to acknowledge an unexpected implication in order to remain intelligible to our interlocutors.

No machine will ever do this the way that a human being does. Human beings make machines - directly or conceptually - partly by artificially constraining or ignoring exactly the representational and semantic 'openness' that is unavoidable in real conversation; by pretending that it's only the things we can presently be sure about that really count; by imagining that the Kripke/Goodman paradox does not predict the generation of monsters by 'rule-following' machines.



Sunday, January 19, 2025

Semantic Consistency (2)

I made a mistake in my last post: 'from one occasion to another ...'.

Time, repetition, comparison, are also - in the sense I've been exploring - 'grammatical' categories and concepts. So when I say 'from one occasion to another' I'm 'projecting' these onto the world I'm describing.

I'm not saying here that time isn't 'real' - what I'm saying is that it can't be a ground of, or 'prior to', intelligibility (or of 'semantic consistency') because any claims we make about the 'reality of time' already presuppose minimal semantic consistency.

What can we say about this? What roots can we find?

Once again, all we need here is for 'minimal semantic consistency is required for intelligibility' to be true in our present conversation. If it is, of course, we are committed to making a similar claim about other conversations in the 'past', 'future', and elsewhere ... claims about semantic consistency become, again, just claims that we can, in fact, talk to each other.

So instead of saying that the world must be a certain way in order to validate semantic intelligibility, we say that because we are intelligible to one another, the world must permit this. The epistemological dependence goes in the opposite direction to the 'empirical' dependence. The epistemic argument is a modus tollens, not a modus ponens.


Saturday, January 18, 2025

Semantic Consistency

To a minimal extent, we need to believe that words have the same meanings on different occasions of use. We play with counters, marks and clicks and imagine that we are abstracting - and so create abstractions of 'counters', 'clicks', 'marks', 'abstraction'...

Kripke/Goodman considerations tell us that this belief cannot be grounded in any conception of material or behavioural consistency, because these permit a kind of ambiguity which would be catastrophic for semantic content. In order to talk about the world (in other words ...) we employ devices with normative characteristics that nothing in the world they represent can instantiate.
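The kind of ambiguity at issue can be sketched in code. The following toy example follows Kripke's own 'plus versus quus' presentation (the threshold 57 is his): two rules that agree on every case checked so far, so that no finite behavioural record settles which rule is being followed. It is offered as an illustration only, not as an argument the post itself makes.

```python
def plus(x, y):
    """Ordinary addition."""
    return x + y

def quus(x, y):
    """Agrees with addition whenever both arguments are below 57,
    and returns 5 otherwise (the threshold is Kripke's own example)."""
    if x < 57 and y < 57:
        return x + y
    return 5

# Every case observed so far (small arguments) is consistent with
# BOTH rules: the behavioural record underdetermines the rule.
observed = [(a, b) for a in range(10) for b in range(10)]
assert all(plus(a, b) == quus(a, b) for a, b in observed)

# Yet the two rules diverge on a new, unobserved case.
print(plus(60, 7))   # 67
print(quus(60, 7))   # 5
```

No amount of past agreement, in other words, fixes the 'semantic content' of the counters being manipulated; the divergence is always waiting beyond the cases so far inspected.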

Another reason semantic consistency can't be grounded in any empirical substrate is that it is a prerequisite for speculating about empirical substrates (or about anything ...).

Can the world enable semantic consistency without instantiating it? Or is the question: 'How can the world enable semantic consistency without instantiating it?'

This might be 'box beetle' and 'private language' territory. (Perhaps in a different guise.)

We might think, for instance, that the world only needs to make it seem to us that it enables semantic consistency (i.e. that it 'might not be real', or something like that?). A difficulty with this is that any speculation of this sort must, itself, depend on a presumption of semantic consistency. Even to speculate that some words may change their meanings, we need others to be consistent. I can't wonder whether my speculations about semantic consistency are actually about something else altogether - perhaps not even speculations.

(A sub-linguistic world of 'meanings' and 'propositions' doesn't really help here - this is definitely 'box beetle' territory.)