The 'problem of consciousness' hardly needs introducing, but I want to outline some features of it that might help to clear it up.
(1) Phenomenology: I see the world in my 'sensorium' and you see it in yours. We can't directly compare these, but we can't dismiss them either. It is absurd to say that they are 'illusory' (Dennett) because our illusions occur within them.
They feel epistemologically relevant, but it is impossible to show how, or that, this is the case. They can 'ground' conviction, but not demonstration. My seeing is not your believing.
(2) Intentionality: If I regard you as an honest and competent interlocutor, I must attribute the capacity for intentional states to you. If you make a serious assertion, I must believe that you believe that it is true. This is why Moore's 'paradox' feels paradoxical...
I'm not sure how I do this without also attributing a vehicle for your beliefs to you - an internal world, a sensorium, a phenomenological space, a capacity for conscious cognition. 'Consciousness' is a word we use to refer to the capacity to be in an intentional state, and the (essentially private) experiences that accompany this.
(3) There is no kind of 'theory' - scientific, metaphysical, theological, philosophical - that can account for (1) and (2), although there are many theories that can illuminate the puzzles that they generate.
(4) A corollary of (3) is that there is no scientific, metaphysical, theological or philosophical 'solution' to the Zombie Problem. There is nothing in the world of 'theory' that can, if you like, distinguish between 'machines with phenomenological sensoria' and 'machines without phenomenological sensoria'.
However, we do, as a matter of fact, make this distinction - sometimes badly, often contentiously, frequently surreptitiously ...
---
Consciousness, then, turns out to be an essential cognitive category - but one we cannot give a cognitively viable account of. This should sound familiar. The 'Problem of Consciousness' looks like another projection of the open question paradox, where a cognitively essential normative category includes within its scope any account of how it can be applied.
But where is the 'normativity'? And why 'essential'?
The second question is the easiest - and the answer is pretty much provided by (2) above. If I can talk to you, I must attribute consciousness to you as essential to the capacity to have intentional states.
This kind of answer suggests that consciousness - along with the 'reliability' of mathematics, and our capacity to theorise about the world - is partly a grammatical category (in Wittgenstein's sense). It is a ground of intelligibility which can only be understood recursively. We can't demonstrate it, but we can reveal the incongruity of trying to make sense without it. (This is what Moore's paradox and its isomorphs do.)
But what about epistemology and phenomenology? Where does my own visceral sensorium (and yours ...) fit into this?
The potentially disturbing answer to this is a generalisation of Wittgenstein's 'beetle in a box' objection to a sensory semantics. Just as our private experiences of the applicability of a particular word need not be 'commensurable' in order for the word to make public sense, so our private experiences of what it is like to be able to speak in general need not inhabit the universe that we describe to one another in our conversations. Indeed, we would not experience them as 'private' if they did.
And for the same reason that we cannot render meaning in terms of private references, we cannot deliver 'sensory' epistemological fundamentals. Every attempt we make to do this will degenerate into public incoherence.
We must attribute consciousness to serious interlocutors. We can only attempt to give an account of consciousness within a conversation with a serious interlocutor (between you and me, for instance). An account of consciousness must entail a test of consciousness which can be articulated within the conversation (if it is to be meaningful). If I apply this 'test' to you, and you 'fail' it, we are no longer having a conversation - and so the essential vehicle for the account, and for the test, has evaporated.
The only 'test' of consciousness that can be applied is 'am I willing to have a conversation with you?' I clearly can't ask you this question if I am not. If I am, then we cannot articulate a useful theory of consciousness within the conversation without presupposing that we would both 'pass' it. Without such a test, the theory is vacuous.
And (by the way) if you and I articulate a theory of consciousness that we might apply to others - to those we do not regard as honest and competent interlocutors - we can only use non-semantic criteria. We might point to behaviour, perhaps, or to other observations of them that we agree between ourselves. We can't ask them. We must, in other words, draw conclusions about their beetles without even the initial inspiration of a shared word.
When I say my cat is hungry, I must be expressing a belief about what she would say if she could speak to me. No investigation of her that avoids this step can completely demonstrate whether I am right or wrong. (Kripke/Goodman paradox).