We each have an 'internal world' - a world of something like 'raw phenomenological states' - which seems to inform our shared public judgements. Or so I think, partly because many people seem to share my public judgement about this (!) More convincingly, I find that when I try to talk to someone about this internal world, there seems to be a good deal of congruence between aspects of my internal world and aspects of theirs.
As this kind of conversation between us progresses, however, two things become clear:
(1) That we cannot distinguish between (a) sharing our perceptions of our phenomenological states and (b) agreeing about how to talk about our phenomenological states.
(2) That there are some irreducible differences of perspective - I always see the world from within my 'sensorium', and my interlocutor always sees it from within theirs. I cannot see through their eyes, and they cannot see through mine. As Thomas Nagel has famously observed, it isn't even clear what my seeing their world as they do might mean.
And these two are not unrelated: The 'private perspective' issue seems to open a crack that allows a kind of linguistic scepticism to become intelligible: how do I know that you and I are not systematically confused when we describe, to each other, our phenomenological conditions? How do we know that our apparent agreement is not a product of this confusion?
This particular scepticism cannot be addressed recursively, because it does not threaten our general confidence that we can be intelligible to one another. We could be quite mistaken about phenomenological congruence, but still make sense otherwise. Somewhat bizarrely, for epistemological empiricists, agreement about how to talk about our phenomenological states is all that is required - it really doesn't matter whether this agreement is 'grounded' on some 'real' similarity.
(There are, of course, other problems with rendering an 'internal' scepticism about language intelligible - we need to imagine someone who understands how human language works, but 'sees', in some way, that human beings are systematically deceived by it. Their conclusions obviously cannot be represented in any human language, or in any language that can be translated into a human language - and so we find ourselves in the maze of Quine's 'radical translation' and Davidson's 'radical interpretation'. How can we establish that an 'untranslatable language' is a language?
If we cannot represent a belief in any conceivable language, how do we know what the belief is?
How can I possibly know what kind of sense I am making when I say 'George believes that language does not work'?)
The ‘internal awareness’ features of consciousness are, in so far as they are referred to in our shared language, ‘beetle in a box’ concepts. The grammar of the concept does not require that we all have the same experiences, in some independently verifiable way – it only requires that we have experiences which allow us to learn the shared grammar. In fact, issues of ‘sameness’ do not really arise here, as no independent public judgements about private sameness are possible. Even the question whether I have the ‘same’ private experience on different occasions is difficult to separate from my public engagement with the relevant grammar, and runs into ‘private language’ problems that cannot be resolved.
Since some time in the 17th century, however, we seem to have been convinced that the 'internal world' somehow supports and validates what we say about the public world. If I tell you that the kettle is boiling, then this is because some features and mechanisms of my internal space have led me to be convinced that this is the case, and that I can be sure I am not mistaken. It is from this period that we began to think that even the meaning of what I say about the kettle is somehow tied up with this hypothesised inner theatre.
I suppose it should be obvious to any reader of this blog that I don't think anything remotely like this is true. I am, however, interested in why this kind of picture is appealing, and what (apparently unconnected) errors it might lead us into. Perhaps I am also interested in discovering other ways to think here.
Among the errors I include a certain kind of metaphysical conviction which is impossible to coherently articulate: that there is a substrate to the way we speak that is, somehow, represented in our speech; that warrants it, that makes it 'true'. And that there are respects in which this substrate 'looks' like what we see when we look out through the windows of our minds onto the landscape of the world. This picture, I think, underlies most 'realisms'.
It is incoherent, because, of course, anything we say about it must also be a representation - e.g. of the relationship between what we say and the substrate - and so would require another substrate. Unlike mathematical correspondence, which is an articulated relationship between pairs of objects which are represented equally, aletheial correspondence is between an object which is a representation and one which is not; between a description, a name, or a theory and the 'thing' which is described, named, or hypothesised about, but which is only philosophically accessible through that same describing, naming or hypothesising.
But what is the source of this model? Of the 'metaphysical dissatisfaction' which Barry Stroud finds so uncomfortable - and which I sense, but really want to give a very different kind of account of?
One very simple confusion has contributed to it, and that is between discovery and justification: failing to distinguish between how we become convinced of the truth of a statement, and how we justify it. Seeing may be believing with respect to the same subject, but my seeing is not your believing. If we both 'see something at the same time', justification may not be required, but these occasions are uncommon. And although justification may not be required, it turns out, notoriously, to be difficult to provide when it is. The request for justification immediately raises a doubt about whether we did 'see the same thing'. What starts out as a case of simple shared experience turns out to be quite difficult to give an account of.
I suppose that, for each of us, our sensoria are so viscerally part of our experience of our own cognitive processing that it is difficult to imagine that, from the point of view of shared theorising, they are epistemologically irrelevant. The feeling of certainty associated with a direct experience does not play a role in how we demonstrate the truth of what we say about it, unless that role is agreed with our interlocutors. It cannot be shown to have a necessary role unless no agreement is possible without it - and it is easy to show that this position is either false or question begging. I have no way of testing your account (to me) of your internal states that does not depend on my trusting your accounts of this kind...
In the film 'Blade Runner', a central plot device is Rick's use of psychometric testing to distinguish 'replicants' from 'real' human beings. The romantic plot begins when he tests Rachel - a replicant who, unlike the others, is not trying to deceive him. She completely believes (should I say 'believes'?) in her own humanity, but fails his tests.
Under duress, we could all find ourselves in Rachel's predicament - in fact, it is easy to see her in any person emotionally abused to the point where their cognitive capacities are undermined. What is very plain about people in these circumstances is that - so far as can be reasonably determined - their internal world is scrambled by their social circumstances. There is no protected reservoir of sense and meaning that they can retreat to.
This is one problem with (slightly heavy handed) 'swamp man' and 'zombie' examples - from any socially realistic point of view, a creature judged to be of this kind would be driven quickly mad. Once again, science fiction shows more insight than philosophy ...
These examples are generally set up as problems for a certain kind of 'physicalist' or 'mechanist' metaphysics, but - of course - they are problems for any 'substance' based substrate. Or for any non-recursive substrate, for that matter. If we need 'consciousness' to give an account of sense or intelligibility, then we have no way of giving a sensible or intelligible account of consciousness.
I must be really clear here: I am not arguing that consciousness is some kind of illusion, or 'epiphenomenon', or must be given an account of in some other kind of spooky way. There are two anchors I think we can rely on: (1) believing that we are conscious is not irrational (2) we do not need to give (and indeed cannot give) a reductionist account of it (because any such account will generate open question paradoxes).
(I see, and accept, that these have the consequence that not everything we rationally believe in is tractable to reductive scientific investigation. What others might not see is that they also have the consequence that introducing some kind of 'non-scientific rationality' - e.g. an irreducible supernaturalism - will be a blind alley. No substrate works here: not physicalism, not information theory, not mathematics, not theology, not sociology of knowledge. No substrate whose validity can intelligibly be questioned will work. My apologies to all those theorists of consciousness who work in these fields - many of them are doing interesting and useful work, but it is not about consciousness.)
The central reason why believing in consciousness is not irrational is that we must attribute it to an interlocutor as a condition of attributing intentional states. It is the attribution of intentional states which distinguishes an interlocutor from some other device which we might engage with using language-like processes. I do not believe that I talk to my phone, or that a tape recorder can speak, exactly because I do not attribute intentional states to these devices.
I cannot say 'X says it is raining', claim that X is an honest and competent interlocutor, and also say that X does not believe that it is raining, or that X does not know that X believes this. (Or, worse still, say that it is not possible for X to have intentional states at all). Saying 'It is raining' cannot be reduced to producing a string of characters, or making certain noises, or engaging with a signalling process. (I can, of course, wonder whether X is an honest and competent interlocutor - but I cannot resolve this by coming to some explicit agreement with X about it.)
The language of intentional states also seems to be useful when describing creatures without language when we want to attribute actions to them (as opposed to behaviours). Freud, notoriously, introduced the possibility of 'unconscious' intentional states to explain the actions of his patients, with a similar motivation. In both of these cases, the attribution of intentional states is essential to the theory, but potentially dispensable (if some other theory were available). To regard someone as an interlocutor is, necessarily, to attribute conscious intentional states to them.
But what about Stroud's metaphysical dissatisfaction? And mine, for that matter ...
I might tritely say that we all have our private miseries, but this would not be fair. After all, these miseries have, explicitly or implicitly, influenced our theoretical discourses. I might even, I think, find someone who did not have these anxieties slightly disturbing - even deluded. (Zombie-like?)
Giving them a general characterisation - 'existential loneliness', for example - doesn't seem to help much, either.
I think it goes something like this: it is very hard for me to comfortably believe that I understand your intentional states, if I must also - however abstractly - recognise that your phenomenological world, your sensorium, may be radically - Nagellianly - different from mine. Somehow, accepting each other as honest and competent interlocutors seems to carry this baggage.
And I suppose the way I address this - basically emotional - issue is by keeping the question abstract, while giving you a nudge and saying 'can you see what I mean here?'