Sunday, August 13, 2017

Counselling Conversations

No scientific theory, however 'abstract' or 'objective', makes sense independently of its being part of a conversation, and a conversation requires more than one participant. Our intuitions about the independence of theory are possibly a consequence of our phenomenological encounters with reality, and of its intransigence in the face of our common desires for it to be other than it is. And perhaps also of our habit of writing things down, so rendering them, apparently and misleadingly, independent of any specific interlocutors.

It is because of this that an intense therapy session can feel like a productive seminar - new ways of talking are being explored in both. Also, in both circumstances, our private phenomenological spaces and our capacity to interact with one another seem to meet, or to engage most fluently. We acquire important insights and misleading metaphors here.

What has this to do with validity? (For which we always seem to seek an 'external' validation, of all things ...)

Well: Validity is grounded in the playability of the game, and 'reality' restricts the games that we will, or can, find playable. I'm inclined to think that this is the only epistemological role that some conception of reality has (or needs to have). That it is, in Bob's words, 'parsable' - though not, obviously, in any completely determinate way.

So, in our intense conversations, we find three 'worlds' (as Karl Popper might have agreed, though about little else here ...) - the world as we talk about it, the private phenomenological world which we feel drives our talk and actions, and 'reality' - the invisible and intractable generator of barriers and trips that, sometimes unexpectedly, appear upon our field of play.

And all this is fine, and, as I have argued, not only supplies an intelligible epistemological groundwork, but is the only way such a groundwork can be made intelligible.

What else does it have to say, though?

There are normative aspects to this - not least with respect to the attitudes we must take to interlocutors (for them to be interlocutors) - e.g. that we regard them as, broadly (and in context), honest and competent. This is not an easy norm to render as a set of rules, of course, so some of what follows may be implied by it anyway. However, I'm going to articulate a few thoughts I have:

Rogerian counselling (much mocked by the 'Eliza' caricature) is based on attitudes of positive regard, empathy, and congruence. The 'counsellor' takes the 'client' completely seriously, tries to see their world as they see it, and exhibits a kind of holistic honesty of practice that is more fundamental than literal truth telling. It is the relationship between this counsellor and the client that does the therapeutic 'work', and this relationship requires the whole presence of the counsellor.

Rogers drew his conclusions from what he saw in effective therapy, but believed that the 'person-centred' attitudes suggested a powerful approach to social and political conflicts.

I have worked as a person-centred counsellor, and would make the following observations:

(1)

A conversation in which one participant is the arbiter of sense quickly becomes unintelligible to all. This is because 'I am the arbiter of sense' can only make sense in its own terms, which are systematically opaque (we have to keep asking the arbiter how to make sense, to the point where even our questions do not make sense independent of the arbiter's adjudications ...).

In a person-centred counselling relationship, the counsellor is not an 'expert' - he or she does not diagnose, prescribe, or advise. (If there is an 'expert' in the room, it is the client - who knows much more about his or her own internal world than the counsellor does.) Instead, the counsellor is an interlocutor - a real person with whom the client can have a real conversation, which includes tacit negotiation of meanings and linguistic rules. Potential failures of intelligibility - if they are consequential for the relationship - are negotiated mutually. The participants explore new ways of talking.

(2)

Bertrand Russell is meant to have said that you should only criticise the strongest version of your opponent's position that you can construct. (There is some dispute about whether he followed his own advice, but sometimes it is hard to distinguish malice from cognitive failure.)

In a real conversation, we must engage with the motivations and emotional concerns of our interlocutors. This cannot be empty - condescension would be damaging to the relationship. We have to do this, because without some understanding here we do not know what they mean by the things that they say.

A good public example of what goes wrong here is the vacuous debate between fundamentalists on both sides of the 'god' debate. If each tried more seriously to understand what was going on for the other, there would be less stone-throwing and more discovery. (I am reminded of Desmond Tutu's remark that 'God is the potential for goodness that exists in humanity'.)

Even (!) among philosophers, it is clear that some positions (both positive and negative) have an appeal that is hard to justify in theoretical terms alone. Some empathic recognition of the emotional needs which drive these preferences might really help to clear the air.

(3)

The capacity for congruence is acquired through practice. It is acquired experientially, just as the capacity to speak is acquired experientially. We become honest and competent interlocutors through participating in conversations.

There are no external criteria of congruence, although incongruence is likely to reveal itself as a kind of 'bad faith'. That is, as an attempt to achieve some subterfuge - not always as clear cut as a lie, as it can often be, in a sense, 'unconscious'. Incongruence can arise from unprocessed conflicts in the counsellor (e.g. suppressed shame related to issues which the client wishes to explore). An important danger of incongruence in a counselling context is that it can lead to the counsellor responding to the client in a way that is not connected to the client's distress, but to the counsellor's.

In fact, of course, there cannot be any external criteria if congruence is closely related to honest competency, for exactly the same reasons that there can be no absolute criteria for successful translation (Davidson and Quine) or for rule-following (Kripke). Similarly, the possibility of congruence cannot be dispensed with without dispensing with the possibility of honest and competent interlocutorship (and so, unintelligibly, with the possibility of language).

---------------------------

There will be more on this. There is some relationship between the productivity of counselling conversations and the establishment of the semantic basis for formal reasoning that is worth exploring further.

Consciousness and Metaphysical Dissatisfaction

We each have an 'internal world' - a world of something like 'raw phenomenological states' - which seems to inform our shared public judgements. Or so I think, partly because many people seem to share my public judgement about this (!). More convincingly, I find that when I try to talk to someone about this internal world, there seems to be a good deal of congruence between aspects of my internal world and aspects of theirs.

As this kind of conversation between us progresses, however, two things become clear:

(1) That we cannot distinguish between (a) sharing our perceptions of our phenomenological state and (b) agreeing about how to talk about our phenomenological states.

(2) That there are some irreducible differences of perspective - I always see the world from within my 'sensorium', and my interlocutor always sees it from within theirs. I cannot see through their eyes, and they cannot see through mine. As Thomas Nagel has famously observed, it isn't even clear what my seeing their world as they do might mean.

And these two are not unrelated: The 'private perspective' issue seems to open a crack that allows a kind of linguistic scepticism to become intelligible: how do I know that you and I are not systematically confused when we describe, to each other, our phenomenological conditions? How do we know that our apparent agreement is not a product of this confusion?

This particular scepticism cannot be addressed recursively, because it does not threaten our general confidence that we can be intelligible to one another. We could be quite mistaken about phenomenological congruence, but still make sense otherwise. Somewhat bizarrely, for epistemological empiricists, agreement about how to talk about our phenomenological states is all that is required - it really doesn't matter whether this agreement is 'grounded' on some 'real' similarity.

(There are, of course, other problems with rendering an 'internal' scepticism about language intelligible - we need to imagine someone who understands how human language works, but 'sees', in some way, that human beings are systematically deceived by it. Their conclusions obviously cannot be represented in any human language, or in any language that can be translated into a human language - and so we find ourselves in Davidson and Quine's 'radical translation' maze. How can we establish that an 'untranslatable language' is a language?

If we cannot represent a belief in any conceivable language, how do we know what the belief is?

How can I possibly know what kind of sense I am making when I say 'George believes that language does not work'?)

The ‘internal awareness’ features of consciousness are, in so far as they are referred to in our shared language, ‘beetle in a box’ concepts. The grammar of the concept does not require that we all have the same experiences, in some independently verifiable way – it only requires that we have experiences which allow us to learn the shared grammar. In fact, issues of ‘sameness’ do not really arise here, as no independent public judgements about private sameness are possible. Even the question whether I have the ‘same’ private experience on different occasions is difficult to separate from my public engagement with the relevant grammar, and runs into ‘private language’ problems that cannot be resolved.

Consciousness fulfils a number of semi-explanatory roles, e.g. as a repository of self-awareness, and particularly of self-aware intentional states; as a collection of cognitive mechanisms revealed through certain kinds of intelligible behaviour, perhaps especially behaviour which seems to reveal an awareness of others’ 'cognitive states'; as the 'subject space' of perception; as a property of the 'soul substance'. And others. Not all mutually congruent…

Since some time in the 17th century, however, we seem to have been convinced that the 'internal world' somehow supports and validates what we say about the public world. If I tell you that the kettle is boiling, then this is because some features and mechanisms of my internal space have led me to be convinced that this is the case, and that I can be sure I am not mistaken. It is from this period that we began to think that even the meaning of what I say about the kettle is somehow tied up with this hypothesised inner theatre.

I suppose it should be obvious to any reader of this blog that I don't think anything remotely like this is true. I am, however, interested in why this kind of picture is appealing, and what (apparently unconnected) errors it might lead us into. Perhaps I am also interested in discovering other ways to think here.

Among the errors I include a certain kind of metaphysical conviction which is impossible to coherently articulate: that there is a substrate to the way we speak that is, somehow, represented in our speech; that warrants it, that makes it 'true'. And that there are respects in which this substrate 'looks' like what we see when we look out through the windows of our minds onto the landscape of the world. This picture, I think, underlies most 'realisms'.

It is incoherent because, of course, anything we say about it must also be a representation - e.g. of the relationship between what we say and the substrate - and so would require another substrate. Unlike mathematical correspondence, which is an articulated relationship between pairs of objects which are represented equally, aletheial correspondence is between an object which is a representation and one which is not; between a description, a name, or a theory and the 'thing' which is described, named, or hypothesised about, but which is only philosophically accessible through that same describing, naming, or hypothesising.

But what is the source of this model? Of the 'metaphysical dissatisfaction' which Barry Stroud finds so uncomfortable - and which I sense, but really want to give a very different kind of account of?

One very simple confusion has contributed to it, and that is between discovery and justification: failing to distinguish between how we become convinced of the truth of a statement, and how we justify it. Seeing may be believing with respect to the same subject, but my seeing is not your believing. If we both 'see something at the same time', justification may not be required, but these occasions are uncommon. And although justification may not be required, it turns out, notoriously, to be difficult to provide when it is. The request for justification immediately raises a doubt about whether we did 'see the same thing'. What starts out as a case of simple shared experience turns out to be quite difficult to give an account of.

I suppose that, for each of us, our sensoria are so viscerally part of our experience of our own cognitive processing that it is difficult to imagine that, from the point of view of shared theorising, they are epistemologically irrelevant. The feeling of certainty associated with a direct experience does not play a role in how we demonstrate the truth of what we say about it, unless that role is agreed with our interlocutors. It cannot be shown to have a necessary role unless no agreement is possible without it - and it is easy to show that this position is either false or question begging. I have no way of testing your account (to me) of your internal states that does not depend on my trusting your accounts of this kind...

In the film 'Blade Runner', a central plot device is Deckard's use of psychometric testing to distinguish 'replicants' from 'real' human beings. The romantic plot begins when he tests Rachael - a replicant who, unlike the others, is not trying to deceive him. She completely believes (should I say 'believes'?) in her own humanity, but fails his tests.

Under duress, we could all find ourselves in Rachael's predicament - in fact, it is easy to see her in any person emotionally abused to the point where their cognitive capacities are undermined. What is very plain about people in these circumstances is that - so far as can be reasonably determined - their internal world is scrambled by their social circumstances. There is no protected reservoir of sense and meaning that they can retreat to.

This is one problem with (slightly heavy handed) 'swamp man' and 'zombie' examples - from any socially realistic point of view, a creature judged to be of this kind would be driven quickly mad. Once again, science fiction shows more insight than philosophy ...

These examples are generally set up as problems for a certain kind of 'physicalist' or 'mechanist' metaphysics, but - of course - they are problems for any 'substance' based substrate. Or for any non-recursive substrate, for that matter. If we need 'consciousness' to give an account of sense or intelligibility, then we have no way of giving a sensible or intelligible account of consciousness.

I must be really clear here: I am not arguing that consciousness is some kind of illusion, or 'epiphenomenon', or must be given an account of in some other kind of spooky way. There are two anchors I think we can rely on: (1) believing that we are conscious is not irrational; (2) we do not need to give (and indeed cannot give) a reductionist account of it (because any such account will generate open question paradoxes).

(I see, and accept, that these have the consequence that not everything we rationally believe in is tractable to reductive scientific investigation. What others might not see is that they also have the consequence that introducing some kind of 'non-scientific rationality' - e.g. an irreducible supernaturalism - will be a blind alley. No substrate works here: not physicalism, not information theory, not mathematics, not theology, not sociology of knowledge. No substrate whose validity can intelligibly be questioned will work. My apologies to all those theorists of consciousness who work in these fields - many of them are doing interesting and useful work, but it is not about consciousness.)

The central reason why believing in consciousness is not irrational is that we must attribute it to an interlocutor as a condition of attributing intentional states. It is the attribution of intentional states which distinguishes an interlocutor from some other device which we might engage with using language-like processes. I do not believe that I talk to my phone, or that a tape recorder can speak, exactly because I do not attribute intentional states to these devices.

I cannot say 'X says it is raining', claim that X is an honest and competent interlocutor, and also say that X does not believe that it is raining, or that X does not know that X believes this. (Or, worse still, say that it is not possible for X to have intentional states at all). Saying 'It is raining' cannot be reduced to producing a string of characters, or making certain noises, or engaging with a signalling process. (I can, of course, wonder whether X is an honest and competent interlocutor - but I cannot resolve this by coming to some explicit agreement with X about it.)

The language of intentional states also seems to be useful when describing creatures without language, when we want to attribute actions to them (as opposed to behaviours). Freud, notoriously, introduced the possibility of 'unconscious' intentional states to explain the actions of his patients, with a similar motivation. In both of these cases, the attribution of intentional states is essential to the theory, but potentially dispensable (if some other theory were available). To regard someone as an interlocutor is, necessarily, to attribute conscious intentional states to them.

But what about Stroud's metaphysical dissatisfaction? And mine, for that matter ...

I might tritely say that we all have our private miseries, but this would not be fair. After all, these miseries have, explicitly or implicitly, influenced our theoretical discourses. I might even, I think, find someone who did not have these anxieties slightly disturbing - even deluded. (Zombie-like?)

Giving them a general characterisation - 'existential loneliness', for example - doesn't seem to help much, either.

I think it goes something like this: it is very hard for me to comfortably believe that I understand your intentional states, if I must also - however abstractly - recognise that your phenomenological world, your sensorium, may be radically - Nagellianly - different from mine. Somehow, accepting each other as honest and competent interlocutors seems to carry this baggage.

And I suppose the way I address this - basically emotional - issue is by keeping the question abstract, while giving you a nudge and saying 'can you see what I mean here?'

Saturday, August 12, 2017

A note from the sea ...

Our signals cannot decode themselves, and the result of linguistic decoding is not just a new signal, re-representing, re-encoding, some content or consequence of the original.

A 'theory of everything' would, somehow, have to decode itself. It would have to contain a theory of its own meaning. Perhaps a theory like this goes beyond what physics might aspire to, but I am not sure about this. Already we approach certain problems with attempts that few can understand...

(What good is it to me if the truth can only be expressed in Martian? Or perhaps what I write here is equally opaque ...)

When we contradict ourselves, we destroy meaning - but we do not guarantee meaningfulness just by avoiding contradiction. We cannot guarantee meaningfulness at all. We imagine that we 'talk' by exchanging these tokens, and find that even identifying and circumscribing the 'tokens' turns out to be a murky business. We think we can, with the tokens, write down 'rules' about how the tokens should be used. We have imagined that these rules can form an exhaustive account, or structure, or, perhaps machine ... but they cannot. As we tidy each room and photograph it, the lights go out - insight evaporates, rather than deepening.

We try new tokens: 'meaningfulness', 'intelligibility', 'philosophy' ... the pile gets higher, but we see no more from its summit than our own ability to build it, from which its content is derived.

Representation and Content

(S) "The statement S describes C"

This is a common construction, and it suggests a relationship between 'things' (events, objects) in the world and the assertions which represent them.

The statement that a list of such statements can be constructed must be a member of the list, since it is a statement about the way the world is - given the picture of 'description' that is suggested here, the possibility of such a list would be a fact about the world.

Also, and somewhat intractably, any statement about why this was the case would also have to be a member of the list. This obviously generates an open question (OQ) paradox, since membership of the list is a kind of truth test.

Finally, since any general account of representation could be used to generate such a list (to determine whether a statement was a member of it or not), any general account of representation generates an OQ paradox.

Taking representation - somewhat loosely, but, I suspect, validly - as a proxy for content, I think that any account of content must generate the same problem. This includes supernatural accounts (mind, emergence) etc.

Since we can't intelligibly deny that our statements have content (since such a denial would render itself unintelligible), we are left with only the possibility of a recursive account - an account which makes the nature of content, and the validity of attributing content, depend upon the possibility that some things we say (including our discussion of content) must have content.

This applies to our most 'basic' statements about the world - the kinds of simple statement of fact that a naïve materialist might wish to take as metaphysical fundamentals. Since we can only guarantee their content by a recursive account, they cannot be the kinds of fundamentals that are required.

And this also applies to basic statements about the tools of language.

If we think of these tools just in terms of strings of text and rules for manipulating them, we find that statements about the natures of the strings (what they contain) and the consequences of the rules can only be validated recursively. The illusion of simplicity here arises from the 'obviousness' of the basic presumptions - an obviousness which is rooted in, but not validated by, something like our 'language instinct'. Human beings just find some things more 'obvious' than others, and persist in trying to give this metaphysical or epistemological significance.

Imagining the world as mechanism - and intelligence as complex signalling and manipulation machinery - may be a useful heuristic, especially for those scientists engaged in enquiries which seek to produce models of this kind. But the usefulness of the heuristic, like the 'obviousness' of some 'atomistic' statements of fact or rule, is just epistemologically misleading.

Dennett World

There are machines which take large amounts of data (e.g. weather observations) and produce human-like summaries, such as the literal string [There will be a south-westerly breeze of about 15 knots in Fair Isle tomorrow].

Let's imagine such a machine. It is able to take inputs from various sources (e.g. cameras, microphones, the internet, etc.) and produce text strings. I'll restrict the output to the printable ASCII characters plus carriage return.
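A machine of this kind can be sketched in miniature (all names here are invented for illustration, and the sketch is far simpler than anything the thought experiment imagines): it maps a structured observation to a template string, and checks that its output is confined to printable ASCII plus carriage return, as stipulated.

```python
import string

# The permitted output alphabet: printable ASCII plus carriage return.
# string.printable includes some other whitespace, which we remove.
ALLOWED = (set(string.printable) - set("\t\n\x0b\x0c")) | {"\r"}

def summarise(obs: dict) -> str:
    """Render a structured weather observation as a human-like forecast string."""
    return (f"There will be a {obs['direction']} breeze of about "
            f"{obs['speed_knots']} knots in {obs['place']} tomorrow.")

out = summarise({"direction": "south-westerly",
                 "speed_knots": 15,
                 "place": "Fair Isle"})
assert all(c in ALLOWED for c in out)   # output stays within the alphabet
print(out)
```

The real machine's interest lies, of course, in the complexity hidden behind `summarise` - the 'internal processes' that get from camera and internet inputs to the template's slots.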

After a period of collecting data and running some internal processes, the machine produces the following string:

(A) [Bedford Road is a street which runs downhill from the top of Powis Terrace to a junction on St. Machar Drive very near the Zoology building.]

(I will put strings in square brackets, to distinguish them from sentences.)

The statement which an English speaker might presume the machine was making when it produced this string is (more or less) true.

There is clearly a causal chain of some kind from the concrete fact of Bedford Road to the output of the ASCII string. It involves the material of the street and its environs, the 'sensory' input from cameras etc., and the physical states of any other machines and objects with which our imagined machine has interacted.

(Remember that the production of an ASCII string can be rendered in entirely physical terms - e.g. states of bi-stable circuits, or of magnetic domains, or patterns of ink on paper etc.)
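To make the physical-rendering point concrete, here is a string of this kind reduced to a sequence of byte values, each of which could be realised as the states of eight bi-stable circuits, magnetic domains, or ink marks. This is only an illustration of the reduction, not an addition to the argument.

```python
# Render an ASCII string as integers (one per character, 0-127), and each
# integer as eight binary states -- the sort of pattern that circuits,
# magnetic domains, or ink on paper could realise.

s = "[Bedford Road is a street]"   # a shortened stand-in for string (A)
codes = list(s.encode("ascii"))    # e.g. '[' -> 91
bits = [format(b, "08b") for b in codes]
print(codes[:3])   # -> [91, 66, 101]
print(bits[0])     # -> '01011011'
```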

This machine is a creature in a Dennettian world. I will even allow it some 'emergence', in the sense that it may be far too complex a machine for any human being, or any group of human beings, to be able to follow its processes from input to output.

We might imagine that such a machine could be developed to the extent that it could answer any purely descriptive question about the world outside itself, such as 'Is it raining in Lima?'

There would be things it would never be able to do - such as produce comprehensive descriptions of all of its own processes of description. Turingesque limitations would prevent this. In particular, it could not produce the string:

(B) [The machine which produced this string can accurately describe the world]

as a result of its normal descriptive processes, such as the ones which produced the Bedford Road string.

This is because it would need to incorporate a reliable test of its own empirical accuracy in order to be able to do this. This would generate an open question paradox for the machine: either it is presently able to accurately describe the world, or it is not. However it was enhanced, the same question would arise.

Either the machine itself is the standard of empirical accuracy, or it has no general standard of empirical accuracy.
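The regress can be sketched directly (a toy illustration, not Dennett's formulation): any string the machine produces to certify its own accuracy is just another string whose accuracy is equally open to question, so the certification never closes.

```python
# Each attempt to certify a claim produces a further claim -- about the
# certification itself -- and the open question recurs at every level.

def certify(claim: str) -> str:
    """Wrap a claim in a further claim asserting its accuracy."""
    return f"[The claim {claim} is accurate]"

claim = "[This machine can accurately describe the world]"
for _ in range(3):
    print(claim)
    claim = certify(claim)
# However many times we iterate, the outermost claim remains uncertified.
```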

This means that there are 'facts' about the world which an English speaker would never see 'represented' in the strings this machine produced - no matter how long they waited, or how cunningly they questioned it. String (B), after all, represents such a fact.

This has a consequence for Dennett's picture of 'reality', which represents people as machines of the type I have described - only with a potentially wider range of inputs and outputs.

For Dennett's picture to work, all of the inputs and outputs would have to be potentially representable in text strings (since this is, in this picture, how we produce scientific and philosophical theories).