Monday, April 22, 2024

Bomb Disposal Revisited

In a 2007 post entitled "Meaning and Transmission", I (slightly) explored an example of good-faith falsehood. In a draft of a paper on Business Ethics that I put together a while later, I found a better development of the example:

-----

A claim, or a belief, that a conversational move is honest cannot be sustained if it is also recognised by interlocutors that the move has consequences which are inconsistent with the possibility of the conversation.  This follows from the characterisation given elsewhere in this blog – the method of recursion cannot produce inconsistent results; the language game must remain playable.  This has the consequence that any honest move must be consistent with the indefinite extension of the conversation (in principle) on the basis of any general rules, or attributed intentional states, implied by the move.

In order to explore this, I am going to outline a strong apparent counter-example, and then explain why it fails.  In the course of this, some more detailed aspects of the approach will become clear.

Imagine that A is a bomb-disposal expert, and that she is advising B how to disarm a bomb.  A is safe from the effects of making a wrong decision, but B will be blown up if he cuts the wrong wire.  B, with two wires left to cut – a red one and a blue one – asks A which one to cut first.  Both know that the wrong choice will be fatal to B.  A knows that the red wire should be cut first.

Now add the following circumstance:  B completely believes that A wants him dead, and wishes to exploit the current situation to that effect.  A knows that B believes this, but B does not know that A knows this.  A does not want B dead.

The obvious consequence of this is that if A is to save B’s life, she must tell him to cut the wrong wire – because whichever wire she tells him to cut, he will cut the other.

The interesting issues here are:

1. A, clearly, should not tell B the truth about which wire to cut first, if she wants to save his life.

2. We have no grounds for thinking that killing B is a good thing to do.  At least for A and for B, it is not a good thing to do.

3. By telling B to cut the blue wire, A confirms B’s suspicion that she is trying to kill him, and – presumably – does substantial damage to the possibility that B might regard A as an honest interlocutor.

In the absence of complicated and unlikely presuppositions, I think we would want to regard A’s participation in this conversation (‘Cut the blue wire’) as honest engagement.  However, it is both untruthful and fatal to the conversation (though not, happily, to B).

Why is this not a counter-example?

The answer is that it only works as a counter-example if something like the interpretation we have given to the circumstances is correct.  However, the grounds we have for thinking that it is correct are very peculiar.  If we try to imagine this situation without the explanatory gloss given to A’s intentional state – her knowledge, desires, and beliefs – B’s interpretation of the situation would seem the most likely.  In this circumstance, we have no behavioural, or indeed any public, evidence that A is trying to save B’s life – indeed, quite the contrary.

We can only come to know unambiguously what A’s intentional state is through having a conversation with A, with the presumption of honest (and now truthful) engagement that this requires.  We can be misled by the authorial access of a narrator here – it is only because we (improbably, and without explanation) ‘know the whole story’ that the counter-example seems to make sense.  Once we have spoken to A, it is no longer a counter-example, but a perfectly intelligible (and true) account given in A’s conversation with us.  In order to complete this, we would surmise that if B knew the ‘whole story’, and was an interlocutor in the shared conversation, B would share our interpretation – or if there was conflict, it would have to be resolved for the conversation to continue.

It can be difficult to untangle the various complications of self-reference, authorial access, behavioural implication, and conversational competence that this kind of example illustrates.  It’s worth setting some of these out in a summary:

4. Authorial access:  I have produced this counter-example in a conversation with you (the reader), and provided no account of how I came to know A’s intentions.  If I failed to provide this account when queried, the quality of my participation in this conversation would be in doubt.  Given the circumstances, a reliable account would need to refer to a conversation I had with A that was contiguous with the one I’m having with you.  Any other evidence would have to include this, at least indirectly – e.g. if a third party told me of A’s intentions, based on A’s account to them.

5. Behaviour:  An interpretation of A’s behaviour, in the absence of some conversation with A, would be unlikely to include the possibility that it was well intentioned.  It could not exclude it either, but there are a great many things it could not exclude (the Kripke/Goodman paradox, again).

6. Competence and honesty:  If, in some conversation I had with A, A had lied to me about her intentions (e.g. to cover up her attempt to kill B), then A would not have been an honest participant in that conversation, and so its status as a conversation would be degraded.  This either (a) could be discovered through further conversational experiment or (b) is not an intelligible possibility.

7. Self-reference:  While we might think we can intelligibly theorise about a circumstance where honest participation necessarily produced an ultimate conversational disaster, we can only do this by presuming that, at least for us, this disaster has not occurred.  The tragedy of A and B can only be narrated if someone can narrate; its details (which include their intentional states) are only credible if they have participated in some part of the extended conversation in which this narration takes place.