Wednesday, July 07, 2010

Talking machines (revisited)

Being able to talk is not the same as, say, being able to produce some string of characters (like the one which comprises the coding of this post). We can see two instances of the same string of characters and say of one 'this was produced by an interlocutor, and is part of a conversation' and of another 'this was produced by a machine, and is a recording or representation (etc.)'. In principle, it wouldn't matter how long this string was.

It couldn't be the coding for all human conversation to date, however, because it wouldn't mean anything if it did.

If we say 'the whole of human conversation to date is a string produced by a machine', we would have to wonder what language we were saying this in. If we are including it within its own scope - as part of human conversation to date - then either (a) it is false or (b) it doesn't mean anything because it's just a string of code. If we are not including it within its own scope, we might wonder how it, uniquely, could mean something if everything that looked as though it had meant something up to that point was just a string of code. A language game cannot comprise a single sentence standing on its own.

Should we think of ourselves as being the ('deluded') components of some greater machine which we cannot 'comprehend'? This could only ever be a metaphor - we could not describe the machine we were 'part of' well enough to call it a machine. Those elements of its workings we did not understand we could not distinguish from randomness, or at least from the radically unpredictable. And there would always be elements we did not understand, because we are part of the machine, and we cannot model ourselves. The machine cannot produce a string which encodes a complete description of itself (including its capacity to produce this string).

Who would this description be for? What is the language within which this 'representation' would work? (i.e. would be a legitimate move...)

A machine can't have a private language either.

The halting problem is also in here somewhere ...

Suppose that we imagine the machine to be following some rule - we see that the strings it produces, when interpreted in a certain way, do not breach the no-contradictions rule. This wouldn't allow us to hand our adjudications on the rule over to the machine - it might not always behave correctly; it might break down. The machine can follow the rule, but it can't define it - and neither can any other machine.

(Maybe this is 'why' the halting problem arises ...)
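The diagonal argument behind the halting problem can be sketched in a few lines of Python. Here `halts` is a purely hypothetical oracle (an assumption for the sketch, not something that can actually be implemented), claimed to decide whether a program halts on a given input; the sketch shows why no machine can play this adjudicating role.

```python
def halts(program, arg):
    """Hypothetical oracle: returns True iff program(arg) eventually halts.
    Assumed for the sake of argument -- it cannot actually be written."""
    raise NotImplementedError("no such oracle exists")


def diagonal(program):
    """Do the opposite of whatever the oracle predicts about us."""
    if halts(program, program):
        while True:      # oracle says we halt, so loop forever
            pass
    else:
        return           # oracle says we loop, so halt immediately


# Feeding diagonal to itself: if halts(diagonal, diagonal) returned True,
# diagonal(diagonal) would loop forever; if it returned False,
# diagonal(diagonal) would halt. Either way the oracle is wrong about
# this one case, so no such oracle can exist.
```

The point of the sketch is the same as the rule-following point above: any candidate machine offered as the final adjudicator can be turned into a counterexample to itself.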

What do we do when we say 'this is right' or 'this is wrong'? We state a rule that we are going to follow - we specify a part of the theory of truth we are using. Our justifications of these statements are always 'incomplete', in the sense that they cannot look outside the language in which they are expressed. Our ability to agree empirical hinges is only different from our ability to agree logical or mathematical hinges in so far as it is more 'mysterious' to us (epistemologically). They are all rules which, if broken, render the conversation impossible - including, in extremis, any conversation in which we might be able to say why this conversation had become impossible.
