
Tuesday, September 08, 2009

Meanings and Rules

If we're going to say something, we need to have a way of working out what it means (well enough to play the move).

No possible algorithm for computing the meaning of a self-referential paradoxical statement terminates - there is no way of working out what such a statement means - so we can never play it as a move in the game. This is guaranteed by the way these statements are constructed - and they can only arise in this way. "This statement is false" can never intelligibly be asserted; and as it is the possibility of its intelligible assertion which would generate chaos, we are safe from it.
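To make the non-termination vivid, here is a minimal sketch (the representation and names are mine, not anything more than an illustration): a naive evaluator that tries to compute the truth value of the Liar sentence by following its self-reference simply recurses forever.

```python
import sys

def truth_value(sentence):
    """Evaluate a sentence represented as a zero-argument
    function returning its truth value."""
    return sentence()

# "This statement is false": its truth value is defined as the
# negation of its own truth value, so evaluation never bottoms out.
liar = lambda: not truth_value(liar)

sys.setrecursionlimit(500)
try:
    truth_value(liar)
except RecursionError:
    print("no truth value computed - the evaluation does not terminate")
```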

Tractable algorithms are a subset of the algorithms which can be finitely described. Some algorithms which can be finitely described are intractable. They loop; or do not produce a usable expression in a finite period of time (e.g. if required to calculate the exact digital expansion of a transcendental number); or they do not produce a usable expression in a relevant period of time (e.g. soon enough for us to use it).
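The transcendental-number case can be written down in a few lines. Taking Liouville's constant as the example (my choice; the post names no particular number), the algorithm is finitely described and every step is trivial, yet the 'exact digital expansion' is never delivered:

```python
from itertools import islice

def liouville_digits():
    """Yield the decimal digits (after the point) of Liouville's
    constant, which has a 1 at every position n! and a 0 elsewhere.
    The description is finite; the exact expansion never arrives -
    the generator only ever yields one more digit."""
    next_one, n = 1, 1      # the next position holding a 1 is n! = 1
    position = 0
    while True:
        position += 1
        if position == next_one:
            yield 1
            n += 1
            next_one *= n   # advance from n! to (n+1)!
        else:
            yield 0

# We can take as many digits as we like, but never all of them.
print(list(islice(liouville_digits(), 30)))
# 1s at positions 1, 2, 6 and 24; the algorithm itself never finishes.
```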

When we describe an algorithm in words, of course, we must have a tractable method of working out what the words mean in time to be able to use them in our description of the algorithm. This method, in its turn ...

So, of course, we have another paradox. Or another perspective on a familiar paradox.

Can we say that if we follow a method for working out what to say, then it is a method that we cannot describe? In one sense, this is obviously true: it's a practical fact that we can't explain how we speak. A computer which mimics speech need not also be able to produce a verbal explanation of how it does this. But is it necessarily true?

If the computer that mimicked speech were allowed to read and decompile the code it was running, it could state the algorithms it was following in the language in which they were written. It might also store in its memory a plan of its own hardware, a description of the compiler, and so on.

This might not count as an explanation, however. It would be something similar to what a neurologist interested in speech processing would do - attempting to produce an account of the hardware and software underlying the production of speech as a phenomenon.

Would the computer (or the neurologist) be able to say why what was being said was correct? Could it produce an argument for the reliability of its statements? Presumably, since it could speak (and since this is part of having the capacity to speak), it could produce arguments for the truth of individual statements that it made. But could it produce an argument for it, itself, being a reliable implementation of a speech algorithm? Could it produce an argument that it could, in general, speak properly - that it could, in general, tell the truth?

It can't do this by examining and reproducing its own code - this would only be an account of what it did, not of why it was correct.

If we read a short piece of computer code, we might see that it was correct - the code, for instance, for doing a binary search of a sorted list is almost self-explanatory when written out in a high-level language. Even here, though, there is a lot of debate about whether this 'self-explanatoriness' amounts to a proof; and the debate (I think?) virtually comes down to how we select our formalism for calculating proofs. The computer code is also a formalism of a kind, and if it is well-defined, it should produce its own proofs. Programmes written in a formally correct implementation of Lisp, for instance, will look like expressions in the lambda calculus - and if the interpreter/compiler correctly evaluates these (another question, of course ...) the results will be provably correct in that calculus.
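For concreteness, here is that binary search - a sketch in Python rather than Lisp, since the point is readability. Whether seeing that it is correct amounts to proving it correct is exactly the question at issue; the loop invariant in the comment is an informal gloss until it is formalised in some proof calculus.

```python
def binary_search(sorted_list, target):
    """Return the index of target in sorted_list, or None.
    Invariant: if target is present at all, it lies in
    sorted_list[lo:hi] - which is why the code feels
    'almost self-explanatory'."""
    lo, hi = 0, len(sorted_list)
    while lo < hi:
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            lo = mid + 1        # target can only be to the right
        else:
            hi = mid            # target can only be to the left
    return None

assert binary_search([2, 3, 5, 7, 11, 13], 7) == 3
assert binary_search([2, 3, 5, 7, 11, 13], 4) is None
```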

But all we have done, even here, is to show that an expression is a theorem of a formal axiomatic system which can be encoded in a computer programme which should (barring hardware faults) correctly implement the rules of the formalism. We can show that the speaking computer is designed to correctly follow the rules that the programmer has coded into it - we cannot show that these are the correct rules unless we allow that to speak correctly is simply to utter the theorems of some FAS or another ...
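Here is what 'uttering the theorems of some FAS or another' can look like in miniature. The system below is my own toy example (a Post-style string-rewriting system; the post specifies none): one axiom, two rules, and a programme that mechanically enumerates whatever is derivable. It can certify that a string is a theorem; it cannot certify that these are the right rules.

```python
from collections import deque

AXIOM = "I"

def rules(s):
    """The two rewrite rules of the toy formal system."""
    yield s + s              # rule 1: any theorem may be doubled
    if s.endswith("I"):
        yield s[:-1] + "IU"  # rule 2: a trailing I may become IU

def theorems(limit=10):
    """Mechanically enumerate theorems, breadth-first."""
    seen, queue = {AXIOM}, deque([AXIOM])
    while queue and len(seen) <= limit:
        s = queue.popleft()
        yield s
        for t in rules(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)

print(list(theorems()))
# ['I', 'II', 'IU', 'IIII', 'IIU', 'IUIU', ...] - everything the
# system can 'say'; whether these are the *correct* things to say
# is not a question the system can answer.
```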

And since this is something some people might once have believed, it's worth saying why it can't be true: for Kripkean reasons, the 'rules' of the system can only be made intelligible within a language which already works. The idea of a 'non-linguistic' rule is incoherent. We might think of 'laws of science' here, but these are part of our descriptive grammar - a world of which it was impossible to speak would have no laws. Our reason for thinking that there are rules 'in the world' is that it is only possible to describe a world of that kind, and we know (a priori) that we can describe the world in some way. (When we say we cannot, we are saying something about the world; and if we try to say this seriously we find we can't even say that we are saying something.)

We can predict, with high reliability, which rules the computer will follow - but we cannot define the rule as 'what the computer does', however much we might, in normal circumstances, practically rely on the computer to do the right thing. Such a definition would make a computer error formally unintelligible.

If we cannot give an account of rule following and formalism without, first of all, being able to give an account of something, then our capacity to give an account cannot be accounted for in terms of our following some specific set of rules. If it could, then we would have to say that it was not possible for the underlying substrate - the hardware, if you like - to make a mistake here. We would be like perfect computers. And we couldn't even test this perfection, because there would be no other standard: there would be no intelligible enquiry that we could undertake, because the very possibility of intelligible enquiry would depend on the reliability of the hardware.

It is perfectly possible - in fact psychologists have demonstrated that this is true - that the underlying hardware is systematically faulty. People are prone to making certain kinds of cognitive errors: they make incorrect inferences, they are not good at estimating risk, etc. What is the standard of correctness that the psychologists are using here? Why do we not think that this standard, as well, could just be the product of some faulty hardware?
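A standard illustration of the risk-estimation failure (the numbers are hypothetical - my example, not the post's): given a rare condition and a fairly accurate test, most people put the risk indicated by a positive result near the test's accuracy, when the correct figure is a short calculation away.

```python
# Base-rate example: a condition affects 1 in 1000 people; the test
# catches 99% of cases and gives a false positive 5% of the time.
# How worried should someone with a positive result be?
prevalence  = 0.001
sensitivity = 0.99   # P(positive | condition)
false_pos   = 0.05   # P(positive | no condition)

p_positive = sensitivity * prevalence + false_pos * (1 - prevalence)
p_condition_given_positive = sensitivity * prevalence / p_positive

print(f"{p_condition_given_positive:.1%}")  # about 1.9%, not ~95%
```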

We think it because there is an argument for the reliability of this standard - an argument which, if fully developed, would show that the standard depended on the possibility of there being any standards; of our being able to say anything true at all; of our being able to speak. Even if we never fully develop this argument, we promise it when we make a truth claim - and if we find the argument cannot be made, we withdraw the truth claim (on pain of becoming unintelligible otherwise).

It is our ability to speak that is the standard against which the rules are judged, not the rules which guarantee the reliability of what we say. We have been confused by the fact that good rules generate other good rules; so that we think this must be where all rules come from.

And also by the fact that allowing the assertion of contradiction or paradox allows the assertion of anything at all, and so of nothing. Our rules must be consistent; and we must only say things which mean something. We might also say that the rules implied by an intelligible discourse are consistent, and that it isn't possible to (properly) say something which is meaningless. If we discover that a previous statement commits us to contradiction or paradox, we reinterpret, revise, or withdraw it.
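That 'allowing the assertion of contradiction allows the assertion of anything at all' is the logicians' principle of explosion (ex falso quodlibet), and it can be checked mechanically - a small sketch:

```python
from itertools import product

# Ex falso quodlibet: if a contradiction were assertible, everything
# would be. Check that (P and not P) -> Q holds on every assignment.
for P, Q in product([True, False], repeat=2):
    assert (not (P and not P)) or Q   # the material conditional
print("(P and not P) -> Q holds in all four cases")
```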

So could there be 'hidden' programming? If we can make any sense of the idea, it is only as defining a system that we can explore 'from the inside', and never hope to give a full account of.

And it would have to include everything in the world - not just us. It is the whole programme by which the world runs, and there is no machine that it runs on because that machine would, itself, have to be part of the world. It is not even a thing that we can describe, because the description would have to include our description. And a description would have to explain its own veracity - it would have to explain our capacity for truth-telling, and so would generate open questions about its own truth. Or it would have to be in a language in which an account of our truth-telling could be given; and which would (therefore) be unintelligible to us in that respect. All of these are pointers to the void - there is nothing sensible we can say here.

This is why we must both explore and calculate; and explore how to calculate, and calculate how to explore ...
