Maybe the framework must allow semantic paradoxes (if it is to allow reflections on validity); while the way we use this framework still avoids them. The method of avoidance cannot be fully articulated because this would, itself, generate an open question paradox.
And this will be the case, mutatis mutandis, whatever devices we might construct to deal with specific instances ...
It's easy to avoid the ones with indeterminable meanings (loops); but this can't be made into a rule? Maybe there are too many cases where the determinability of a meaning cannot be determined.
Saturday, September 12, 2009
Thursday, September 10, 2009
A 'constructed' (?) machine world
We can only talk about the world as, in some minimal sense, following rules. If we lived in a 'lawless' world, we wouldn't be able to say so.
(This doesn't give us an argument for the truth of any specific 'laws', though, except - perhaps - those that make language, minimally, possible).
It is a game of mirrors - we 'look' in the 'world' for laws, and, in so far as we can describe the world at all, we find the laws which make our descriptions possible. These laws, in their turn, can be written down as the code for a machine - we start to think of the world in terms of 'information' and 'algorithms', as though it could 'really be like that'. But this is just another metaphysical metaphor.
And we know this, because we know that no machine constructed in this way could answer certain intelligible questions - such as 'has this machine been properly constructed?' This is the very edge of the machine metaphor. The place where sense and nonsense walk side by side.
Wednesday, September 09, 2009
The Truth Machine (3)
Except:
The machine - as metaphor or as formal system - only appears in our theories as a description. In particular, it is described as following rules. (Specified rules, if it is a formal system).
Logicians distinguish between a formal system and its interpretation - we can have 'truth tables', uninterpreted (ones and zeros, Ts and Fs); and we can 'interpret' these as showing the circumstances under which 'Truth' is transmitted from one statement to another.
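The uninterpreted part can itself be written down mechanically. A toy sketch (in Python; the function names, and the use of 1 and 0 for T and F, are just my choices for illustration):

    from itertools import product

    # 'Uninterpreted' truth tables: patterns of 1s and 0s generated by a rule.
    def table(connective, arity=2):
        return {row: connective(*row) for row in product((0, 1), repeat=arity)}

    conjunction = table(lambda p, q: p & q)          # the pattern we read as 'and'
    contradiction = table(lambda p, q: p & (1 - p))  # the pattern for 'P&~P'

    print(conjunction)    # {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
    print(contradiction)  # every row comes out 0

    # Nothing here is yet 'about' truth; only under interpretation do we read
    # 1 as T and 0 as F, and the last row of 'conjunction' as truth being
    # transmitted when both statements are true.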
A difficulty with this distinction is that there is already some interpretation in the formal system. For instance, we reject 'P&~P' as contravening the law of non-contradiction if we regard both instances of P as 'meaning' the same thing. Even more fundamentally: Is 'P' the same symbol as 'P'? Obviously yes? Why?
We recognise them as the same, but they are in different places on the page, and are surrounded by different patterns of other symbols; and there are probably other 'differences'. Without 'same' and 'different' we don't even have a symbolism, and we only have 'same' and 'different' under interpretation.
And we only have interpretation if we already have a semantic framework.
And so we can only give sense to 'state of the machine', as well, within a semantic framework.
The Truth Machine (2)
The world would be a machine with no output, of course - it could only inscribe 'output' into its own internal states, which, in their turn, would change the overall state of the machine and the nature of future inscriptions.
And these future inscriptions could have, as a valid interpretation, that some earlier inscriptions were false. Inscribed/not inscribed would not be true/false.
It looks at first as though we could have shown to be false / not yet shown to be false; but the statement 'X was shown to be false' is just another such (revisable) inscription.
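A toy version, just to make the structure visible (Python; the 'record' and the withdraw operation are my own invented illustration, not a claim about what the world-machine does):

    # A machine with no output: it only appends entries to its own record,
    # and a later entry can mark an earlier one as withdrawn.
    record = []

    def inscribe(claim):
        record.append(('claim', claim))

    def withdraw(index):
        # 'Entry N was shown to be false' is itself just another inscription -
        # and nothing stops a still later entry from withdrawing *this* one.
        record.append(('withdrawn', index))

    inscribe('the sky is green')   # entry 0
    withdraw(0)                    # entry 1
    withdraw(1)                    # entry 2: the withdrawal is itself withdrawn

    print(record)                  # only entries; nothing here is true or false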
This is the problem with the metaphor: the machine can have no 'interpretation' - it is just a machine. It inscribes what it inscribes, and does not inscribe what it does not inscribe. A Turing machine only halts, it does not judge; and the 'world as truth machine' has no such terminal state.
Except, of course, in one sense: that its terminal point, in terms of our present conversation, is whenever now is. This is because it is incoherent to say (in this conversation) that everything we are saying may not be true. It is necessarily true, for this conversation, that something we are saying may be true. (No computation of the machine could be interpreted as revising this, because its contrary is not a valid interpretation of any sayable thing.)
Does this count?
Maybe not: interpretation only takes place within a playable language game - it is an irreducibly semantic 'process'. It can't be restated syntactically. In this conversation, I validly conclude that 'X' is true; but this doesn't give me any guarantees about what 'strings' or 'symbols' ('X', '~X', or others) may be produced by the machine in the future - it only gives me a conclusion about how these could be interpreted by my interlocutors and myself. If 'X&~X' appears in the output, it can only appear as false or reinterpretable; but neither of these represents a definite uninterpreted machine state.
Tuesday, September 08, 2009
The Truth Machine
Try thinking of the world as a machine for the production of true statements (including, possibly, if it is true, the statement that the world is a machine for producing true statements...).
We almost need to believe something like this if we are to be motivated to pursue certain scientific projects - it is like the belief that it is possible to do science, to explore the logic of the machine ('God does not play dice').
If it was true, would it have to produce an explanation of why it was true? Only if it was a machine that could produce all the true statements, since this explanation would be one of them. If it produced only true statements, then this explanation need not be among them. Also, (algorithmic information theory?) it couldn't produce a list of which statements it could show to be true.
What kinds of things do we mean by something's being a machine? Being modellable in a Turing machine seems plausible, which means that it must have a series of discrete states, and a 'time' dimension (the number of operations carried out). After any finite number of steps, there would be some unproved theorems which it simply hadn't reached yet. In any finite amount of time, there would be knowable things which were not yet known. There would be things that would eventually be demonstrated which were not yet demonstrated. These could not (?) always be distinguished from the things which the machine formally could not demonstrate - 'not yet halted' and 'will not halt' won't always be distinguishable (because we would need a paradoxical algorithm to completely determine this).
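That last point can be made concrete: any finite observation of the machine is a run with a step budget, and such a run can only report 'halted' or 'not yet halted', never (in general) 'will not halt'. A toy sketch (Python, with a generator standing in for a machine; the framing is mine):

    def run_for(machine, budget):
        # Run a 'machine' (a generator of states) for at most `budget` steps.
        it = machine()
        for n in range(budget):
            try:
                next(it)
            except StopIteration:
                return ('halted', n)
        return ('not yet halted', budget)   # all a finite run can ever say

    def halts_quickly():
        for _ in range(3):
            yield

    def never_halts():
        while True:
            yield

    print(run_for(halts_quickly, 1000))   # ('halted', 3)
    print(run_for(never_halts, 1000))     # ('not yet halted', 1000) - not 'will not halt'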
The interesting thing is this, though: If the 'world' is not a Turing machine, we could never find this out. (It certainly wouldn't be a computable question.) And there would be a respect in which we could never 'find this out' because an outcome of the chaos would be that we would have no language in which to formulate the conclusion. The end of intelligibility is a kind of possible future state.
If the world looks like a Turing machine, that's partly because it's the only kind of world we can describe. Which means that as long as we can go on discovering and describing things, we will go on demonstrating that the world looks like a Turing machine - and, possibly, thinking that we can work out its whole logical structure, even while we know that this must be impossible.
Meanings and Rules
If we're going to say something, we need to have a way of working out what it means (well enough to play the move).
Any possible algorithms for computing the meanings of self-referential paradoxical statements don't terminate - there is no way of working out what they mean - so we can never play them as moves in the game. This is guaranteed by the way they are constructed - and they can only arise in this way. "This statement is false" can never intelligibly be asserted; and as it is the possibility of its intelligible assertion which would generate chaos, we are safe from it.
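A caricature of why (Python; the evaluation rules are my own toy invention): the procedure for working out the Liar's meaning has to call itself before it can return anything.

    import sys

    def evaluate(statement):
        # Naively compute a truth value by following the statement's own rule.
        if statement == 'snow is white':
            return True
        if statement == 'this statement is false':
            # Its value is the negation of its own value, so we must
            # evaluate it (again) before we have evaluated it.
            return not evaluate('this statement is false')
        raise ValueError('no rule for this statement')

    print(evaluate('snow is white'))         # True
    sys.setrecursionlimit(100)
    try:
        evaluate('this statement is false')
    except RecursionError:
        print('never terminates - no meaning to play as a move')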
Tractable algorithms are a subset of the algorithms which can be finitely described. Some algorithms which can be finitely described are intractable. They loop; or do not produce a usable expression in a finite period of time (e.g. if required to calculate the exact digital expansion of a transcendental number); or they do not produce a usable expression in a relevant period of time (e.g. soon enough for us to use it).
When we describe an algorithm in words, of course, we must have a tractable method of working out what the words mean in time to be able to use them in our description of the algorithm. This method, in its turn ...
So, of course, we have another paradox. Or another perspective on a familiar paradox.
Can we say that if we follow a method for working out what to say, then it is a method that we cannot describe? In one sense, this is obviously true: it's a practical fact that we can't explain how we speak. A computer which mimics speech cannot also necessarily produce a verbal explanation of how it does this. But is it necessarily true?
If the computer that mimicked speech was allowed to read and decompile the code that it was running, it could state the algorithms which it was following in the language in which they were written. It might also store in its memory a plan of its own hardware, a description of the compiler etc.
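There is even a cheap analogue of this in a high-level language: a program that reads out its own source (a Python sketch; it only works when run from a file, and what it prints is a record of what it does, not an account of why that is correct):

    import inspect
    import sys

    def describe_self():
        # Print the source of the module this function lives in - the
        # algorithms it is following, in the language they were written in.
        print(inspect.getsource(sys.modules[__name__]))

    if __name__ == '__main__':
        describe_self()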
This might not count as an explanation, however. It would be something similar to what a neurologist interested in speech processing would do - attempting to produce an account of the hardware and software underlying the production of speech as a phenomenon.
Would the computer (or the neurologist) be able to say why what was being said was correct? Could it produce an argument for the reliability of its statements? Presumably, since it could speak (and since this is part of having the capacity to speak), it could produce arguments for the truth of individual statements that it made. But could it produce an argument for it, itself, being a reliable implementation of a speech algorithm? Could it produce an argument that it could, in general, speak properly - that it could, in general, tell the truth?
It can't do this by examining and reproducing its own code - this would only be an account of what it did, not of why it was correct.
If we read a short piece of computer code, we might see that it was correct - the code, for instance, for doing a binary search of a sorted list is almost self-explanatory when written out in a high-level language. Even here, though, there is a lot of debate about whether this 'self-explanatoriness' amounts to a proof; and the debate (I think?) virtually comes down to how we select our formalism for calculating proofs. The computer code is also a formalism of a kind, and if it is well-defined, it should produce its own proofs. Programmes written in a formally correct implementation of Lisp, for instance, will look like expressions in the lambda calculus - and if the interpreter/compiler correctly evaluates these (another question, of course ...) the results will be provably correct in that calculus.
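The binary-search case, for what it's worth, looks like this in a high-level language (Python; whether the invariant stated in the comments amounts to a proof is exactly the question):

    def binary_search(xs, target):
        # Invariant: if target is in the sorted list xs at all,
        # then its index lies in the interval [lo, hi].
        lo, hi = 0, len(xs) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if xs[mid] == target:
                return mid
            elif xs[mid] < target:
                lo = mid + 1   # target, if present, is to the right of mid
            else:
                hi = mid - 1   # target, if present, is to the left of mid
        return None            # the interval is empty: target is not in xs

    print(binary_search([1, 3, 5, 8, 13], 8))   # 3
    print(binary_search([1, 3, 5, 8, 13], 4))   # None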
But all we have done, even here, is to show that an expression is a theorem of a formal axiomatic system which can be encoded in a computer programme which should (barring hardware faults) correctly implement the rules of the formalism. We can show that the speaking computer is designed to correctly follow the rules that the programmer has coded into it - we cannot show that these are the correct rules unless we allow that to speak correctly is simply to utter the theorems of some FAS or another ...
And since this is something some people might once have believed, it's worth saying why it can't be true: for Kripkean reasons, the 'rules' of the system can only be made intelligible within a language which already works. The idea of a 'non-linguistic' rule is incoherent. We might think of 'laws of science' here, but these are part of our descriptive grammar - a world of which it was impossible to speak would have no laws. Our reason for thinking that there are rules 'in the world' is that it is only possible to describe a world of that kind, and we know (a priori) that we can describe the world in some way. (When we say we cannot, we are saying something about the world; and if we try to say this seriously we find we can't even say that we are saying something.)
We can predict, with high reliability, which rules the computer will follow - but we cannot define the rule as 'what the computer does', however much we might, in normal circumstances, practically rely on the computer to do the right thing. Such a definition would make a computer error formally unintelligible.
If we cannot give an account of rule following and formalism without, first of all, being able to give an account of something, then our capacity to give an account cannot be accounted for in terms of our following some specific set of rules. If it could, then we would have to say that it was not possible for the underlying substrate - the hardware, if you like - to make a mistake here. We would be like perfect computers. And we couldn't even test this perfection, because there would be no other standard: there would be no intelligible enquiry that we could undertake, because the very possibility of intelligible enquiry would depend on the reliability of the hardware.
It is perfectly possible - in fact psychologists have demonstrated that this is true - that the underlying hardware is systematically faulty. People are prone to making certain kinds of cognitive errors: they make incorrect inferences, they are not good at estimating risk, etc. What is the standard of correctness that the psychologists are using here? Why do we not think that this standard, as well, could just be the product of some faulty hardware?
We think it because there is an argument for the reliability of this standard - an argument which, if fully developed, would show that the standard depended on the possibility of there being any standards; of our being able to say anything true at all; of our being able to speak. Even if we never fully develop this argument, we promise it when we make a truth claim - and if we find the argument cannot be made, we withdraw the truth claim (on pain of becoming unintelligible otherwise).
It is our ability to speak that is the standard against which the rules are judged, not the rules which guarantee the reliability of what we say. We have been confused by the fact that good rules generate other good rules; so that we think this must be where all rules come from.
And also by the fact that allowing the assertion of contradiction or paradox allows the assertion of anything at all, and so of nothing. Our rules must be consistent; and we must only say things which mean something. We might also say that the rules implied by an intelligible discourse are consistent, and that it isn't possible to (properly) say something which is meaningless. If we discover that a previous statement commits us to contradiction or paradox, we reinterpret, revise, or withdraw it.
So could there be 'hidden' programming? If we can make any sense of the idea, it is only as defining a system that we can explore 'from the inside', and never hope to give a full account of.
And it would have to include everything in the world - not just us. It is the whole programme by which the world runs, and there is no machine that it runs on because that machine would, itself, have to be part of the world. It is not even a thing that we can describe, because the description would have to include our description. And a description would have to explain its own veracity - it would have to explain our capacity for truth-telling, and so would generate open questions about its own truth. Or it would have to be in a language in which an account of our truth-telling could be given; and which would (therefore) be unintelligible to us in that respect. All of these are pointers to the void - there is nothing sensible we can say here.
This is why we must both explore and calculate; and explore how to calculate, and calculate how to explore ...
Monday, September 07, 2009
A deranged speculation ...
What would it be like to be unable to talk? Can we imagine this?
I don't mean here an acquired dysphasia, or a specific neurological damage, but a profound inability - to have never learned, but to be otherwise human.
Given the plasticity of the brain, we would be neurologically different in significant ways. For the sake of the experiment, I'll assume that the language deficiency is the result of an otherwise benign event (hard as this is to imagine), so that there are no other residual neurological effects of trauma.
There have been plenty of cases of children brought up in circumstances where they failed to acquire language, usually for otherwise traumatic reasons; so this thought experiment has some concrete correlatives. They may not serve to settle the speculation I'm going to suggest, though - and perhaps there is no available evidence that could.
The speculation is that the phenomenology of such a person may be almost unrecognisable to a language user; and that certain 'obvious', and even physical categories and preconceptions may be missing.
For instance: without the self-reflection and concept construction capacities that language brings with it, could a person with this deficiency be said to have concepts of time and mortality? They might well show (through their behaviour) the capacity to predict, and to feel fear. Animals do this as well. But what would their fear be of? And could their capacity to predict encompass a picture of their whole life as a sequence of events that form part of a larger partly causal or rational network? Would they even have a recognisable perception of the passage of time?
Our earliest memories are often of events that happen about the time we learn to speak. Is this because we don't have the capacity to organise or reflect on what happens to us in a self-conscious way until we can talk about it?
And if these aspects - time, space, causality etc. - of our phenomenological frameworks depend upon language acquisition, and are absent or seriously impaired in a person who never acquires language, what should we make of questions about their material underpinnings, or their 'reality'?
If my cat doesn't think it's going to die, and doesn't think anything when it does, does it feel as though it lives forever?
[Epicurus (?) seems to have thought so: "If death is, I am not; if I am, death is not."]
Sunday, September 06, 2009
Is 'the world' driven by a 'hidden' axiomatic system?
It is possible to imagine that the 'real' world is a system which only looks unpredictable to us because we don't know enough about it. Quantum theory seems to contradict this, as it assumes that certain kinds of event are fundamentally unpredictable (e.g. the time at which an unstable nucleus will emit an alpha particle).
There are other issues, though:
Would a 'complete' system also have to predict its own predictions? This is 'algorithmic information theory' territory. The system, and especially its mechanical substrate (if these can be formally distinguished), is part of the world it is trying to predict.
Would it also have to include us? Would our language (and its mechanical substrate ...) be part of what it could explain? If it did, would it have to include an isomorph for a theory of truth for our language? (Since it would comprise, in some sense, an ultimate 'meta-language').
This has a funny consequence, which is that we could not translate the language of this theory into our own language - because we cannot state a theory of truth for our language in our language. This means that the language of the 'ultimate theory' would be incomprehensible to us, and so, therefore, would the theory.
And if we take a clear Davidson/Quine line on this, we not only can't translate this language, we have no grounds for treating it as a language.
" ... is true"
If we introduce a new axiom into a system, it may make the system inconsistent. It might also do a number of other things (a toy sketch of the consistency point follows the list below):
(1) It might (if it contains a lot of information) make it very complex.
(2) It might make it difficult to interpret (to give a semantic interpretation of).
(3) It might leave the issue of its consistency uncertain - either because
(a) It makes its consistency hard to calculate (complexity issue?) or
(b) It makes the system into one in which Gödel/Turing type concepts can arise. (Is this correct?)
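In the toy propositional case, at least, (1) and (3a) can be made vivid: consistency can be checked by brute force over truth-value assignments, but the check grows exponentially with the number of atoms (a Python sketch; the axioms-as-predicates encoding is my own illustration):

    from itertools import product

    def consistent(axioms, atoms):
        # Consistent (in this propositional toy) iff some assignment of
        # truth values to the atoms satisfies every axiom at once.
        return any(
            all(axiom(dict(zip(atoms, values))) for axiom in axioms)
            for values in product([True, False], repeat=len(atoms))
        )

    atoms = ['X', 'Y']
    axioms = [lambda v: v['X'], lambda v: v['X'] or v['Y']]
    print(consistent(axioms, atoms))        # True

    axioms.append(lambda v: not v['X'])     # introduce a new 'axiom'
    print(consistent(axioms, atoms))        # False - the system is now inconsistent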
So when I say 'X is true', and think about this as introducing a new rule or axiom, then we might accept or reject this proposal on a number of different grounds. We (my interlocutors and I) need to decide what to do with the proposal ...
What account would I give of a rejection on empirical grounds?
I might compare this with complexity: the new rule makes it very hard to play the game. People may 'explain' this by saying: "But not X!" ("But it just isn't raining!"). This doesn't help to clear the confusion. Perhaps if they said "I can't think what you might mean by insisting that X is true ..." - I don't know what you are committing yourself to; how X fits into the game. I think this would be a better answer, from someone still trying (seriously) to play. Flat contradiction ("But not X!") is an end point, not a move in the game (just as, in another context, "But necessarily X!" would be). It's like saying "I'm not playing your game!" - I'm no longer an interlocutor on that basis.
When we experiment with an axiomatic system, we are interested in whether certain statements are theorems of the system; whether the system makes sense (is consistent?); and whether the system has an interpretation (an 'application').
Chaitin is right that a lesson of the halting problem is that mathematics requires experiments. But what kinds of experiments? Well - experiments with ways of talking. (A defining characteristic of 'real' or 'natural' languages is that they are both the fora for making these experiments and a product of their outcomes).
Serious interlocutors take responsibility for the intelligibility of their truth claims - they do not hand this over to a 'neutral' argument. If we try to make 'X is true' look something like 'X is a theorem of S', we simply move the responsibility from the 'X is true' claim to the 'the axioms of S are true' claim.
Maybe a good way of saying it is to say that when I claim that 'X is true' I also promise to make playing the X move intelligible. I'm prepared to show how it fits in, what adjustments it requires etc.; and to stand by those demonstrations and adjustments.
Within a shared game, of course, some things 'come out' as true, in the way that some statements turn out to be theorems of an axiomatic system. This is inevitable, because the shared game depends on some shared rules, and these rules have consequences. (The shared game must avoid contradictions, but only because a game which admits contradiction has no rules.)
Creating and modifying rules - introducing axioms via the 'X is true' formula - is also part of the game, although certain 'fundamental' rules (among which are the rules which render the introduction of new rules intelligible) cannot be broken. We can search for these fundamental rules (this is doing philosophy) but we cannot expect them to generate the whole system.
Tuesday, September 01, 2009
A new version
Version 13
The version numbers are fairly meaningless - they just represent the order I put them up in; there is no commensurate scalar improvement in the content.
I haven't left 'marked changes' between this version and no. 12, because the changes were so extensive the markups were meaningless.
I may revert for future versions, but it depends on what they look like.