If we introduce a new axiom into a system, it may make it inconsistent. It might also do a number of other things:
(1) It might (if it contains a lot of information) make it very complex.
(2) It might make the system difficult to interpret (that is, difficult to give a semantic interpretation of).
(3) It might leave the issue of its consistency uncertain - either because
(a) It makes its consistency hard to calculate (a complexity issue?) or
(b) It turns the system into one in which Gödel/Turing-type problems can arise. (Is this correct?)
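The first of these possibilities can be made concrete in a toy system. The sketch below is my own illustration, not anything from the text: atoms are strings, rules are Horn-style inferences, and "inconsistent" just means that some atom and its negation are both derivable. Adding the single axiom 'not P' to a system that already derives P tips it into inconsistency.

```python
# Toy illustration (an assumption of this sketch, not the author's formalism):
# atoms are strings, a rule maps a frozenset of premises to one conclusion.

def closure(axioms, rules, max_steps=100):
    """Derive everything reachable from the axioms (bounded, so it halts)."""
    derived = set(axioms)
    for _ in range(max_steps):
        new = {concl for prems, concl in rules
               if prems <= derived and concl not in derived}
        if not new:
            break
        derived |= new
    return derived

def inconsistent(theorems):
    """Here 'inconsistent' means: some atom and its negation both derivable."""
    return any(("not " + t) in theorems for t in theorems)

rules = [(frozenset({"Q"}), "P")]        # one rule: from Q, infer P
before = closure({"Q"}, rules)           # derives P; consistent
after = closure({"Q", "not P"}, rules)   # introduce the axiom 'not P'

print(inconsistent(before))  # False
print(inconsistent(after))   # True
```

In this toy system consistency is trivially checkable; the point of (3a) and (3b) above is that in richer systems this check may be infeasible or, in general, impossible.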
So when I say 'X is true', and think about this as introducing a new rule or axiom, then we might accept or reject this proposal on a number of different grounds. We (my interlocutors and I) need to decide what to do with the proposal ...
What account would I give of a rejection on empirical grounds?
I might compare this with complexity: the new rule makes it very hard to play the game. People may 'explain' this by saying "But not X!" ("But it just isn't raining!"). This doesn't help to clear the confusion. Perhaps it would be better if they said "I can't think what you might mean by insisting that X is true ..." - I don't know what you are committing yourself to, or how X fits into the game. I think this would be a better answer from someone still trying (seriously) to play. The flat contradiction "But not X!" is an end point, not a move in the game (just as, in another context, "But necessarily X!" would be). It's like saying "I'm not playing your game!" - I'm no longer an interlocutor on that basis.
When we experiment with an axiomatic system, we are interested in whether certain statements are theorems of the system; whether the system makes sense (is consistent?); and whether the system has an interpretation (an 'application').
Chaitin is right that a lesson of the halting problem is that mathematics requires experiments. But what kinds of experiments? Well - experiments with ways of talking. (A defining characteristic of 'real' or 'natural' languages is that they are both the fora for making these experiments and a product of their outcomes).
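What such an 'experiment' with an axiomatic system looks like can be sketched in a few lines. This is my own illustration (not Chaitin's construction): a bounded search that can confirm a statement is a theorem, can in a toy system exhaust the search space, but in general must sometimes report 'unknown' when the budget runs out - the lesson of the halting problem being that no budget settles every case.

```python
# Sketch of 'experimenting' with an axiomatic system: a bounded search for a
# target theorem. Rules are (frozenset_of_premises, conclusion) pairs - a toy
# format assumed for this illustration.

def is_theorem(target, axioms, rules, budget=50):
    """Run a forward-chaining search for `target`, limited to `budget` rounds."""
    derived = set(axioms)
    for _ in range(budget):
        if target in derived:
            return "theorem"
        new = {concl for prems, concl in rules if prems <= derived}
        if new <= derived:
            # Nothing new can be derived: only decidable in toy systems like this.
            return "not a theorem"
        derived |= new
    # Budget exhausted: in general, no finite budget decides every case.
    return "unknown"

rules = [(frozenset({"A"}), "B"), (frozenset({"B"}), "C")]
print(is_theorem("C", {"A"}, rules))            # 'theorem'
print(is_theorem("D", {"A"}, rules))            # 'not a theorem'
print(is_theorem("C", {"A"}, rules, budget=1))  # 'unknown'
```

The third outcome is the interesting one: the experiment has not failed, it has simply not (yet) produced an answer, and nothing guarantees that a bigger budget would.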
Serious interlocutors take responsibility for the intelligibility of their truth claims - they do not hand this over to a 'neutral' argument. If we try to make 'X is true' look something like 'X is a theorem of S', we simply move the responsibility from the 'X is true' claim to the 'the axioms of S are true' claim.
Maybe a good way of saying it is to say that when I claim that 'X is true' I also promise to make playing the X move intelligible. I'm prepared to show how it fits in, what adjustments it requires etc.; and to stand by those demonstrations and adjustments.
Within a shared game, of course, some things 'come out' as true, in the way that some statements turn out to be theorems of an axiomatic system. This is inevitable, because the shared game depends on some shared rules, and these rules have consequences. (The shared game must avoid contradictions, but only because a game which admits contradiction has no rules.)
Creating and modifying rules - introducing axioms via the 'X is true' formula - is also part of the game, although certain 'fundamental' rules (among which are the rules which render the introduction of new rules intelligible) cannot be broken. We can search for these fundamental rules (this is doing philosophy) but we cannot expect them to generate the whole system.