Wednesday, December 23, 2009
Logic and Interpretation
~(P&~P)
P&~P
Conventionally, the first is a tautology and the second a contradiction. However, we might introduce a 'rule of interpretation' here:
When an expression appears on the left of an ampersand, it means the contrary of what it would mean if it appeared on the right.
Applying this rule, we make the first expression a contradiction, and the second a tautology.
Since it is always possible to distinguish expressions that are sentences, and to distinguish whether they are on one side of an ampersand or the other, it seems likely that this rule would not render the notation inconsistent. By putting '~( ... )' round all sentential expressions on one side of an ampersand we could 'encode' and 'decode' our normal propositional logic.
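The encode/decode point can be sketched in a few lines of Python. This is a toy evaluator I am adding for illustration (the tuple representation of formulas, and the function names, are my own inventions, not anything in the original): `eval_deviant` reads a left-hand conjunct as meaning the contrary of its normal meaning, and `encode` wraps left-hand conjuncts in '~( ... )' so that the deviant reading recovers the normal one.

```python
# Formulas as nested tuples: ('var', name), ('not', f), ('and', f, g)

def eval_normal(f, env):
    # the conventional interpretation
    op = f[0]
    if op == 'var': return env[f[1]]
    if op == 'not': return not eval_normal(f[1], env)
    if op == 'and': return eval_normal(f[1], env) and eval_normal(f[2], env)

def eval_deviant(f, env):
    # the 'rule of interpretation': a left-hand conjunct means the
    # contrary of what it would mean on the right
    op = f[0]
    if op == 'var': return env[f[1]]
    if op == 'not': return not eval_deviant(f[1], env)
    if op == 'and': return (not eval_deviant(f[1], env)) and eval_deviant(f[2], env)

def encode(f):
    # wrap every left-hand conjunct in '~( ... )' so that the deviant
    # reading of the encoded formula matches the normal reading of the original
    op = f[0]
    if op == 'var': return f
    if op == 'not': return ('not', encode(f[1]))
    if op == 'and': return ('and', ('not', encode(f[1])), encode(f[2]))

P = ('var', 'P')
contradiction = ('and', P, ('not', P))   # P&~P
tautology = ('not', contradiction)       # ~(P&~P)

for env in ({'P': True}, {'P': False}):
    assert eval_deviant(encode(contradiction), env) == eval_normal(contradiction, env)
    assert eval_deviant(encode(tautology), env) == eval_normal(tautology, env)
```

The round trip is the whole point: under the deviant rule the encoded formulas mean exactly what the originals mean under the conventional rule, so the notation stays consistent.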
If this was a rule that a reader might mistakenly apply when doing logical calculations, we would have to warn against it. On the other hand, such a warning would sound mad to a reader who had never thought of such a rule. It would say 'Remember to treat an expression on one side of the ampersand as meaning the same thing as when it appears on the other side'. But this, of course, is another rule of interpretation, just like the first one.
Neither rule is more fundamental than the other. Each resolves an ambiguity which, if we did not see it, would not need resolving.
Not only is it absurd to consider ourselves to be applying a myriad of interpretational rules in order to eliminate unimagined ambiguities, it is also incoherent. In order to say 'I am applying rule X', we need to articulate rule X, and so also interpret it. Not only would every rule, itself, require interpretation; but there would be rules whose correct interpretation would depend upon their own application.
Monday, December 14, 2009
Constructive Semantics
In fact, the OQ argument would suggest otherwise - because we would need to know what these rules meant.
A constructive semantics could be quite rough and ready, and contain many unresolved ambiguities, and still do the work that is required of it (i.e. make semantic determination tractable). What would also be required would be a method for resolving the ambiguities and cleaning up any loose ends that became problems, and this method could be (I think would need to be) experimental.
What we can't tell, however, is whether there are some things that we resolve by experiment that could be worked out from some set of rules that we are (unknowingly?) following. For this to make sense, the rules would have to be discoverable - but the OQ problem doesn't rule this out. It just rules out the discovery of a complete set of rules.
Saturday, December 12, 2009
Some Intentional Facts
Does this make either epistemology or mathematics empirical?
We test whether I know by making experiments. Does the verb 'to know' mean something beyond the quality of these tests?
Yes, but this is because we don't understand what an 'experiment' is - specifically, we don't understand that whether an experiment has taken place, and what its outcomes are, depend upon semantic considerations that cannot be eliminated, cannot be exchanged for something which looks more mechanical.
Tuesday, December 01, 2009
Language games
We would need some way of ranking outcomes for players, in order to analyse the game.
If we wanted to retain our conception of the game as a language game, the 'moves' would have to be translatable into the language we use to describe the game (the one we're using now) as intelligible utterances - as moves in 'the language game' we are now playing. There could be more than one way of doing this, and perhaps no clear way to select between translation schemas.
Describing the rules of the game would not be a possible move in the game. (Although describing the rules of some sub-games would be).
Given a translation schema, we could attach a value to 'truth-telling' - i.e. maintaining general intelligibility and individual credibility. Perhaps.
But our selection of a schema would depend on assuming that players already attached a value to intelligibility?
(Davidson and decision theory?)
Saturday, November 28, 2009
Talking, and talking about talking ...
In one way, yes - because we are exploring what it is possible to say, and so what it is possible to say about the world.
Does this only tell us something about its 'grammatical structure', perhaps - only that it is a place we can talk about, which is much less than we would wish to say?
Except: so many things need to be true for us to be able to talk at all - we might call these things Wittgensteinian 'hinges'. They are true because they follow from the possibility of the conversation which requires them, and they feel true because we know how to participate in that conversation.
What about more 'contingent' facts?
Even commonplace contingencies will become hinge-like in certain circumstances. That we put our rubbish out on Wednesdays may seem to be something that could easily be true or false, but a very serious dispute about it would raise issues of meaning and trustworthiness that would threaten the fabric of the conversation in which it was embedded.
It is here that our talk about talking can confuse us:
I might say to you (now, in this posting) 'Imagine you and I watching the bin lorry coming down the street on a Wednesday morning, taking the rubbish from each house, including ours - how could we disagree about what was going on? Would I not have demonstrated to you that the bin lorry came on Wednesdays?'.
This makes the issue sound like one of empirical testing, but only because you understand - and do not seriously dispute - the account that I have just used as an example. To talk about talking, we need, already, to be participating in a mutually intelligible game. We can convince ourselves from this, if we are insufficiently reflective, that we have somehow found its roots. All we have really found, after all, is that we can, in fact, play it. Which is something we must already know.
Does this not threaten (unintelligibly?) to make every true 'fact' a priori?
If any true fact can be a hinge in some circumstance, then any true fact looks necessarily true, at least in that circumstance. Does this lose the distinction between necessity and contingency? Or does it make it 'context dependent'?
And, of course, do the answers to these questions depend on context?
This is a similar mistake. We always have one general context - the context of the conversation we are now having - whose 'hinges' are therefore absolutely a priori. When we, in that context, talk about talking, we can talk about different ways that it might be possible to talk. 'Shall we call Tuesdays Wednesdays?', we might say. This process has bounds - the outcome must be something we can recognise as a language, something we can translate into the language we are using now; using to have this conversation.
(Not forgetting that the judgement that this is the case will always have a normative aspect to it).
When I ask some empirical questions - e.g. 'What day is the bin collection where Mary lives?' - neither of us might know the answers. We don't know how to talk here.
How do we find out how to talk here? This question has an answer in our game or 'What day is the bin collection where Mary lives?' doesn't make any sense. If I say 'Are there squagworts?' and you say 'What are squagworts, and how would we find out?' I cannot answer 'There is no way of knowing either of these things'. (This can sound like a metaphysical exchange, when the questions and answers are produced in a very serious way, in certain social contexts.)
There can, of course, be no general way of answering these kinds of questions - there must only be some way. If there were a general way, we could (unintelligibly) ask whether it was correct. We can ask this question about particular ways, but the answers to it must always be particular. A demand for generality here will result in the breakdown of the game: it is like a child's game of 'Why?'.
Unless we say that the general method is the method of exploring how we can talk. And this is not a method at all.
We might make some distinctions between types of question based on what kinds of answers we would accept. We might decide, for instance, to accept each other's reports on certain 'matters of fact' so long as they were accompanied by, or could be accompanied by, an observation account. This doesn't make these observation accounts especially reliable - it is just the way that we play this game. The game 'turns out' to be playable so we feel comfortable with it. Then someone makes a mistake, or tells a lie, and we become uncomfortable. We don't know how to play.
(Remember that I have said, in this post, 'someone makes a mistake, or tells a lie'. As players who do not know that this is what has gone wrong, we just have the resulting confusion.)
Perhaps, of course, you are confused by all of this - by my way of talking here. To the extent that you are, we have failed to talk about talking. We don't yet know why, but we might find out - either I might find out how to be less confusing, or you might show me that this is not possible, that I have made some mistake.
Sunday, November 08, 2009
Consequences ...
Something I hadn't thought of is this: we also need honest and competent interlocutors. Without these, talk is impossible - and so, they must exist. To wonder whether there are any is to wonder whether it is possible to talk - to ask whether there are any is to ask whether it is possible to ask questions.
And the link to 'madness' is clear here, as well.
Monday, November 02, 2009
Sensory Empiricism
The public artifact of articulated theoretical knowledge - especially, perhaps, modern empirical science - depends not on whether we can take our own sensory inputs seriously, but on whether we can take the sensory reports of potential interlocutors seriously. If we are to be able to talk to each other about the world, we must trust each others' reports about the way it is - or the way it seems. And if it seems the same to everyone - if everyone agrees - then there is no room in our shared game for doubt about the reality. If some Martian meta-philosopher managed to engage us, we might change our minds, but only if we could understand it - only if we and it could share a language game ...
This is the ground of the link between sensory empiricism and the liberal scientific tradition - the practice of taking each others' sensory reports seriously, and articulating this agreement.
Taking our own sensory experiences seriously becomes just a psychological condition, which is exactly Hume's conclusion. He was just a bit confused by his unreflective linguistic competence ...
Sunday, November 01, 2009
The phenomenology of rules and meta-rules
If I suggest to you that we follow a rule, then we must know for sure what the rule means. If we fail to agree here, then we just have confusion - not a counter-hypothesis within the game. This is because our attempts to negotiate the rule must result either (a) in an unambiguous rule (agreement) or (b) in breakdown.
An 'external' observer might, metalinguistically, hypothesise about the breakdown (in a conversation with an appropriate meta-interlocutor...). But this hypothesis would be based on behaviour, and so would be corrigible.
If I (or you) 'hypothesise' that our conversation has broken down, this can only be in a meta-conversation which has not broken down - can only be in a conversation with someone else.
We might have a 'private' hypothesis, here, but only in a derivative way: it would need to be a hypothesis that could, in principle, be articulated. This is a problem for any general 'hypothesis' about the playability of language games: a negative judgement here could not, in principle, be articulated. It could never be part of a rule in a shared game.
We could not incorrigibly attribute such a private hypothesis to someone else, and we could not entertain it ourselves without a private language in which it could be articulated ...
Monday, October 26, 2009
Explicit rules, tacit rules, and metalanguages
If I am theorising about a language, I am hypothesising about rule following - I am attributing intentional states to the users of the language. If I do this in a 'meta-language' which I do not share with the users of the object language, then my attributions are always corrigible - I can never be sure they aren't quadding.
The only sense we can make of the idea of a tacit rule, is that it is a rule which can be stated in a meta-language. Tacit rules, therefore, can only be identified provisionally (they are like hypotheses from behaviour). If I make a tacit rule of my language explicit, however, this ambiguity disappears. Between interlocutors, the possibility that I may not be stating the rule I appear to be stating is incoherent.
If we think of the rules of logic as being like the tacit rules of a possible language, we can see that the statement of these rules in a meta-language can only be provisional.
But if we state a tacit rule in the object language we do something to the language: we render the provisional definite. This must change the tacit rules of the language - it must change the way it can be described in the meta-language.
An accumulation of changes like this will produce a game very unlike the game we are presently playing. If it can be 'translated' into the game we are presently playing, then the rule changes must, in a sense, be trivial. If it can't, we have no grounds for calling it a language.
But we can only show this by explicating the rules, and by changing the game ...
Sunday, October 25, 2009
Knowledge and intention attribution
If you say to me 'I know it is raining', the situation is no different. You can tell a lie here (dishonesty) or make a mistake (incompetence), but no particular difficulties arise.
The difficulty arises when I try to (a) take you seriously as a competent and honest interlocutor and (b) entertain the possibility that you might be wrong. It's another 'Moorean paradox' - I'm trying to interpret your use of 'know' in the 'usual' way, but at the same time believe that you are wrong. In addition, I can't say to you 'You know, but you are wrong' - if I want to retain the usual meaning of 'know'.
The mistake here is to believe that a hypothesis can always be articulated in the language in use, and - of course - the hypothesis that the 'language' can't be used can't be articulated in the language.
The puzzle about the incorrigible status of 'known' facts arises from the circumstance that hypotheses about errors here can't be articulated in a game shared with those making the errors. Which isn't such a deep puzzle. When an interlocutor insists on 'know' and 'false' together, we can't play the usual game with these pieces. Just as 'believe' and 'false' don't work in the context illustrated in Moore's paradox.
We can allow 'know' to entail 'true' in the context of a playable game, because a failure of this entailment would require a change of the rules, not some more metaphysical adjustment of the 'underlying reality'. It is the game which 'breaks down', not the world.
Rules and Gödel
A contradiction should result in rejection of the interpretation hypothesis which entailed it.
Part of this hypothesis, in the context of a logical proof, is about the meanings of the rules. Perhaps, in logic, there is nothing apart from the meanings of the rules.
Will there always be an interpretation which avoids contradiction? Yes, but possibly not always a constructive interpretation - an interpretation which allows novel constructions from the same elements. But of course constructive interpretations must include interpretable rules of construction ... and about what count as 'elements' and 'composites' etc.
Do Gödel's proofs show us that all 'closed' rule based interpretations must be incomplete or contradictory? He and Turing have ruled out computational approaches to certain questions.
But the contradictions just throw us back to the incompleteness hypothesis - that our interpretations cannot protect us from future re-interpretation. Any definite interpretation must leave open the question of its own reliability.
Here is a rule of interpretation:
To you words
understand must vertically,
this read not
rule, the horizontally.
On the basis of what further rule do we demonstrate that we have a correct interpretation of this one? Only that we have avoided contradiction? What rules demonstrate the contradiction?
Phenomenologically, we can have a Wittgensteinian 'Aha!' moment, and know how to 'go on' ... but this is epistemologically irrelevant. It just tells us what a competent language user might feel on such an occasion. It doesn't tell us anything about justification. To try to say, as W seems to, that it's a mistake to think we need justification here is just to introduce a novel kind of justification strategy (a strategy which I, for one, don't have any 'Aha' feelings about ...).
He might say, instead, 'to ask for more is to ask for a justification of justification' - and this is right as a global position, but is not right with respect to particular justificatory strategies, which always seem to be revisable (Quine). What we need is a demonstration that unless this strategy works then no strategy would work - RAA. This seems more elusive ...
Thursday, October 08, 2009
Tacit Rules
A much simpler idea has occurred to me:
The idea of a tacit rule really only makes sense if it can in principle be rendered explicit. Rules have to be statable.
The obviousness (?) of this is reinforced by some reflections on Kripke's rule-following paradox. If it's hard to give a tractable behavioural account of the criteria for having followed a statable rule, what content is left for the concept of an unstatable one?
Consider, also, that we can only get rid of the catastrophic ambiguities which Kripke's paradox generates in the context of a working language game - where a capacity to agree about rules follows from the playability of the game.
A 'theory of truth' for our language would be a complex rule which was, in principle, unarticulable. It wouldn't, in any manageable sense, be a rule at all ...
Wednesday, October 07, 2009
Speaking to the rules, logic ...
However, Kripkean ambiguities about whether someone (a speaker of a language, for instance) is following rules are only resolved between interlocutors, and within a playable language game. The judgement that someone is following a rule is always corrigible unless they agree, as an interlocutor, about the rule they are following - at this point a doubt about whether they are following it implies a doubt about their status as an interlocutor, since it would imply incompetent or dishonest use of the language.
We can explore this 'from the inside' by finding rules which could only be rendered ambiguous by risking the playability of the game - e.g. the discussion about whether 'P' and 'P' mean the same thing (below).
But even this exploration is possible, so long as it involves 'second order' investigations - 'mad' questions about hinges can be asked, and this circumstance, itself, examined. And so on. If we try to tidy all of this into a hierarchy of types, we simply create a new 'mad' object of enquiry, which is the theory of types itself, and where statements about it might fit into its hierarchy.
I think what is wrong here is exactly the conception that rule following somehow precedes, rather than is implied by, the possibility of language. Precedes, that is, in some fundamental metaphysical sense. And this is exactly the conception that Kripke's argument undermines, and that Wittgenstein also denies.
It may be true that we can only speak if we can follow rules, but this does not mean that some 'independent' conception of rule following can be formed. We demonstrate rule following in our talk, but only our capacity to talk allows us to articulate it.
But:
While this suggests that we shouldn't be surprised that we get into trouble when we try to render truth-telling in terms of following rules, does it actually explain the necessity of the related paradoxes? I feel that it does, somehow, but can't find a way of articulating this.
Thursday, October 01, 2009
Mathematics and the World
What we find is not that there is some mysterious isomorphism between certain mathematical structures and 'the world'; but that there is a quite intelligible isomorphism between these structures and the tacit (or even explicit) 'grammar' (in the Wittgensteinian sense) of any descriptions of the world that we find we can agree about.
It isn't that the world is mathematically structured in some imponderable way, but that we cannot describe it without using mathematics. And our descriptions come to us so naturally (as competent language users) that we think they, themselves, are isomorphic with some 'phenomenological' structure (our internal sensory and cognitive environment) which seems (to us) to produce them. We forget that we do not share these phenomenological environments, except in the sense that we succeed in sharing our descriptions; communicating our 'observations'.
Saturday, September 12, 2009
Paradoxes
And this will be the case, mutatis mutandis, whatever devices we might construct to deal with specific instances ...
It's easy to avoid the ones with indeterminable meanings (loops); but this can't be made into a rule? Maybe there are too many cases where the determinability of a meaning cannot be determined.
Thursday, September 10, 2009
A 'constructed' (?) machine world
(This doesn't give us an argument for the truth of any specific 'laws', though, except - perhaps - those that make language, minimally, possible).
It is a game of mirrors - we 'look' in the 'world' for laws, and, in so far as we can describe the world at all, we find the laws which make our descriptions possible. These laws, in their turn, can be written down as the code for a machine - we start to think of the world in terms of 'information' and 'algorithms', as though it could 'really be like that'. But this is just another metaphysical metaphor.
And we know this, because we know that no machine constructed in this way could answer certain intelligible questions - such as 'has this machine been properly constructed?' This is the very edge of the machine metaphor. The place where sense and nonsense walk side by side.
Wednesday, September 09, 2009
The Truth Machine (3)
The machine - as metaphor or as formal system - only appears in our theories as a description. In particular, it is described as following rules. (Specified rules, if it is a formal system).
Logicians distinguish between a formal system and its interpretation - we can have 'truth tables', uninterpreted (ones and zeros, Ts and Fs); and we can 'interpret' these as showing the circumstances under which 'Truth' is transmitted from one statement to another.
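The distinction can be made concrete in a few lines (my own illustration; the names are invented): an uninterpreted table over the bare symbols 0 and 1, and a separate 'interpretation' step that reads them as False and True.

```python
from itertools import product

# An 'uninterpreted' table for '&': just a mapping of symbol pairs
# to symbols, ones and zeros with no meaning attached.
AND_TABLE = {(a, b): a & b for a, b in product((0, 1), repeat=2)}

def interpret(inputs, output):
    # The 'interpretation': read 1 as True and 0 as False, so the table
    # now shows the circumstances under which truth is transmitted.
    a, b = inputs
    return (bool(a), bool(b)), bool(output)

# e.g. interpret((1, 0), AND_TABLE[(1, 0)]) -> ((True, False), False)
```

Nothing in the uninterpreted table says anything about truth; that only appears with the second step - which is the point the paragraph below presses on.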
A difficulty with this distinction is that there is already some interpretation in the formal system. For instance, we reject 'P&~P' as contravening the law of non-contradiction if we regard both instances of P as 'meaning' the same thing. Even more fundamentally: Is 'P' the same symbol as 'P'? Obviously yes? Why?
We recognise them as the same, but they are in different places on the page, and are surrounded by different patterns of other symbols; and there are probably other 'differences'. Without 'same' and 'different' we don't even have a symbolism, and we only have 'same' and 'different' under interpretation.
And we only have interpretation if we already have a semantic framework.
And so we can only give sense to 'state of the machine', as well, within a semantic framework.
The Truth Machine (2)
And these future inscriptions could have, as a valid interpretation, that some earlier inscriptions were false. Inscribed/not inscribed would not be true/false.
It looks at first as though we could have shown to be false / not yet shown to be false; but the statement 'X was shown to be false' is just another such (revisable) inscription.
This is the problem with the metaphor: the machine can have no 'interpretation' - it is just a machine. It inscribes what it inscribes, and does not inscribe what it does not inscribe. A Turing machine only halts, it does not judge; and the 'world as truth machine' has no such terminal state.
Except, of course, in one sense: that its terminal point, in terms of our present conversation, is whenever now is. This is because it is incoherent to say (in this conversation) that everything we are saying may not be true. It is necessarily true, for this conversation, that something we are saying may be true. (No computation of the machine could be interpreted as revising this, because its contrary is not a valid interpretation of any sayable thing.)
Does this count?
Maybe not: interpretation only takes place within a playable language game - it is an irreducibly semantic 'process'. It can't be restated syntactically. In this conversation, I validly conclude that 'X' is true; but this doesn't give me any guarantees about what 'strings' or 'symbols' ('X', '~X', or others) may be produced by the machine in the future - it only gives me a conclusion about how these could be interpreted by my interlocutors and myself. If 'X&~X' appears in the output, it can only appear as false or reinterpretable; but neither of these represent definite uninterpreted machine states.
Tuesday, September 08, 2009
The Truth Machine
We almost need to believe something like this if we are to be motivated to pursue certain scientific projects - it is like the belief that it is possible to do science, to explore the logic of the machine ('God does not play dice').
If it was true, would it have to produce an explanation of why it was true? Only if it was a machine that could produce all the true statements, since this explanation would be one of them. If it produced only true statements, then this explanation need not be among them. Also, (algorithmic information theory?) it couldn't produce a list of which statements it could show to be true.
What kinds of things do we mean by something's being a machine? Being modellable in a Turing machine seems plausible, which means that it must have a series of discrete states, and a 'time' dimension (the number of operations carried out). After any finite number of steps, there would be some unproved theorems which it simply hadn't reached yet. In any finite amount of time, there would be knowable things which were not yet known. There would be things that would eventually be demonstrated which were not yet demonstrated. These could not (?) always be distinguished from the things which the machine formally could not demonstrate - 'not yet halted' and 'will not halt' won't always be distinguishable (because we would need a paradoxical algorithm to completely determine this).
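The 'not yet halted' point can be shown with a toy dovetailer in Python (a sketch I am adding; the 'machines' are stand-in generators, not real Turing machines): after any finite number of rounds, a machine that will halt later and a machine that will never halt look exactly the same from the outside.

```python
def halts_after(n):
    # a toy 'machine' that halts after n steps of computation
    def machine():
        for _ in range(n):
            yield
    return machine()

def never_halts():
    # a toy 'machine' that never halts
    while True:
        yield

def dovetail(machines, rounds):
    # run every machine one step per round; record which have halted SO FAR
    halted = set()
    for _ in range(rounds):
        for name, m in machines.items():
            if name not in halted:
                try:
                    next(m)
                except StopIteration:
                    halted.add(name)
    return halted

machines = {'A': halts_after(3), 'B': halts_after(50), 'C': never_halts()}
# After 10 rounds only A has visibly halted; B ('will halt later') and
# C ('will never halt') are indistinguishable at this stage.
print(dovetail(machines, 10))
```

Only a larger (but still finite) budget separates A from B; nothing short of a solution to the halting problem separates B from C.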
The interesting thing is this, though: If the 'world' is not a Turing machine, we could never find this out. (It certainly wouldn't be a computable question.) And there would be a respect in which we could never 'find this out' because an outcome of the chaos would be that we would have no language in which to formulate the conclusion. The end of intelligibility is a kind of possible future state.
If the world looks like a Turing machine, that's partly because it's the only kind of world we can describe. Which means that as long as we can go on discovering and describing things, we will go on demonstrating that the world looks like a Turing machine - and, possibly, thinking that we can work out its whole logical structure, even while we know that this must be impossible.
Meanings and Rules
Any possible algorithms for computing the meanings of self-referential paradoxical statements don't terminate - there is no way of working out what they mean - so we can never play them as moves in the game. This is guaranteed by the way they are constructed - and they can only arise in this way. "This statement is false" can never intelligibly be asserted; and as it is the possibility of its intelligible assertion which would generate chaos, we are safe from it.
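A minimal rendering of the non-termination claim (my own toy example, not a general meaning-algorithm): the naive procedure for computing the liar's truth value just recurses on itself and never bottoms out.

```python
def liar():
    # 'This statement is false': its truth value is defined as the
    # negation of its own truth value, so the computation never terminates
    return not liar()

try:
    liar()
except RecursionError:
    # in practice the interpreter cuts the regress off for us
    print("the computation of the liar's meaning does not terminate")
```

The statement is never evaluated to a move in the game; the 'calculation of its meaning' is all there is.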
Tractable algorithms are a subset of the algorithms which can be finitely described. Some algorithms which can be finitely described are intractable. They loop; or do not produce a usable expression in a finite period of time (e.g. if required to calculate the exact digital expansion of a transcendental number); or they do not produce a usable expression in a relevant period of time (e.g. soon enough for us to use it).
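The exact-expansion case can be illustrated like this (an example I am adding; the Leibniz series for pi/4 stands in for any transcendental expansion): the procedure is finitely described, but no finite run produces the exact value - every output is only another partial sum.

```python
from fractions import Fraction
from itertools import count, islice

def leibniz_partial_sums():
    # pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
    # finitely described, yet it never yields a completed, exact expression
    total = Fraction(0)
    for k in count():
        total += Fraction((-1) ** k, 2 * k + 1)
        yield total

# any finite prefix is usable; the 'whole' expansion is never produced
first_five = list(islice(leibniz_partial_sums(), 5))
```

Whether such a procedure is 'tractable' then depends on whether a prefix of the required accuracy arrives in a relevant period of time.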
When we describe an algorithm in words, of course, we must have a tractable method of working out what the words mean in time to be able to use them in our description of the algorithm. This method, in its turn ...
So, of course, we have another paradox. Or another perspective on a familiar paradox.
Can we say that if we follow a method for working out what to say, then it is a method that we cannot describe? In one sense, this is obviously true: it's a practical fact that we can't explain how we speak. A computer which mimics speech cannot also necessarily produce a verbal explanation of how it does this. But is it necessarily true?
If the computer that mimicked speech was allowed to read and decompile the code that it was running, it could state the algorithms which it was following in the language in which they were written. It might also store in its memory a plan of its own hardware, a description of the compiler etc.
This might not count as an explanation, however. It would be something similar to what a neurologist interested in speech processing would do - attempting to produce an account of the hardware and software underlying the production of speech as a phenomenon.
Would the computer (or the neurologist) be able to say why what was being said was correct? Could it produce an argument for the reliability of its statements? Presumably, since it could speak (and since this is part of having the capacity to speak), it could produce arguments for the truth of individual statements that it made. But could it produce an argument for it, itself, being a reliable implementation of a speech algorithm? Could it produce an argument that it could, in general, speak properly - that it could, in general, tell the truth?
It can't do this by examining and reproducing its own code - this would only be an account of what it did, not of why it was correct.
If we read a short piece of computer code, we might see that it was correct - the code, for instance, for doing a binary search of a sorted list is almost self-explanatory when written out in a high level language. Even here, though, there is a lot of debate about whether this 'self-explanatoriness' amounts to a proof; and the debate (I think?) virtually comes down to how we select our formalism for calculating proofs. The computer code is also a formalism of a kind, and if it is well-defined, it should produce its own proofs. Programmes written in a formally correct implementation of Lisp, for instance, will look like expressions in the lambda calculus - and if the interpreter/compiler correctly evaluates these (another question, of course ...) the results will be provably correct in that calculus.
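For what it's worth, here is the binary search case written out (a standard version in Python, added for illustration) - and it does read almost as its own explanation:

```python
def binary_search(sorted_list, target):
    # repeatedly halve the search interval; return an index or None
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            lo = mid + 1    # target can only be in the upper half
        else:
            hi = mid - 1    # target can only be in the lower half
    return None
```

Whether seeing that this is correct amounts to proving it correct is exactly the question at issue.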
But all we have done, even here, is to show that an expression is a theorem of a formal axiomatic system which can be encoded in a computer programme which should (barring hardware faults) correctly implement the rules of the formalism. We can show that the speaking computer is designed to correctly follow the rules that the programmer has coded into it - we cannot show that these are the correct rules unless we allow that to speak correctly is simply to utter the theorems of some FAS or another ...
And since this is something some people might once have believed, it's worth saying why it can't be true: for Kripkean reasons, the 'rules' of the system can only be made intelligible within a language which already works. The idea of a 'non-linguistic' rule is incoherent. We might think of 'laws of science' here, but these are part of our descriptive grammar - a world of which it was impossible to speak would have no laws. Our reason for thinking that there are rules 'in the world' is that it is only possible to describe a world of that kind, and we know (a priori) that we can describe the world in some way. (When we say we cannot we are saying something about the world; and if we try to say this seriously we find we can't even say that we are saying something.)
We can predict, with high reliability, which rules the computer will follow - but we cannot define the rule as 'what the computer does', however much we might, in normal circumstances, practically rely on the computer to do the right thing. Such a definition would make a computer error formally unintelligible.
If we cannot give an account of rule following and formalism without, first of all, being able to give an account of something, then our capacity to give an account cannot be accounted for in terms of our following some specific set of rules. If it could, then we would have to say that it was not possible for the underlying substrate - the hardware, if you like - to make a mistake here. We would be like perfect computers. And we couldn't even test this perfection, because there would be no other standard: there would be no intelligible enquiry that we could undertake, because the very possibility of intelligible enquiry would depend on the reliability of the hardware.
It is perfectly possible - in fact psychologists have demonstrated that this is true - that the underlying hardware is systematically faulty. People are prone to making certain kinds of cognitive errors: they make incorrect inferences, they are not good at estimating risk, etc. What is the standard of correctness that the psychologists are using here? Why do we not think that this standard, as well, could just be the product of some faulty hardware?
We think it because there is an argument for the reliability of this standard - an argument which, if fully developed, would show that the standard depended on the possibility of there being any standards; of our being able to say anything true at all; of our being able to speak. Even if we never fully develop this argument, we promise it when we make a truth claim - and if we find the argument cannot be made, we withdraw the truth claim (on pain of becoming unintelligible otherwise).
It is our ability to speak that is the standard against which the rules are judged, not the rules which guarantee the reliability of what we say. We have been confused by the fact that good rules generate other good rules; so that we think this must be where all rules come from.
And also that allowing the assertion of contradiction or paradox allows the assertion of anything at all, and so of nothing. Our rules must be consistent; and we must only say things which mean something. We might also say that the rules implied by an intelligible discourse are consistent, and that it isn't possible to (properly) say something which is meaningless. If we discover that a previous statement commits us to contradiction or paradox, we reinterpret, revise, or withdraw it.
So could there be 'hidden' programming? If we can make any sense of the idea, it is only as defining a system that we can explore 'from the inside', and never hope to give a full account of.
And it would have to include everything in the world - not just us. It is the whole programme by which the world runs, and there is no machine that it runs on because that machine would, itself, have to be part of the world. It is not even a thing that we can describe, because the description would have to include our description. And a description would have to explain its own veracity - it would have to explain our capacity for truth-telling, and so would generate open questions about its own truth. Or it would have to be in a language in which an account of our truth-telling could be given; and which would (therefore) be unintelligible to us in that respect. All of these are pointers to the void - there is nothing sensible we can say here.
This is why we must both explore and calculate; and explore how to calculate, and calculate how to explore ...
Monday, September 07, 2009
A deranged speculation ...
I don't mean here an acquired dysphasia, or specific neurological damage, but a profound inability to speak - to have never learned language, but to be otherwise human.
Given the plasticity of the brain, we would be neurologically different in significant ways. For the sake of the experiment, I'll assume that the language deficiency is the result of an otherwise benign event (hard as this is to imagine) so that there are no other residual neurological effects of trauma.
There have been plenty of cases of children brought up in circumstances where they failed to acquire language, usually for otherwise traumatic reasons; so this thought experiment has some concrete correlatives. They may not serve to settle the speculation I'm going to suggest, though - and perhaps there is no available evidence that could.
The speculation is that the phenomenology of such a person may be almost unrecognisable to a language user; and that certain 'obvious', and even physical categories and preconceptions may be missing.
For instance: without the self-reflection and concept construction capacities that language brings with it, could a person with this deficiency be said to have concepts of time and mortality? They might well show (through their behaviour) the capacity to predict, and to feel fear. Animals do this as well. But what would their fear be of? And could their capacity to predict encompass a picture of their whole life as a sequence of events that form part of a larger partly causal or rational network? Would they even have a recognisable perception of the passage of time?
Our earliest memories are often of events that happen about the time we learn to speak. Is this because we don't have the capacity to organise or reflect on what happens to us in a self-conscious way until we can talk about it?
And if these aspects - time, space, causality etc. - of our phenomenological frameworks depend upon language acquisition, and are absent or seriously impaired in a person who never acquires language, what should we make of questions about their material underpinnings, or their 'reality'?
If my cat doesn't think it's going to die, and doesn't think anything when it does, does it feel as though it lives forever?
[Epicurus (?) seems to have thought so: "If death is, I am not; If I am, death is not."]
Sunday, September 06, 2009
Is 'the world' driven by a 'hidden' axiomatic system?
There are other issues, though:
Would a 'complete' system also have to predict its own predictions? This is 'algorithmic information theory' territory. The system, and especially its mechanical substrate (if these can be formally distinguished), is part of the world it is trying to predict.
Would it also have to include us? Would our language (and its mechanical substrate ...) be part of what it could explain? If it did, would it have to include an isomorph for a theory of truth for our language? (Since it would comprise, in some sense, an ultimate 'meta-language').
This has a funny consequence, which is that we could not translate the language of this theory into our own language - because we cannot state a theory of truth for our language in our language. This means that the language of the 'ultimate theory' would be incomprehensible to us, and so, therefore, would the theory.
And if we take a clear Davidson/Quine line on this, we not only can't translate this language, we have no grounds for treating it as a language.
" ... is true"
(1) It might (if it contains a lot of information) make it very complex.
(2) It might make it difficult to interpret (to give a semantic interpretation of).
(3) It might leave the issue of its consistency uncertain - either because
(a) It makes its consistency hard to calculate (complexity issue?) or
(b) It makes the system into one in which Gödel/Turing type concepts can arise. (Is this correct?)
So when I say 'X is true', and think about this as introducing a new rule or axiom, then we might accept or reject this proposal on a number of different grounds. We (my interlocutors and I) need to decide what to do with the proposal ...
What account would I give of a rejection on empirical grounds?
I might compare this with complexity: the new rule makes it very hard to play the game. People may 'explain' this by saying: "But not X!" ("But it just isn't raining!"). This doesn't help to clear the confusion. Perhaps if they said "I can't think what you might mean by insisting that X is true ..." - I don't know what you are committing yourself to; how X fits into the game. I think this would be a better answer, from someone still trying (seriously) to play. Flat contradiction "But not X!" is an end point, not a move in the game (just as, in another context, "But necessarily X!" would be). It's like saying "I'm not playing your game!" - I'm no longer an interlocutor on that basis.
When we experiment with an axiomatic system, we are interested in whether certain statements are theorems of the system; whether the system makes sense (is consistent?); and whether the system has an interpretation (an 'application').
Chaitin is right that a lesson of the halting problem is that mathematics requires experiments. But what kinds of experiments? Well - experiments with ways of talking. (A defining characteristic of 'real' or 'natural' languages is that they are both the fora for making these experiments and a product of their outcomes).
Serious interlocutors take responsibility for the intelligibility of their truth claims - they do not hand this over to a 'neutral' argument. If we try to make 'X is true' look something like 'X is a theorem of S', we simply move the responsibility from the 'X is true' claim to the 'the axioms of S are true' claim.
Maybe a good way of saying it is to say that when I claim that 'X is true' I also promise to make playing the X move intelligible. I'm prepared to show how it fits in, what adjustments it requires etc.; and to stand by those demonstrations and adjustments.
Within a shared game, of course, some things 'come out' as true, in the way that some statements turn out to be theorems of an axiomatic system. This is inevitable, because the shared game depends on some shared rules, and these rules have consequences. (The shared game must avoid contradictions, but only because a game which admits contradiction has no rules.)
Creating and modifying rules - introducing axioms via the 'X is true' formula - is also part of the game, although certain 'fundamental' rules (among which are the rules which render the introduction of new rules intelligible) cannot be broken. We can search for these fundamental rules (this is doing philosophy) but we cannot expect them to generate the whole system.
Tuesday, September 01, 2009
A new version
The version numbers are fairly meaningless - they just represent the order I put them up in; there is no commensurate scalar improvement in the content.
I haven't left 'marked changes' between this version and no. 12, because the changes were so extensive the markups were meaningless.
I may revert for future versions, but it depends on what they look like.
Monday, August 31, 2009
Metaphysical comforts
What would an argument for realism (or God, or Unicorns) achieve? It would show that our language game would only be intelligible if there were real things (or Gods, or Unicorns) out there; or that the real things had to be organised in a certain way; or that they would have to have certain properties. But the phenomenological reassurance we need cannot be supplied by such an argument because the argument cannot address our anxieties about the possibility of intelligibility itself - about the possibility of producing arguments.
This is a 'nonsensical' anxiety, but only in the sense that as soon as it is articulated it is shown to be unintelligible - because articulating it is unintelligible. But how do we privately reassure ourselves about this?
Wittgenstein is correct - the need for such a reassurance is pathological, it is a kind of mental illness.
But some of us are mentally ill - we have drifted beyond the scope of interlocutorship, to coin an ugly word. It is also clear that some 'illnesses' of this kind can be ameliorated, or abated, by 'talking cures' - counselling, psychotherapy - which clearly lead the subject back into the realm of the sane, back within the scope of our comforting shared conversation. In extreme cases, these therapies must work by showing, in the Wittgensteinian sense, since the capacity for 'reason' (the capacity to participate in the conversation) has lapsed in some way.
Whether there must always be an island of intelligibility at the centre of a therapeutic exploration, the possibility of this not being the case has to be entertained. A child learns language without first of all knowing how to speak - there is no prima facie reason why a seriously deranged person might not be in this situation, though their behaviours and interactions would not be child-like in other ways.
Do we think of learning/re-learning language as being like acquiring/re-acquiring (?) a comforting pattern of (complex) behaviour? Does playing this game soothe us?
Perhaps, because of the kind of animals that we are, it does. We can tell all sorts of stories about why it might - why, even, it makes sense that it might - but these are only going to be intelligible (in any sense) to those who can already play; the comfort comes before the intelligibility.
If we have a story that says 'And now, of course, I can explain why I should be comforted by this', we are missing the point. We have already forgotten too much to be able to understand the initial problem - we have not addressed the terror, we have just learned a way to dance around the campfire that allows us to ignore it. An argument for a fundamental ontological category, or condition, can only be a story like this.
We might think we can comfort ourselves by demonstrating the existence of God, or substance, or their co-extension, but none of these can demonstrate the reliability of the game of intelligibility within which our demonstrations work - only an unreflective, unselfconscious, probably instinctive, language user would even think that they might.
Note the odd frustration one feels attempting to persuade someone who is insane or otherwise irrational: their imperviousness to argument can seem threatening. They are doing the impossible, we think. But only because walking out of the light of the campfire is as good as impossible to us. There is no 'out-there' beyond the light of what can be discussed - we prove this to ourselves, and remark knowingly on the unsayability of the unsayable.
But once the game stops being comforting - when the models of intelligibility on offer are more terrifying than the darkness, or the familial campfire represents cognitive dissolution - then madness becomes intelligible practice, however unintelligible its utterances.
When a philosopher needs comfort, he or she should apply to a therapist, not to an argument. The argument will only show that it can work as an argument - it won't show you that you have; or will always be able; or can require someone else; to play the game.
And this problem is more than theoretically interesting, because it inspires errors. The right answer is not necessarily metaphysically comforting - unless one makes a metaphysical fundamental of argument, and this would be as much a mistake as anything else. An ontological 'rationalist' is as wrong as an ontological materialist, in this context.
If our game is only intelligible on the presumption of God's existence, or the grammatical structure of physical reality, then we may have a demonstration of these metaphysical fundamentals but at the cost of rendering them irrelevant to the psychological problem.
If you want the comfort of God's existence, an existential argument will only convince you if you already get at least as much comfort from the reliability of argument. Even to be convinced by miracles, one would need a language of miracles within which arguments from miracles could be constructed. Without this language, how would we know a miracle from anything else?
Would a cat become devout if it saw an angel? If we believe it would, it is only because we can tell an intelligible story about why it would be so, or because such a picture pre-figures any intelligible exchange we might imagine having.
We can either have comforting metaphysical presuppositions or we can demonstrate them from the possibility of intelligible conversation. We can't do both. An argument for ontological realism cannot underpin the reliability of empirical argument because it is already an empirical argument. A scientist whose ontological realism is an indispensable heuristic can only articulate that realism 'metaphysically' (or 'nonsensically', in the Wittgensteinian sense - i.e. as a hinge, for which it is incoherent to require an argument).
When normal people talk this way - when they do things like articulating hinges - they often believe that they are talking 'philosophically'. 'Real' philosophers indulgently point out to them that philosophy is about constructing arguments, while, of course, surreptitiously seeking comfort where it cannot be found ...
Wednesday, August 26, 2009
Doing things with words
I think that he was more interested in what we can say than in what we do say. His examples aren't, perversely, meant to be exemplary - they are meant to be exploratory.
Not 'This makes sense' but 'Does this make sense?'.
He was showing us things, and asking whether we could see how the game was played - and how it might be developed.
Hinges and Fundamentals
The argument is a modus tollens, not a modus ponens: If the hinge was false, then we could not speak (this way) - we can speak (this way), therefore the hinge is true.
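The validity of the modus tollens form itself - from 'if A then B' and 'not B', infer 'not A' - can be checked by brute enumeration of valuations; a small Python sketch (illustrative only, my own):

```python
def implies(a, b):
    """Material implication: 'if a then b'."""
    return (not a) or b

# The form is valid iff no valuation makes the premises (A -> B) and
# not-B both true while the conclusion not-A comes out false.
counterexamples = [
    (a, b)
    for a in (True, False)
    for b in (True, False)
    if implies(a, b) and (not b) and not (not a)
]
assert counterexamples == []  # no counterexample: the form is valid
```

No valuation makes both premises true while the conclusion fails, which is all that formal validity amounts to - the interesting work in the hinge argument is done by the premises, not by the form.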
In OC Wittgenstein cannot explain the reliability of the hinges:
470:
"Why is there no doubt that I am called L.W.? It does not seem at all like something that one could establish at once beyond doubt. One would not think that it is one of the indubitable truths.
[Here there is still a big gap in my thinking. And I doubt whether it will be filled now.]"
Then in 474:
"This game proves its worth. That may be the cause of its being played, but it is not the ground."
But to give a ground is to make a move in the game. To ask whether the game has a ground is to ask, if it is to ask anything, whether the game might not be playable if there were no ground. However, the game must be playable because we cannot say that it is not. The absence of a ground would demonstrate the absence of need for a ground, if it demonstrated anything - if the game can be playable without a ground, then grounds are not required for this kind of conclusion.
And since any prospective ground could only be articulated within the game, the ground could not be more reliable than the general possibility of giving grounds. We cannot produce an argument for the reliability of argument.
So: a ground is not necessary, and no ground of the right kind can be given. We have to start from the playability of the game.
Sunday, July 26, 2009
Something that can't be explained
Why should I have special access to some of the world's events and not to others? Why am I aware of any of them? Why not all of them?
We can sort of imagine (or maybe only think we can imagine) an impersonal physical universe - not Laplacian, perhaps, but some more up to date equivalent. We might even say things like 'you are a part of this universe - a sub-system - which interacts with the rest in a specific way, and therefore is only "aware" of some events and not of others'. But this doesn't solve the problem of why I am this particular sub-system and not some other. Why am I not you?
Externally, of course, lots of kinds of explanations can be suggested. Psychologists and neurologists looking for a 'seat of consciousness' or for some systemic explanation of consciousness focus on behaviours and cognitive structures characteristic of those of us who think of our selves as conscious and of those others we attribute consciousness to.
This doesn't help very much, though, because these behaviours are only accessible to theory once they have been described, and they are - in some sense - associated with all conscious beings. What I'm interested in is my consciousness, not everyone else's. I want to know why I'm aware of the world I'm aware of. And much of this world is not only unverbalised, it's unverbalisable.
So far as my participation in this conversation is concerned, of course, I could be anyone - or at least anyone like me. I wouldn't actually have to be me. If I was replaced by a cunning automaton, who would know except for myself? I might, like a Stepford wife, be ousted by a murderous doppelganger - and it might be ousted in its turn - for all this conversation was concerned.
But it would, surely, concern me. I wouldn't be here any more.
Does this make any sense? Well, not in this conversation - it is a 'hinge' of this conversation that I am the same person who wrote the previous posts in this blog.
Is this being a 'hinge' a solution to the problem?
In one sense yes, because it points to the public conception of a continuing person with the ability to converse. We can even understand (in this public world) why some people know some things that other people don't.
But in another sense, it just evades the question. Why am I sitting typing at this keyboard, looking at this screen? Not a public puzzle, but a question about my particular phenomenology.
Why was my world constructed with this subject? There is nothing in our shared world that can explain this to me.
Is this the same as the Zombies problem?
No. Whether or not we treat each other like zombies is a normative issue. I don't think mind theorists take the normative aspects of mind attribution seriously enough, but that's another problem. I can't treat myself as a zombie, so this question is not the same as the zombies problem.
In a sense, posing this question is a poetic move in this conversation - I am suggesting something, alluding to something, that cannot really be said; and I am doing this by appearing to say it.
(I can put the idea that it may not be possible to talk into your head by saying 'It is not possible to talk' even if this is strictly nonsensical).
Whether or not you can engage with this may have something to do with how self-conscious you are of your linguistic abilities - how much you are aware of talking as an activity depending on an inarticulable substrate of skills ...
Thursday, July 16, 2009
Propositions and Rules?
It's hard to say this, though because whether or not we are dealing with a proposition or a rule depends on what game we are playing - or, rather, on which part of the 'overall' game we are playing. And the game we're playing determines what we can say and what we can't say ...
When we move from one game to another, we indicate that we have done so by using explicit truth-predicates, which, I think, are always used in a rule-like way. When I say 'My cat is hungry' I mean that my cat is hungry. When I say 'It is true that my cat is hungry' I am telling you how I am going to talk, and inviting you to talk the same way. You might respond by pointing out that this rule won't work for some reason, or you might find you don't know how to play the game I seem to be proposing, but while the first statement can legitimately be described as being about the world (with its hungry cat), the second cannot.
Thinking that we are talking about the world when we use truth predicates, rather than about how we are going to talk, is one of the most fundamental errors that philosophers have made.
Tuesday, May 19, 2009
Ramsey's Ladder, again
We want to preserve the first rung, but not the rest. We want to leave S meaningful, or rather usefully meaningful (!) where S="X is true"; but we want to leave "S is true" vacuous.
Rules, of course, work like this. If S="It is a rule that X", this means something. But to say "It is also a rule that S" is vacuous. We don't have a 'master' rule that says we must follow the rules. To propose something as a rule is just to propose that it should be complied with (with contextual qualifications, and under agreement).
If statements of truth values are, or are importantly like, statements of rules, then statements attributing truth or falsehood to statements of truth values will always be vacuous.
The traditional liar paradox is a statement about the truth value of itself: "This statement is false" contains a statement about a truth value. Taken as a rule, it is simply incomprehensible: there is no answer to the question 'What way does this rule require us to talk?'.
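The incomprehensibility can even be exhibited mechanically: any truth value v assigned to the liar sentence would have to satisfy v = not-v, and a brute-force search over both possible values (a tiny Python sketch of my own) finds no consistent assignment:

```python
# The liar sentence asserts its own falsehood, so any truth value v
# assigned to it would have to satisfy v == (not v).
consistent_values = [v for v in (True, False) if v == (not v)]
assert consistent_values == []  # no assignment works: the 'rule' cannot be followed
```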
And it only appears to be consequential if we allow the rule that we should follow the rules ...
Is it true that we should tell the truth (generally, with appropriate caveats etc.)? It's vacuously true, in a playable game. Its falsehood can't be a move in a playable game.
And:
"We should always talk as though this statement is false" is incomprehensible at the first level - it simply cannot mean what, at first sight, it pretends to mean. It tells us to do something that is logically impossible. (What it appears to mean is something that would falsify an interpretive hypothesis, in the Davidsonian sense).
Wednesday, May 06, 2009
Cartesian Anxiety
The inexpressible hyperbolic doubt that we are always talking at cross purposes.
Pace Dante, the gates of hell bear no intelligible warning.
Sunday, May 03, 2009
Interpreting behaviour
It is natural to think, here, of there being (a) the behaviour and (b) a set of interpretations. But we shouldn't make the mistake of thinking that there must, therefore, be a way of describing the 'uninterpreted' behaviour. The only descriptions we may have might be interpretations, and the 'sameness of behaviour' might only consist in our agreement that the interpretations are interpretations of the 'same' behaviour.
We may demonstrate this 'sameness' by identifying key aspects of the situation ('navigation markers') - such as time, place, actors etc. - but these are not descriptions of behaviour, and might not include anything which told us what was actually happening.
We might imagine, also, that some mechanical descriptions of limb movements and interactions with objects might somehow 'capture' the raw elements without introducing an interpretation. Computer simulations are based on models which encode this kind of information, but they are extremely difficult to interpret as 'descriptions' - they contain a great deal more information than our normal descriptions, and it is organised in a way which is hard for a human being to decode.
While the possibility of this modelling language might have been predicted from our judgements of sameness (and the existence of the 'navigation markers'), it certainly hasn't played any important role in producing judgements or demonstrations of sameness.
Also: it isn't clear that any such language really could capture all of the things which contribute to our actual arguments for sameness (where these are produced). The navigation markers relevant to a particular demonstration may be idiosyncratic, and missing from the encoded description. We may not refer to time and place or personae but infer these from some other data which are not generally included in the descriptions encoded in the modelling language.
So what we have is just interpretations and the judgement of sameness. And we can make this judgment reliably - it would be incoherent to suggest that we don't. We also produce intelligible arguments for sameness which do not depend on 'full descriptions', or anything like these.
Wednesday, April 29, 2009
Speaking Correctly
Instinctively, we say - we could exclude meaningless utterances/recordings/artefacts (... etc.).
So does 'speaking correctly' require only that we follow the 'rules of meaningfulness'? (And if, for instance, we allow false statements only as the contraries of true ones, and let Duns Scotus deal with the issue of trying to make them work as serious assertions ...).
This is no better, because we need to know the meanings of the rules in order to follow them correctly, and also because we cannot know (Kripke) what rule someone is following simply from their behaviour: and we would need to know this in order to know that we knew what they were saying. And we must, broadly, know what they are saying if we are having a conversation with them. (E.g. discussing meaning).
And there is another problem: we might say that two 'expressions' mean the same thing if they are correct translations of one another. This could be a practical matter - e.g. we use them interchangeably in certain contexts; or a theoretical one - we say that they mean the same thing. This second option is another move in the game; and the first is identified by a move in the game - a description of the relevant practices.
In order to speak correctly we follow certain syntactical rules, but this is neither (ultimately) sufficient nor necessary. We can create new rules and we can follow the rules meaninglessly.
Is there a general limit to this?
Not one that can be stated, because if we try we will generate an open question paradox.
Are there particular limits (boulders in the torrent, to which we can cling or against which we would break)?
There are the rules of logic - whose contraries immediately generate meaninglessness by generating contradictions - but whose scope of application can only be defined after we have agreed on certain interpretations, after we have agreed on what would count as conforming to or breaking the rules. ~(P&~P) is only 'obvious' if both instances of 'P' mean the same thing (and there are deeper, more inarticulable issues of interpretation as well).
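Once the interpretation is fixed - both occurrences of 'P' meaning the same thing - the 'obviousness' of ~(P&~P) reduces to a mechanical truth-table check; a minimal Python sketch (my own illustration):

```python
def is_tautology(formula):
    """Brute-force check of a one-variable propositional formula
    over every assignment of truth values."""
    return all(formula(p) for p in (True, False))

# With both occurrences of 'P' interpreted identically:
assert is_tautology(lambda p: not (p and not p))       # ~(P & ~P) holds everywhere
assert not any((p and not p) for p in (True, False))   # P & ~P holds nowhere
```

The calculation is trivial; what it cannot do is certify that the two tokens of 'P' have been interpreted the same way - that agreement comes before the calculation.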
We may, in other words, reduce syntax to semantics; we can never reduce semantics to syntax. We can only state syntactical rules, and we can only make unequivocal judgements about whether or not they are being followed, in the context of a meaningful conversation.
What kind of judgement is it that we are engaged in a meaningful conversation?
It's a normative judgement on the part of the individual participants, but it is not expressed within the conversation (where it would be vacuously true, where its contradictory would be nonsense) - it is instead expressed by their participation. A doubt about it would bear on this, and a serious doubt would lead to withdrawal (not to some absurdity such as 'you are all talking nonsense', except as a rhetorical exaggeration ...).
This withdrawal might express itself in a number of ways, including an apparent, but dishonest, continuation: 'participation' with purely instrumental goals, for instance.
This is the real Cartesian anxiety - not that we are being deceived by our senses, but that we are being deceived by one another. The first might worry philosophers, but the second could drive anyone mad - and there is no philosophical speculation which could entertain it.
Tuesday, April 28, 2009
Transcendental arguments ...
There are no reliable moves in an invalid argument (Duns Scotus).
Is it an axiom? Or is it a statement of the possibility of there being some axiomata?
What if we found that some axiom A was (a) coextensive with V with respect to arguments or (b) was a necessary condition of 'some argument is valid' or (c) was a consequence of 'some argument is valid' but not of some other (independent) statement?
It's hard to imagine (a) being the case without this being a consequence of either (b) or (c), except that logical rules (symbol manipulation rules) look like possible candidates. Without them, it's hard to see how (b) or (c) could be demonstrated and there is some sense in which the scope of these rules is the same as the scope of valid argument (and therefore of V).
A logician might argue that V contains an undefined notion of validity, and so is 'meaningless'.
This is only the case if the definition is absent - not if it is simply incomplete. In a natural language, the concept of validity is both functional and incomplete. Assertions of validity or invalidity may be stipulative ("This is the way we should talk."), or experimental ("Let's try talking this way.") But these stipulations and experiments may work or not work; and with them the language itself (if the breakdown is widespread) and so any possible concept of 'validity' at all.
Logic works with rules, but the language in which the logical rules are stated cannot be analysed as rule based.
Saturday, March 28, 2009
Truth and honest practice
The bomb disposal example (see "Meaning and 'transmission'") illustrates this. Honest practice here clearly excludes truth telling.
Monday, March 16, 2009
Things and Words
When I say that I can say 'The cat is hungry' - that this is a legitimate move in the language game I am playing (i.e. that it is 'true' for that game) - because of the way the world is, I am not saying that I can say 'The cat is hungry' because the cat is hungry. This is a kind of category mistake - or a mistake about the meta-level we are dealing with.
Of course it is true that if I say (seriously) 'the cat is hungry' I must also believe that the cat is hungry (or I'm not playing the game properly). For the purposes of that game, I must believe that this is the way the world is. But that is only because I am using this language to describe the world, and not some other language - e.g. the language I would use to explain how this language (the one in which the cat is hungry is a fact) works.
If I try to explain how this language works, I end up saying silly things like "'The cat is hungry' is true because the cat is hungry" (or their equivalent). Or, I say: "'The cat is hungry' is true in this language game because the world allows this language game to work". And the second statement doesn't say anything specific about the world except that we can talk about it the way we do, in fact, talk about it ...
Maybe this is 'the fact' of the slingshot - I would need to look at this again. It's where we go when we try to make general statements about what must, ultimately, be true in order for anything to be true. We end up with just this one feature of the world: that something can be true about it.
Saturday, March 14, 2009
Two Dogmas
In fact, the only 'fundamental' thing we can say about the world is that it allows us to speak the way we do: and we can know this a priori, because we can know (in the context of speaking to one another) 'we can speak about the world' a priori.
If we try to say anything else, if we propose some other metaphysical fundamental, we are introducing a new constraint. We can do this with respect to more constrained games (for which these constraints might be hinge propositions), but in the most general cases we are caught by the 'as though' problem: that no philosophical importance can be attached to distinguishing between, for example, 'There are real physical objects in the world' and 'We can talk as though there are real physical objects in the world'. The pair 'We can talk' and 'We can talk as though we can talk' is the only general circumstance in which the metaphysical implication is direct - the second statement is just a repetition of the first.
Thursday, March 12, 2009
Syntax and Semantics
But we only get meaning through interpretation - the different states of the cubic centimetre of air are only interesting if we can distinguish them - by their consequences, or by decoding them in different ways.
This may be difficult with the gas example. We might think the computer example is easier - we can list the states of all the independent storage elements, and show where the differences lie.
But this doesn't mean that we can 'reverse engineer' all of these states and represent them as the intelligible output of a programme - even one with errors in it.
Some states cannot arise as a result of programming errors (at any level) and can only result from hardware faults, but a single hardware fault might result in a large number of possible memory states, so that we only need to recognise them as members of a class in order to find the fault - we don't need to be able to identify and interpret each one separately.
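The point about fault classes can be sketched in code. This is purely an illustrative toy (the stuck-bit fault and all names here are hypothetical, not from the original): a single hardware fault - say, one bit of every word stuck at zero - generates a large class of corrupted memory states, and diagnosing the fault only requires testing membership in that class, not interpreting each corrupted state individually.

```python
# Hypothetical fault: bit 3 of every stored word is stuck at 0.
STUCK_BIT = 3

def apply_fault(word: int) -> int:
    """The state the faulty hardware actually stores for 'word'."""
    return word & ~(1 << STUCK_BIT)

def consistent_with_fault(intended: int, observed: int) -> bool:
    """Is 'observed' in the class of states this fault would produce?"""
    return observed == apply_fault(intended)

# Many distinct intended states, one recognisable fault class:
intended = [0b0000, 0b0101, 0b1111, 0b1010]
observed = [apply_fault(w) for w in intended]

# Every observed state lacks bit 3, whatever the program 'meant';
# we recognise the class without interpreting each state separately.
assert all(consistent_with_fault(i, o) for i, o in zip(intended, observed))
assert all((o >> STUCK_BIT) & 1 == 0 for o in observed)
```

The diagnostic test (`consistent_with_fault`) never asks what any particular memory state means as program output; it only checks a structural feature shared by the whole class, which is the sense in which the fault is identifiable even where the states are not individually interpretable.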
Wednesday, February 11, 2009
The overall plan ....
(1) For broadly Davidsonian reasons, as well as others, we can roughly identify our 'conceptual scheme' with the conversation we are presently engaged in, and since we identify other things as conversations only if we can translate them, we end up with, in a rather ragged way, only one conceptual scheme. If we think it might have inconsistencies, we can try to find them & eliminate them. We have no intelligible way of speculating about what kind of language game this process would result in, because we would need to be able to translate it into the language we use now in order to indulge this speculation; and this is something we cannot, by definition, do.
(2) Although synthetic a priori beliefs may be elusive, synthetic a priori statements are easy to construct, as we can make general statements about the possibility of language: 'We can speak' is either true or not a statement. We need to be able to make assertions, because to deny this is to make an assertion. If we can make assertions, then 'We can speak' has the corollary 'We can tell the truth'.
(3) Generic theories of truth (of how to speak) are blocked by the open question argument. Statements that it isn't possible to know we're telling the truth are self-refuting. So we must be able to know we're telling the truth without being able to say why, in order for language to work: and since we can assert the latter a priori, the former must be the case.
(4) The structure of fundamental arguments now becomes recursive: we can never argue 'X must be true because it is a theorem of my theory of truth', and we certainly can't argue 'X must be true because it isn't possible to tell the truth with certainty'. The only alternative is the transcendental argument: 'X must be true or it would not be possible to tell the truth'.
(5) Since 'We can speak' is an empirical statement, it is possible to construct fundamental empirical arguments transcendentally: Either Empirical(X) is true or it is not possible to make empirical statements. This is the bridge from reason to the world.
(6) While it may be possible to construct a transcendental argument (as in 5) for the reliability of the senses, this would not rescue traditional sensory empiricist epistemology - it would show that the reliability of the senses depended upon the reliability of empirical statements, and not vice versa.
Some stages obviously need elucidation (!) and there are lots of interesting consequences and further developments...
But the main elements are all here.
Sunday, February 08, 2009
Another take on the Open Question Argument?
Another way of putting this: Sharing an articulated theory of truth is also sharing an interpretation of such a theory, and our only grounds for saying that we share an interpretation are the same grounds we have for saying we share a tacit theory of truth - that we can continue talking to one another. And within the language game we are using to converse, we know this a priori. If the interpretations diverge, the game breaks down - and we don't even have anything with which to say that it has broken down, far less discuss our varying interpretations.
(We do, obviously, discuss varying interpretations of some linguistic moves - but only within a shared framework of truth-telling.)
Friday, January 30, 2009
Phenomena
We think we 'really' share them because we can talk as though we do.
They seem fundamental because we seem to 'refer' to them introspectively. But there is no reason why any two individuals' internal states should be 'commensurable' beyond what is required for mutual intelligibility.
This is why there can be no 'science' - no articulated theory - of consciousness. We do not share the aspects which we would need to articulate in order to make such a science possible.
Saturday, January 24, 2009
Reminiscence ....
"Kripke’s paradox cannot have the consequence that we cannot be sure that someone who states it is following the rules that make its statement intelligible."
It's really the solution to the paradox, and also an explanation of how we should think about language and certainty.
Thursday, January 22, 2009
Introspection
If you think 'what do I feel about what it would be appropriate to say, here?', then you are likely to come up with the right answer, since knowing the right thing to say is just what competent language users can do.
But the reliability of these introspections is misleading: it can make us think that these feelings and internal processes are at the root of the reliability of the move we feel is appropriate, or even of the language game itself.
(In order to show that a particular move in the game is reliable, or correct, we must produce an argument, not a feeling. It is incoherent to ask for an argument for the reliability of the whole game: an argument for the reliability of argument would not demonstrate anything.)
And this observation extends to our empirical intuitions as well as our 'language instinct'. It is not our sensory perceptions which make us reliable theorists - it is our ability to agree with each other about them. Our theories are linguistic artifacts.
Our ability to act in a rational way - an ability any moderately sentient animal exhibits to a greater or lesser extent - is not an ability to theorise, except under interpretation. If I describe a cat as having a theory, I am describing its behaviour in a certain way - as having a certain normative content. There will always be an alternative 'cognitive', or computational (homeostatic?) interpretation which eliminates this content. This isn't possible in the case of a person who articulates a theory, because we can only produce a computational account of this behaviour by reducing the linguistic performance to something mechanical. Articulating a theory is participating in a conversation - we don't automatically do this by moving our jaws in certain ways and producing certain sounds.
I might, in a very complex conversation with you, give you an account of how your physiology and neurology instantiated certain cognitive/computational processes in such a way as to produce the output we recognise as 'speech'. But when you said 'Ah yes, I see', I would hardly respond 'And there's another example of it working'.
Wednesday, January 14, 2009
Belief, Interpretation, and Criticism
This means that I can't just 'accept' it. There is no belief without interpretation.
And there is no interpretation without critical interpretation. If I wonder 'what sense does this make?' then I am wondering 'how can this make sense?' This is the same as trying to figure out how I can continue the conversation with you.
Just because the outcomes of these reflections can seem obvious, and the reflections themselves 'subconscious', they are not therefore 'given' in any useful sense. They are the outcomes of cognitive processes which, if we are not able to articulate them, are still our 'practice'. They are things that we do.
In this sense, I cannot believe you without critically interpreting what you say: without, in the literary sense criticising you.
Whatever private reservations we might have, and however we might (again privately) engage only provisionally in a conversation, there are only some interpretations which are consistent with it being a conversation. When we articulate a criticism, therefore, it must be consistent with the possibility of the conversation within which it is articulated.
This means that there are some criticisms which are wrong (inconsistent with the possibility of their supporting conversation), and some which are not-wrong. A disjunctive list of the not-wrong criticisms must be a correct criticism - however qualified and inconclusive.
The consequences of this may be trivial, however. Northanger Abbey is neither a Western nor a vacuum cleaner repair manual. And the 'disjunctive list' may be unmanageably long.