Monday, January 25, 2010

Another Summary

We need to take the 'language game' metaphor completely seriously - even to the point of thinking about linguistic exchanges in terms of game theory.  Except that the moves we are analysing are irreducibly semantic, and one of the objects of the game is to maintain the game - to maintain the semantic underpinnings of the possibility of describing any game, including formal mathematical games.
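A toy sketch may make the game-theoretic gloss concrete. Everything here (the `play` function, the payoff-as-duration idea) is an illustrative invention, not an analysis drawn from the post: the point is only that if one object of the game is to maintain the game, then a move that breaks intelligibility forfeits whatever payoff a sustained exchange would have yielded.

```python
# Toy model: a linguistic exchange lasts only while each move remains
# interpretable by the other player. One 'payoff' of play is simply
# that the game continues. (Purely illustrative; all names invented.)

def play(moves):
    """Each element of `moves` records whether that move preserved
    intelligibility. A move that cannot be interpreted ends the game;
    the payoff here is just the length of the sustained exchange."""
    rounds = 0
    for interpretable in moves:
        if not interpretable:
            break  # the semantic underpinning fails; no game remains
        rounds += 1
    return rounds
```

On this picture a player who always preserves intelligibility sustains the game indefinitely, while a single uninterpretable move ends it regardless of what would have followed.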

Tuesday, January 19, 2010

Necessary Paradoxes

We experiment with how to talk - we find out what works.  Sometimes we can write down rules from our experiments, that can be used to predict what will work and what will not.  Some of these rules will be provisional, and some will be derived (and so hypothetical on their derivations).  All these rules tell us what will be true.  Some (provisional and hypothetical) rules are just statements like 'P is true'.  Some of these rules seem to be hypothetical only on the possibility that there are any rules like this at all - the possibility of intelligibility, of language itself - but there will always be an interpretive (a semantic) step in any demonstration of this.  To those who accept this step - we might call them those to whom we can speak - the rules which can be demonstrated in this way will seem to be absolutely true.

We need 'is true' to talk about all these kinds of rules.  We cannot ('in principle') write down all the rules - we never reach the end of experiment.

'Semantically closed' languages must be 'empirically' open.  It is the paradoxes that give us science.

Tuesday, January 05, 2010

Prediction, Theories of Truth

I was trying to explain this argument to someone a few days ago, when we both suddenly saw the problem with it:

In outline, it is the argument that a comprehensive 'predicting machine' would have to incorporate a theory of truth for the language of any people whose behaviour it was going to predict, and so its workings would be incomprehensible - unintelligible - to them.  It would have to be a 'black box'.

The problem is this:

The machine could predict behaviour - physical movements etc. - without intentional content.  In fact, if it were a machine which conformed to the traditional deterministic metaphors, it would be restricted to doing only this.  Someone examining the output of the machine might interpret it as having content - 'I know what these sounds mean', or 'I know what is described here - it is a person who is thinking this'.  But this interpretation would be made now - in the conversation of the interpreter.  It would not be a necessary component of the raw output.

If the output were to have semantic content, the predictor could not be just a mechanical device.  A semantic predictor would need to produce interpreted output, in the language of the user.  It is this kind of predictor - an 'oracular' predictor - which would need to incorporate a theory of truth for the language of its subjects, but a predictor of this kind could not be rendered in rule-based terms - because it could not be a 'machine'.  It would have to be capable of normative judgements.  A prediction such as 'Mary will fall in love with John, and will tell him' could be fulfilled in a number of physically quite distinct ways.  The oracular predictor would need to know what our interpretive decisions would be in order to make such a statement reliably.

A computer which renders its outputs in a natural language can look like an oracular predictor, but examining its internal processes partly undermines this illusion.  We would see how it 'appeared' to speak, and what the nature of our 'conversation' with it was.  Whether, in some future, people might continue to attribute interlocutor status to objects such as this will be a matter for them.

What may bring them closer to this attribution would be a 'mechanical' understanding of how biological human beings speak - and this is some way away.  It is likely that the models will be so complex that few will understand any part of them, and no-one will understand them fully.

In this kind of world the metaphor of universal determinism will seem absurd - it will seem to be based on an astonishingly naïve conception of 'science' as something that could be grasped or managed by individual human beings.  The world will seem to be made up of 'black boxes' ... maybe not such an unfamiliar world after all.

Semantic Rules

A minimum condition for a linguistic move to be meaningful is that it must commit its user to some other moves.

A 'use' theory of meaningfulness, and maybe of meaning, could be constructed in terms of 'meaning commitments' - i.e. the further moves that must be accepted as 'legitimate' in order for a move under consideration to be meaningful.  It may be that an explication of the further moves is also an explication of the meaning, and the fact that this could never be exhaustive wouldn't matter, because only some further moves would be relevant to any particular context.
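The 'meaning commitments' construction can be sketched as a toy data structure. All names here (`COMMITMENTS`, `is_meaningful`, `relevant_commitments`) and the example sentences are illustrative assumptions, not part of the post; the sketch only encodes the two claims above: a move is meaningful just in case it carries commitments, and an explication need not be exhaustive because only context-relevant commitments are traced.

```python
# Toy sketch of 'meaning commitments': each move maps to the further
# moves its user must accept as legitimate. (Illustrative only.)
COMMITMENTS = {
    "That is red": {"That is coloured", "That is not green"},
    "That is coloured": {"That is visible"},
    "Blitherum gork": set(),  # no commitments: not a meaningful move
}

def is_meaningful(move):
    """Minimum condition: the move commits its user to other moves."""
    return bool(COMMITMENTS.get(move))

def relevant_commitments(move, context):
    """An explication of meaning need not be exhaustive: only the
    commitments relevant to the context at hand are traced."""
    return {m for m in COMMITMENTS.get(move, set()) if m in context}
```

Note that the commitments are themselves moves with commitments of their own, so a full explication would never terminate - which, on this picture, is exactly why context must do the work of cutting the trace off.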

There would be irreducibly semantic aspects of the further moves - meanings can only depend on other meanings.

When we describe a 'move' in this game, the description will always refer to some semantic elements.  So, therefore, will the rules of the game - which must rule some moves in and others out.  (The rules of logic are not exempt here - see Logic and Interpretation.)

In any case, the apparent clarity of any rule depends upon its being embedded in an 'unproblematic' semantic framework.

Where does this go?