If we can only give an account of rule following in terms of intentionality, then we need to be able to associate a particular rule with an intentional state. Let's say that the 'rule of addition' is what someone is following when we agree with them that they are adding.
We can render a number as a rule by representing it as 'the number that ...'. To be a proper example of a number, we need to be able to do certain things with it - e.g. determine whether another (arbitrary) number is greater or smaller than it, determine the outcomes of using it in certain arithmetical contexts, and so on.
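The idea of a number's status consisting in the operations we can perform with it can be sketched in code. This is a purely illustrative toy (the class name and methods are invented for the purpose): a candidate counts as 'fully numerical' only insofar as rules like these can actually be followed for it.

```python
class RuleNumber:
    """A number rendered as 'the number that ...' - its numerical status
    consists in the operations we can determinately perform on it."""

    def __init__(self, value):
        self.value = value

    def is_greater_than(self, other):
        # Rule: decide whether an arbitrary number is smaller than this one.
        return self.value > other.value

    def add(self, other):
        # Rule: determine the outcome of using it in an arithmetical context.
        return RuleNumber(self.value + other.value)


three = RuleNumber(3)
five = RuleNumber(5)
print(five.is_greater_than(three))  # True
print(three.add(five).value)        # 8
```

A 'paradoxical' number in the sense above would be one for which some of these methods could not, even in principle, be completed.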
It seems likely that the paradoxical numbers pointed to by Cantor, Borel, Gödel and Chaitin aren't fully numerical in this sense. There are arithmetical operations we can't do on them. (Are they like 'incomplete concepts'?).
A number like this can't be fully represented in terms of intentionally tractable rules, and if we can only avoid Kripkean chaos by rendering rules in terms of intentional states - attributable to interlocutors in a (necessarily playable) language game - then these numbers begin to soften in the mist, as the idea of an 'intentionally intractable' rule begins to look analytically incoherent.
Maybe Gödel's paradoxes only arise when we assume that we can construct these rules. Gödel puts together a set of rules that defines validity for the very system within which those rules are (presumably) validly constructed. Maybe we can only imagine doing this if we can imagine a rule that is independent of an intentional state - one that is prior to, rather than dependent upon, the possibility of attributing such a state.
To be able to do computations based on Peano's axioms and the relevant logical transformations is to be able to unambiguously follow certain rules (i.e. is to be able to avoid Kripke's chaos). We cannot show that we have avoided ambiguity by applying some further rules which, mysteriously, avoid Kripke's problem ...
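The kind of unambiguous rule following that Peano-style computation requires can be made concrete. Below is a minimal sketch (the encoding of numerals as tuples is my own choice, not part of the axioms): numerals are built from zero by a successor operation, and addition is defined by the two usual recursion equations, a + 0 = a and a + S(b) = S(a + b). At no step does following these rules leave anything open - which is exactly what Kripke's sceptic asks us to account for.

```python
ZERO = ('0',)

def succ(n):
    """Successor: wrap a numeral in one more 'S'."""
    return ('S', n)

def add(a, b):
    """Addition by the Peano recursion equations."""
    if b == ZERO:
        return a                  # a + 0 = a
    return succ(add(a, b[1]))     # a + S(b') = S(a + b')

def to_int(n):
    """Decode a numeral back to a Python int, for display only."""
    return 0 if n == ZERO else 1 + to_int(n[1])

two = succ(succ(ZERO))
three = succ(two)
print(to_int(add(two, three)))  # 5
```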
We have to be careful of specifying rules which we cannot fully state, and which cannot (therefore) be defined in terms of an attributable intentional state. We should be wary of saying 'there is some rule such that following it would have the consequence X'; and, of course, of saying 'there is some number such that ...', particularly when we know this rule, or this number, cannot be stated. This is the case when a rule is meta-specified as the rule which must be followed in order to validly state rules (and to attribute rule following).
Sunday, November 28, 2010
Saturday, November 13, 2010
Language
Imagine a hierarchy:
(1) Mechanical, natural.
At this level, we describe everything in terms of physical interactions. It is a world without semantic content.
There will appear to be only one way of describing the world mechanically, and we might think of this as 'reality'.
(2) Signalling
At this level, we divide the world into systems and sub-systems. These have formal characteristics which might be instantiated in a number of different physical ways. Each unit is attached to one or more other units by a 'channel' which has a 'bandwidth'. The bandwidth is the number of different signals the channel can carry. Channels are asynchronous - they carry signals in one direction only. For synchronous signalling, we need more than one channel (even if both might be implemented using the same physical substrate).
To be a channel, at least one possible signal must be 'useful'. By this, I mean something like: we can specify the conditions under which it would be sent, and what effect it has on its recipient.
The formal characteristics of the system are the rules that describe the behaviour of the units and the structure of the signals.
There will always be more than one way of dividing up level 1 in this way - it will be a matter of taste or convenience whether we describe some things in terms of the internal structure of individual units or in terms of sets of interacting sub-units.
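Level (2) can be given a toy rendering in code. Everything here is an invented illustration (the class, the signal names): units joined by one-way channels, each channel carrying one of a fixed set of distinct signals - its bandwidth. Note that nothing in this sketch has a meaning yet; that only arrives at level (3), when we interpret the signals.

```python
class Channel:
    """A one-way channel between two units, carrying a fixed signal set."""

    def __init__(self, signals):
        self.signals = set(signals)  # the signals this channel can carry
        self.queue = []              # signals sent but not yet received

    @property
    def bandwidth(self):
        # Bandwidth, in the sense above: the number of distinct signals.
        return len(self.signals)

    def send(self, signal):
        # One direction only: senders call send, recipients call receive.
        if signal not in self.signals:
            raise ValueError("channel cannot carry this signal")
        self.queue.append(signal)

    def receive(self):
        return self.queue.pop(0) if self.queue else None


# Two-way ('synchronous') signalling needs two channels, as the text notes,
# even if both run over the same physical substrate.
a_to_b = Channel({'FREEZING', 'THAWED'})
b_to_a = Channel({'ACK'})
a_to_b.send('FREEZING')
print(a_to_b.bandwidth, a_to_b.receive())  # 2 FREEZING
```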
(3) Semantic
At this level, we give the signals and the behaviours 'meanings'. These are always provisional (Kripke). Perhaps we might say that a meaning interpretation is a 'model' of the system, but perhaps this would be confusing.
In any case, this is the level at which we introduce intentional content. We can say 'Unit A has told Unit B that the temperature has reached 0 degrees', or 'Unit A has instructed Unit B to switch on the heater'.
(4) Validity attributing
A signal can be interpreted as an adjudication on the correctness or incorrectness of (a) a semantic interpretation or (b) its intentional content. Things get pretty wobbly at this stage. What is really going on here?
Particularly difficult issues arise if we interpret a signal as adjudicating on the correctness of our interpretation of it.
Is this a model people use? It looks like it to me. It can be partly 'unpacked' in some cases, and this contributes to the illusion.
But the whole thing is an 'interpretation' ...
Monday, November 08, 2010
Talking about semantic content
Does this make sense:
Giving semantic content is like giving a dictionary definition. If you already know how to speak the language, it gives you a way of extending your ability at the margins.
Its similarity to rendering formal syntax is misleading: of course we can write down some rules - this is not a useful model, though, because of the open question (OQ) arguments. We can only write down some rules if we can already write down something.
And dictionary definitions are only really useful if we also have some tacit 'hooks'. If we have only synonyms, then the only argument for replacing the synonym with the defined word will be convenience - we can say something more quickly, or in a less clumsy way.
We have to be able to show as well as tell.
We might, like the ordinary language philosophers and the linguists, articulate the way we talk. As we do this, we change the way we talk - words have their meanings 'tidied up', and so changed. Sometimes they are changed in more dramatic ways that we take time to articulate (as scientific terms change meaning - space, mass, distance).
Discovering how we talk, how we can talk, and what the world is like are not three different enterprises.
What's wrong with the picture theory of language is what's wrong with semantic content, and also what's wrong with the 'languages of thought'. This isn't to say that these models, these metaphors, have no empirical value - they may be important heuristics for certain kinds of studies. They just can't answer any interesting epistemological or ontological questions.
Sunday, November 07, 2010
Chomsky, Davidson, & learning ...
This may be a caricature.
Once I learn to ride a bicycle, I can ride it through many landscapes. Once I can sing in tune, I can sing many tunes. There may be rules about following maps or reading music, but not about riding bicycles or singing in tune.
We can analyse map-reading metalinguistically. We can generate isomorphs of the map, or of a musical score. These things require us to be able to talk about them, agree about them, in certain ways. We can't do the same for the 'methods' of singing in tune or bicycle riding, both of which are needed for music and bicycle navigation.
While there are some rules of grammar and syntax, and some rules of meaning there are also just some things that we find that we agree about in practice - that just don't give us enough trouble for us to reflect on them. When we try to reflect on them ('how does reference work?') we find ourselves in a Wittgensteinian predicament. Some aspects of these things seem too obvious to remark on and yet too obscure to explain.
A language may well need generative meaning rules, but this is not all it needs. There must also be things we do not yet know that we will agree about - things we might call 'discoveries' when we encounter them. Discovering that we agree about something may look like new knowledge about the world or about ourselves. In many cases, it may be a matter of taste which - it may come down to metaphysical prejudice.
We can do something like following rules that we cannot write down - we know this from the open question arguments. We also know that we learn to speak, and that we can learn to do things without being able to articulate the rules which govern them.
Where does this leave Davidson's argument for a constructive account of meaning? At least we know that if we tried to pursue his programme, the meanings of things we said about meaning would change ...
Saturday, November 06, 2010
Constructivist meaning
Suppose we had a set of Davidsonian meaning rules. Could the sentences constructed using these rules be arbitrarily long, or complex? We could not set a limit based on cognitive capacity, because this cannot be represented within the rules. How could we write down a limit that wasn't arbitrary?
Would we otherwise be committed to saying, for some 'long' sentences, that they had a meaning but it was not practical to work out what it was?
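The point about length limits can be made concrete with a sketch. Any compositional system with even one recursive rule generates sentences of every length; the toy grammar below is my own invention, not Davidson's formalism, but it shows why any cut-off would have to come from outside the rules themselves.

```python
def sentence(depth):
    """Build a sentence with `depth` nested attitude clauses.

    The rules never mention depth: nothing in them distinguishes
    depth = 2 from depth = 10**6, so no non-arbitrary limit can be
    read off the rules themselves."""
    if depth == 0:
        return "snow is white"
    return "John believes that " + sentence(depth - 1)


print(sentence(2))
# John believes that John believes that snow is white
```

On a constructivist account, every one of these sentences has a meaning - including those far too long for anyone to parse.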
We might say that a complex computer programme, represented in a high-level language, has a 'meaning' in this way. As soon as we begin to summarise or analyse the code - in terms of its functions, or into different functional 'blocks' - we are hostage to interpretive ambiguities.
Is this also true about a large piece of language? A book, or this blog?
Does the whole book have a 'meaning' which could be analysed in terms of a Davidsonian theory?
Wednesday, November 03, 2010
Interpretation
We can imagine a language without the tools to reflect on its own activity - one which functions, but which cannot be used to formulate validity rules or make intelligible attributions of truthfulness.
I think that there are commercial exchanges which take place in languages like this. We shouldn't insist on interpreting them in terms of their use of tokens we also use - they will certainly sound like nonsense if we do this. This is how we should understand Moore's paradox - as challenging the conventional meanings of the tokens it employs. Finding that a translation schema for an unknown language produced Moorean statements in critical contexts would invalidate the schema.
We can also imagine a language which does have the relevant capacities - a language in which it begins to be possible to do philosophy.
And we can imagine a language in which it is possible to speculate about how to interpret other languages - or other things as languages.
In our language, a comprehensive validation is one which shows that the falsehood of a statement is inconsistent with the possibility of using the language. A comprehensive validation of an interpretation, then, would be a demonstration that the falsehood of the interpretation was inconsistent with the possibility of using the language.
We know, from Kripke, that interpretations are always provisional. This doesn't mean that some specific interpretation cannot be ruled out, though. Can we imagine that someone who is quadding/adding is actually playing chess? And if we cannot, is this a logical or a cognitive limitation? Is such an interpretation incoherent or just impossibly complex to apply?
Formally, the second seems more likely, but it isn't easy to work out what this means.
In any case, we don't need to worry about that - the interpretive judgements we are making are those that can be made intelligibly within the game we are presently playing.
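Kripke's 'quus' deviation, which lies behind the quadding example, can be sketched directly. The threshold of 57 follows Kripke's own presentation: quus agrees with addition on every case a speaker has so far computed, and diverges only beyond the bound - so any finite history of use is consistent with both interpretations.

```python
def plus(x, y):
    """Ordinary addition."""
    return x + y

def quus(x, y):
    """Kripke's deviant rule: agrees with plus below 57, else returns 5."""
    return x + y if x < 57 and y < 57 else 5


# Every case in a finite history of use fits both rules equally well...
history = [(2, 3), (10, 20), (7, 7)]
print(all(plus(x, y) == quus(x, y) for x, y in history))  # True

# ...and yet the rules come apart on unencountered cases.
print(plus(68, 57), quus(68, 57))  # 125 5
```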
Is there a problem with giving an account of what it is that we are interpreting, before we interpret it? Imagine a space with a dimension for every freedom of the human body, which could be used to describe any position that we might adopt. A function in this space could describe a movement. What would the function, or functions, associated with 'he waved goodbye' look like? How would we (a) distinguish them from others and (b) recognise them from the mathematics? This is like Wittgenstein on smiles which are only a millimetre too wide ...
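The configuration-space picture can be sketched as follows. The dimension count and the 'wave' trajectory are invented for illustration: a pose is a point in a space with one dimension per bodily freedom, and a movement is a function from time to such points.

```python
import math

N_FREEDOMS = 3  # stand-in for the hundreds of real bodily degrees of freedom

def wave_goodbye(t):
    """A candidate trajectory: one 'joint' oscillates, the rest stay still.

    Nothing in this function announces its interpretation - by what
    mathematical mark would we distinguish it from hailing a taxi?"""
    return (math.sin(4 * t), 0.0, 0.0)


# Sampling the movement gives a path through the pose space.
samples = [wave_goodbye(t / 10) for t in range(5)]
print(len(samples[0]))  # 3
```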
We are good at finding the things in the world that we need to be able to recognise in order to be able to talk about it. We know this because we are able to talk about it. It is not a consequence of this that we must be able to say how this knowledge is acquired - contra Davidson. That something is possible is not the same as that it is explicable, or we would be able to explain the possibility of explanation, which is incoherent.