Sunday, December 05, 2010
If we wonder how we can talk, do we also wonder whether ... ?
I must be wondering in some other way - some way which requires interpretation. I must be in a potentially ambiguous intentional state, whereas if I say that I wonder what day it is I cannot - by accident, for instance - be wondering something else.
If I say 'I wonder whether it is possible to talk', you might be puzzled: I can't believe we can't talk without rendering my statement uninterpretable. If my statement means anything, then I can't be seriously wondering.
Sunday, November 28, 2010
Some other rules we can't follow ...
We can render a number as a rule by representing it as 'the number that ... ' etc. In order to be a proper example of a number we need to be able to do certain things with it - e.g. determine whether another (arbitrary) number is greater or smaller than it, determine the outcomes of using it in certain arithmetical contexts etc.
It seems likely that the paradoxical numbers pointed to by Cantor, Borel, Gödel and Chaitin aren't fully numerical in this sense. There are arithmetical operations we can't do on them. (Are they like 'incomplete concepts'?).
A number like this can't be fully represented in terms of intentionally tractable rules, and if we can only avoid Kripkean chaos by rendering rules in terms of intentional states - attributable to interlocutors in a (necessarily playable) language game - then these numbers begin to soften in the mist, as the idea of an 'intentionally intractable' rule begins to look analytically incoherent.
Maybe Gödel's paradoxes only arise when we assume that we can construct these rules. He has put together a set of rules that defines validity for the system within which they are (presumably) validly constructed. Maybe we can only imagine doing this if we can imagine a rule that is independent of an intentional state - that is prior to, rather than depending upon, the possibility of attributing such a state.
To be able to do computations based on Peano's axioms and the relevant logical transformations is to be able to unambiguously follow certain rules (i.e. is to be able to avoid Kripke's chaos). We cannot show that we have avoided ambiguity by applying some further rules which, mysteriously, avoid Kripke's problem ...
We have to be careful of specifying rules which we cannot fully state, and which cannot (therefore) be defined in terms of an attributable intentional state. We should be careful of saying 'there is some rule such that following it would have the consequence X'; and, of course, of saying 'there is some number such that ...', particularly when we know this rule, or this number cannot be stated. This is the case when a rule is meta-specified as the rule which must be followed in order to validly state rules (and to attribute rule following).
Saturday, November 13, 2010
Language
(1) Mechanical, natural.
At this level, we describe everything in terms of physical interactions. It is a world without semantic content.
There will appear to be only one way of describing the world mechanically, and we might think of this as 'reality'.
(2) Signalling
At this level, we divide the world into systems and sub-systems. These have formal characteristics which might be instantiated in a number of different physical ways. Each unit is attached to one or more other units by a 'channel' which has a 'bandwidth'. The bandwidth is the number of different signals the channel can carry. Channels carry signals in one direction only; for two-way signalling, we need more than one channel (even if both might be implemented using the same physical substrate).
To be a channel, at least one possible signal must be 'useful'. By this, I mean something like: we can specify the conditions under which it would be sent, and what effect it has on its recipient.
The formal characteristics of the system are the rules that describe the behaviour of the units and the structure of the signals.
There will always be more than one way of dividing up level 1 in this way - it will be a matter of taste or convenience whether we describe some things in terms of the internal structure of individual units or in terms of sets of interacting sub-units.
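The units-and-channels picture at level (2) can be given a minimal sketch. Everything here - the class names, the temperature signals, the heater response - is an illustrative invention, not drawn from the text; the point is only that bandwidth, one-directionality, and 'useful' signals can all be stated formally, without any semantic content yet:

```python
# A minimal sketch of level (2): units joined by one-directional channels.
# All names and signals here are illustrative inventions.

class Channel:
    """Carries one of a fixed repertoire of signals, in one direction only."""
    def __init__(self, signals):
        self.signals = set(signals)

    @property
    def bandwidth(self):
        # The 'bandwidth': how many distinct signals the channel can carry.
        return len(self.signals)

class Unit:
    """A sub-system that reacts to signals arriving on its channels."""
    def __init__(self, name, on_receive):
        self.name = name
        self.on_receive = on_receive  # the effect that makes a signal 'useful'
        self.state = None

    def receive(self, signal):
        self.state = self.on_receive(signal)

def send(channel, signal, recipient):
    # A signal outside the channel's repertoire simply cannot be carried.
    if signal not in channel.signals:
        raise ValueError("signal exceeds channel bandwidth")
    recipient.receive(signal)

# Two-way traffic would need a second channel, even over one 'substrate'.
a_to_b = Channel({"freeze", "thaw"})
b = Unit("B", on_receive=lambda s: "heater_on" if s == "freeze" else "heater_off")
send(a_to_b, "freeze", b)
```

Note that nothing in the sketch says what 'freeze' means - that only arrives at level (3), when we interpret the exchange as Unit A telling Unit B about the temperature.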
(3) Semantic
At this level, we give the signals and the behaviours 'meanings'. These are always provisional (Kripke). Perhaps we might say that a meaning-interpretation is a 'model' of the system, but perhaps this would be confusing.
In any case, this is the level at which we introduce intentional content. We can say 'Unit A has told Unit B that the temperature has reached 0 degrees', or 'Unit A has instructed Unit B to switch on the heater'.
(4) Validity attributing
A signal can be interpreted as an adjudication on the correctness or incorrectness of (a) a semantic interpretation or (b) its intentional content. Things get pretty wobbly at this stage. What is really going on here?
Particularly difficult issues arise if we interpret a signal as adjudicating on the correctness of our interpretation of it.
Is this a model people use? It looks like it to me. It can be partly 'unpacked' in some cases, and this contributes to the illusion.
But the whole thing is an 'interpretation' ...
Monday, November 08, 2010
Talking about semantic content
Giving semantic content is like giving a dictionary definition. If you already know how to speak the language, it gives you a way of extending your ability at the margins.
Its similarity to rendering formal syntax is misleading: of course we can write down some rules - this is not a useful model, though, because of the OQ arguments. We can only write down some rules if we can already write down something.
And dictionary definitions are only really useful if we also have some tacit 'hooks'. If we have only synonyms, then the only argument for replacing the synonym with the defined word will be convenience - we can say something more quickly, or in a less clumsy way.
We have to be able to show as well as tell.
We might, like the ordinary language philosophers and the linguists, articulate the way we talk. As we do this, we change the way we talk - words have their meanings 'tidied up', and so changed. Sometimes they are changed in more dramatic ways, which take time to articulate (as scientific terms change meaning - space, mass, distance).
Discovering how we talk, how we can talk, and what the world is like are not three different enterprises.
What's wrong with the picture theory of language is what's wrong with semantic content, and also what's wrong with 'languages of thought'. This isn't to say that these models, these metaphors, have no empirical value - they may be important heuristics for certain kinds of studies. They just can't answer any interesting epistemological or ontological questions.
Sunday, November 07, 2010
Chomsky, Davidson, & learning ...
Once I learn to ride a bicycle, I can ride it through many landscapes. Once I can sing in tune, I can sing many tunes. There may be rules about following maps or reading music, but not about riding bicycles or singing in tune.
We can analyse map-reading metalinguistically. We can generate isomorphs of the map, or of a musical score. These things require us to be able to talk about them, agree about them, in certain ways. We can't do the same for the 'methods' of singing in tune or bicycle riding, both of which are needed for music and bicycle navigation.
While there are some rules of grammar and syntax, and some rules of meaning there are also just some things that we find that we agree about in practice - that just don't give us enough trouble for us to reflect on them. When we try to reflect on them ('how does reference work?') we find ourselves in a Wittgensteinian predicament. Some aspects of these things seem too obvious to remark on and yet too obscure to explain.
A language may well need generative meaning rules, but this is not all it needs. There must also be things we do not yet know that we will agree about - things we might call 'discoveries' when we encounter them. Discovering that we agree about something may look like new knowledge about the world or about ourselves. In many cases, it may be a matter of taste which - it may come down to metaphysical prejudice.
We can do something like following rules that we can not write down - we know this from the open question arguments. We also know that we learn to speak, and that we can learn to do things without being able to articulate the rules which govern them.
Where does this leave Davidson's argument for a constructive account of meaning? At least we know that if we tried to pursue his programme, the meanings of things we said about meaning would change ...
Saturday, November 06, 2010
Constructivist meaning
Would we otherwise be committed to saying, for some 'long' sentences, that they had a meaning but it was not practical to work out what it was?
We might say that a complex computer programme, represented in a high-level language, has a 'meaning' in this way. As soon as we begin to summarise or analyse the code - in terms of its functions, or into different functional 'blocks' - we are hostage to interpretive ambiguities.
Is this also true about a large piece of language? A book, or this blog?
Does the whole book have a 'meaning' which could be analysed in terms of a Davidsonian theory?
Wednesday, November 03, 2010
Interpretation
I think that there are commercial exchanges which take place in languages like this. We shouldn't insist on interpreting them in terms of their use of tokens we also use - they will certainly sound like nonsense if we do. This is how we should understand Moore's paradox - as challenging the conventional meanings of the tokens it employs. Finding that a translation schema for an unknown language produced Moorean statements in critical contexts would invalidate the schema.
We can also imagine a language which does have the relevant capacities - a language in which it begins to be possible to do philosophy.
And we can imagine a language in which it is possible to speculate about how to interpret other languages - or other things as languages.
In our language, a comprehensive validation is one which shows that the falsehood of a statement is inconsistent with the possibility of using the language. A comprehensive validation of an interpretation, then, would be a demonstration that the falsehood of the interpretation was inconsistent with the possibility of using the language.
We know, from Kripke, that interpretations are always provisional. This doesn't mean that some specific interpretation cannot be ruled out, though. Can we imagine that someone who is quad/adding is actually playing chess? And if we cannot, is this a logical or a cognitive limitation? Is such an interpretation incoherent or just impossibly complex to apply?
Formally, the second seems more likely, but it isn't easy to work out what this means.
In any case, we don't need to worry about that - the interpretive judgements we are making are those that can be made intelligibly within the game we are presently playing.
Is there a problem with giving an account of what it is that we are interpreting, before we interpret it? Imagine a space with a dimension for every degree of freedom of the human body, which could be used to describe any position that we might adopt. A function in this space could describe a movement. What would the function, or functions, associated with 'he waved goodbye' look like? How would we (a) distinguish them from others and (b) recognise them from the mathematics? This is like Wittgenstein on smiles which are only a millimetre too wide ...
We are good at finding the things in the world that we need to be able to recognise in order to be able to talk about it. We know this because we are able to talk about it. It is not a consequence of this that we must be able to say how this knowledge is acquired - contra Davidson. That something is possible is not the same as that it is explicable, or we would be able to explain the possibility of explanation, which is incoherent.
Friday, October 29, 2010
Models
Thursday, October 28, 2010
Talking about talking
(We can't, of course, translate - or, therefore, imagine - a language which did have a truth predicate, but whose truth predicate worked in a significantly different way from 'ours'.)
Some language-like games might not be able to have a truth predicate. The introduction of a truth predicate (which would have to, again, be isomorphic with 'our' truth predicate) would render them unintelligible. Bargaining games are like this - when a salesman swears that everything in the brochure is true, we know he cannot be using 'is true' in the same way as, say, a logician or a natural scientist. On the other hand, the bargaining game would be unplayable if it was complemented by a logical or natural-science-style truth predicate, because this game depends upon a certain amount of conventional dishonesty. The extent of this dishonesty is explored by the participants in the game, but not explicitly - it is explored practically, by finding out what moves 'work' and what moves don't. Salesmen do not make good philosophers, but this does not render the bargaining game intrinsically corrupt.
What would render it corrupt would be dishonest practice on the part of any players. I'm going to give a definition of this here which I'm not sure I can fully support, but which I think is roughly right:
Dishonest practice is practice which exploits the expectations of other players in a way which sacrifices the playability of the game to the instrumental aims of the practitioner.
Since the bargaining game cannot include a truth predicate, and since it is hard to construct a normatively self-reflective game without a truth predicate, it is possible that the bargaining game cannot be used to explicitly adjudicate on honest practice. To test any practice against the definition, we would need to make a judgement about the objects and consequences of moves within the game - and particularly about whether a move rendered the game unplayable. This last, I think, would definitely require self-conscious reflection on the possibility of truth-telling.
This raises a tricky moral issue: Can we, from outside, using a language with the capacity to articulate truth related issues, adjudicate on the language of the bargainers, which does not? We would be presuming to be able to translate this language, while knowing that we could not discuss the quality of this translation with the bargainers - we could not ask them whether the translation was correct.
We could, of course, learn to play the game. If we had an appropriate facility with it, we could then reflect on this facility using our native meta-language. This looks tricky: the character of our engagement would be different just because we had access to the meta-language. A physicist discussing mass with a weights and measures inspector does not participate in the weights-and-measures game in the same way as another inspector would.
This may be harmless. We certainly wouldn't want to say that the physicist and the weights and measures inspector didn't understand one another - we wouldn't want to say that they weren't able to play this game together. We would, however, find that their game became more difficult if it developed in a certain self-reflective way. There would be some truths about mass that they could only share by changing the game.
Wednesday, October 27, 2010
Natural Laws
What kind of law can this law be?
We can only talk about a predictable nature - anything else would be incomprehensible. Natural laws are the grammar of this talk.
Sunday, October 24, 2010
Conviction, Certainty and Argument
We might argue that, over time, evolution would tend to weed out ignorance and misconception. There isn't any evidence for this beyond our own, present, existence. This existence, though, is the outcome of just one of many rolls of the evolutionary dice; and we know already that it is very probably temporary. Attributing intentional states to past 'evolutionary successes' in support of a phenomenological evolutionary epistemology would be circular and gratuitous.
It is absurd to say that our whole language game is reliable because it has 'survived' an evolutionary process which can only be given an intelligible account of if the game is reliable ...
Saturday, October 16, 2010
Martian intentionality
Friday, October 15, 2010
Goodman, chess machines, and physical standards
Chess playing computers
This is also true of people: we might believe that people act rationally, but we cannot define rationality in terms of the behaviour of any individual or group - however apparently exemplary.
This extends to their internal 'behaviour' - no neurological account of brain processes can capture the normative aspects of rationality, just as no actual 'or' gate could count as the effective standard for the logical operator.
Under certain circumstances, we might have to choose between deciding that the standard metre had grown or that our prior measurements all needed to be proportionally corrected. If we standardise the meaning of 'or' on the behaviour of an 'or' gate, the meaning of 'chess' on the behaviour of a computer, or the meaning of 'rationality' on either a neurological process or the behaviour of an individual or group, we do not know what adjustment this might lead us to have to make - we do not know what changes in meaning we might have to accommodate.
The standard metre can only change in one way, resulting in a uniform scalar adjustment to length. The standard 'or' gate could change the meaning of 'or' in a way which rendered the world unintelligible.
The standard chess computer might redefine chess in terms of any option available to it - whether or not it was functioning correctly.
Monday, October 11, 2010
Rules and Laws
Language requires attribution of intentional states, and is 'a priori' possible or we wouldn't be able to say so.
Certain irreducible intentional states - 'thoughts' (Frege?) - cannot be expressed unless it is possible to attribute generic properties. Predication requires universals.
To know how to talk is, among other things, to know how to attribute these generic properties.
Also - 'X knows how to talk' attributes some specific generic properties, including the ability to follow rules, to X.
To be able to attribute universal properties is part of being able to talk, it is part of being able to follow the rules of talk.
Asking whether the world 'really' contains universals (including whether it 'really' follows general laws); or asking whether certain people 'really' follow rules is like asking whether it is possible to talk about the world, or whether these people do, in fact, talk. It's a Moorean error - a question which can only have one answer if it is to make any sense at all.
To talk about a world is to talk about a world of this kind.
Wednesday, September 15, 2010
Rules and Intentionality
I think it would be a mistake to take this (as Millikan and Wright do) as a cue for producing a better account of intentionality, if by 'account' we mean 'reductive account'. For reasons that I've probably laboured elsewhere in this blog (more open questions ...), I think any reductive account is almost certain to generate a version of Kripke's paradox.
An account of intentionality in terms of physical behaviour (e.g. of a machine) is obviously ruled out by this kind of objection, since questions about the reliability of the machine would always make sense - and these questions would be questions about whether the machine appropriately fulfilled its function; whether it appropriately 'modelled' the intentional state it was meant to underpin.
It is difficult to think of how a reductive account could be given that could avoid this machine model objection.
In concert with accounts of truth and meaning (to which intentionality is obviously related), a recursive account can, however, be given. Since we have to be able to attribute intentional states in order to be able to speak, and since we can speak, we can start with the intentional states we must be able to attribute in order to be able to have this conversation (we don't need a complete list - just an articulable selection).
Following from this, we can give an account of rule following in terms of these intentional states, or from others that depend upon them. To follow a rule is to act in accordance with an intentional attribution.
Friday, September 03, 2010
Intentionality and rules
Who will adjudicate? I and my interlocutors, playing the shared language game within which the statement of my intentional state is held to be true. The intelligibility, the playability, of these adjudications will be one part of what makes the whole game possible; and if they become idiosyncratically unintelligible then so does the original statement of my intentional state - we will find that we did not know, after all, what we meant by attributing this state to me.
This process of adjudication and exploration is what the history of mathematics reveals to us about addition. Someone who has been quadding so far, but thinks they were adding, is not a competent participant in this conversation.
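Kripke's quadding case can be made concrete. The sketch below uses the threshold of 57 from Kripke's own presentation of 'quus'; the rest of the detail (the particular history of sums) is invented for illustration:

```python
# Kripke's 'quus' (quaddition): a function that agrees with addition on
# every case computed so far, but diverges beyond a threshold never yet
# reached. The threshold 57 is Kripke's; the sample history is invented.

THRESHOLD = 57

def plus(x, y):
    return x + y

def quus(x, y):
    return x + y if x < THRESHOLD and y < THRESHOLD else 5

# Every computation in a finite history is compatible with both rules ...
history = [(2, 2), (10, 30), (7, 49)]
assert all(plus(x, y) == quus(x, y) for x, y in history)

# ... so no finite record of behaviour settles which rule was being
# followed. Only the next move beyond the threshold separates them.
print(plus(68, 57), quus(68, 57))   # 125 versus 5
```

The behavioural record alone underdetermines the rule; on the account above, what rules quaddition out is the attribution of intentional states within the shared conversation, not any fact in the history of sums.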
Saturday, August 28, 2010
Language Machines
I can say of someone 'he believes it is impossible to talk', but I cannot say to you 'you believe it is impossible to talk'. And 'I believe it is impossible to talk' directly undermines its own meaning, so must be false if it means anything. My attribution of this thought to someone is always corrigible, however.
We cannot attribute thoughts without attributing rule-following. (I think this is what Frege believed). And there are other reasons why rule following and intentionality go hand in hand. One is (again) to do with Kripke's paradox - we cannot define rule following in terms of behavioural descriptions which do not, themselves, refer to rules. This can be extended: we cannot define a rule in terms of the behaviour of any actual mechanical system. We cannot define 'or' in terms of the behaviour of some specific 'or' gate (a physical logical standard equivalent to the standard metre) without having some guarantee that the gate would never fail - and this guarantee obviously couldn't depend on the behaviour of some further physical system. What would count as failure is an irreducibly normative, and not simply a mechanical, issue.
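The 'or'-gate point can be put in code. In the sketch below the fault model is an invention for illustration; what matters is the direction of fit it exposes: the gate is judged against the truth-table, and 'failure' is only definable that way round:

```python
# A sketch of why no physical 'or' gate can serve as the standard for 'or'.
# The fault model here is invented for illustration.

import random

# The normative standard: the truth-table for 'or'.
OR_TABLE = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}

def physical_or_gate(a, b, fault_rate=0.0, rng=random):
    """A model of an actual gate: usually matches the table, but may fail."""
    if rng.random() < fault_rate:
        return 1 - OR_TABLE[(a, b)]   # a flipped output - a hardware fault
    return OR_TABLE[(a, b)]

def gate_has_failed(a, b, output):
    # 'Failure' is only definable against the table: we judge the gate by
    # the rule; we cannot define the rule by the gate.
    return output != OR_TABLE[(a, b)]
```

If the gate itself were the standard, `gate_has_failed` could never return true - whatever the hardware did would, by definition, be 'or'. That a fault is even conceivable shows the norm is not the mechanism.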
To someone of a certain physicalist bent, this will sound wrong: after all, are we not, ourselves, physical systems? How can we speak to one another if our speaking to one another requires the attribution of intentionality and rule following, while agreeing that rule following cannot be rendered physically?
This would be a reasonable question if it could be posed from outside our linguistic system (independently of all questioning and answering), but no question can be posed from that standpoint. The most abstract, self-referential questions are still moves within the game. And if we cannot play the game without following rules then we must be able to follow rules - regardless of the clash with physicalist intuitions.
A more reductive, rather than reductio, response to the physicalist is to point out that physicalism can only be given substance by attributing rule-based behaviour to 'nature', and this cannot be an outcome of grounded discovery. When we state a physical law we say that some things 'always' or 'everywhere' (given appropriate context) behave in some way (and, of course, the scope of any hypothetical law is, itself, stated unconditionally: 'this only happens here' is not local even if 'this' is). Counterexamples, infamously, can be dealt with semantically as well as hypothetically. We discover and define as we go along - we find out a property, and then use it as a test (as with the boiling point of a liquid). We render the world intelligible by showing how it can be described - by discovering how we can talk about it, and, therefore, what thoughts about it we can attribute to interlocutors. (I might, as usual, add 'honest and competent' to 'interlocutor', except that these adjectives also directly determine the applicability of the descriptor - what would a dishonest and incompetent interlocutor look like?)
In other words, we can only construct the physicalist metaphor if we can talk to one another, and we can only do it by rendering the world in a way which appears, already, to contain semantic elements. This does not make it a bad metaphor, but it does make it an irreducible one - it can only ever be a heuristic, not an epistemological or metaphysical fundamental.
'We can talk about the world' is a fact about the world, but it cannot be given an 'account' of in terms of some other 'facts' that do not, already, depend upon it.
It is clear, in one sense, from all of this, that we can think of things that we cannot make. We cannot make a machine sufficiently reliable to be a standard for 'or', for instance; though we can know how 'or' works and, therefore, how such a machine should work if it could be constructed. Our best machines of this kind (the ones we can almost completely rely on) depend on aspects of our world that are most semantically secure. We would hardly know what physics was if we couldn't rely on the mechanisms from which our computers are built - and by this, I mean that we would regard someone who questioned these mechanisms as asking unintelligible questions. To have them break down would be like having an intelligent friend suddenly begin to talk nonsense. We could have no conversation with this friend within which we could explore the nature of the nonsense. To find some radical error in physics would be like finding that everyone had been talking nonsense - a discovery that could not be articulated, because there would be no language to articulate it with; a discovery that could not be scientifically demonstrated because the tools of demonstration themselves could not be relied on ...
In a way, our science says: 'If the world were a perfect machine which followed these (specified) rules, then it would behave in this way'. But the rules of this machine are explicit rules - rules which can be articulated. We cannot appeal, for their 'reliability', to some more metaphysically fundamental machine, because the rules specifying this machine could not be written down in any language we could translate (and so in anything that we could recognise as a language at all). We can't make sense of the question 'what rules must I be following, in order to be able to follow a rule?', nor of the question 'what rules does the world follow so that we are able to securely describe it as following rules?'
Thursday, August 12, 2010
Semantic games
The 'pieces' in Wittgenstein's game - the words, sentences etc. - are caricatured as 'tokens', whose 'meaning' is given by their role in the game. This is fine, so long as we realise that this is a metaphor, and that it can't be pushed too far.
If we're playing chess, it's easy to see how we might swap certain pieces - use the piece normally used as a knight as a bishop, and vice versa - and still be playing the same game (or one which is only trivially different). This is because we can usually identify the pieces separately from their role in the game. Someone completely unfamiliar with the rules of chess could learn the correct names for the pieces and be able to use them in our conversation (although in a specifically limited way). They could pick the right pieces from the box in response to appropriately worded requests, or they could make a list of 'all the chess pieces', or they could weigh them and come up with the same results as someone who could play chess etc.
This is not the case with language in general, even if it seems to be the case for some restricted games like chess (but see below).
We might think of a parenthesised or quoted word as a token - "word" or "knight" - for instance, and imagine a certain way of identifying it that doesn't take account of its 'meaning' (role in the game). This is necessary, for example, if we are to programme a computer to process strings which encode linguistic moves. My computer does not 'understand' what I am typing, but it must be able to reliably distinguish some strings of characters from others in order to function as an appropriate channel for our communication. The routines developed to achieve this can also be enhanced to do spell checking and even some translation.
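This purely token-level discrimination - distinguishing strings without any grip on their role in the game - can be sketched. Both routines below are illustrative inventions; the second is the standard Levenshtein distance, the sort of routine a spell checker is built on:

```python
# Purely syntactic discrimination of strings: the machine distinguishes
# tokens without consulting their 'role in the game'. Illustrative only.

def tokens_match(a: str, b: str) -> bool:
    # Character-by-character identity - no meaning involved.
    return a == b

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance: the kind of routine spell checkers enhance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,            # deletion
                            curr[j - 1] + 1,        # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

assert tokens_match('"knight"', '"knight"')
assert edit_distance("knight", "night") == 1
```

Nothing in either routine knows that 'knight' names a chess piece; the reliability of the channel is entirely a matter of string identity and proximity.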
Notoriously, we can also, to the point of practical incoherence, swap these pieces between roles without altering the 'meaning' of what we are saying.
What we cannot do is give an account of language use completely in terms of tokens and rules - even if we enhance this ('syntactical') account with a mechanical account of how certain tokens are related (in a functional, non-ambiguous way) to certain features of 'reality'. This is because (OQA) this account itself depends upon the correct 'intepretation' of the tokens which are used to encode it.
While we might demonstrate that some games (e.g. versions of chess in which token allocations are different) are 'isomorphic' in some important way, we can only give an account of this isomorphism, itself, within a playable language game. The syntactic/mechanical story says, roughly, that our whole language (and all intertranslatable with it) is isomorphic with some general set of syntactical rules and semantic/mechanical allocations and that it can be used to state these rules and define these allocations.
The 'game' is not like this - it is irreducibly semantic. If the rules of chess only defined playable moves in terms of the names of the pieces (without descriptions), the actual bits of hardware could look like anything at all, so long as they could be matched to the names - and the 'meanings' of the names would be given entirely in terms of the rules of play. In this circumstance, a non-player could not name the actual pieces - pieces in a box, and not on a board (in play), would not even have names, because they could be used to fill whatever 'logical' role we liked. And if a possible move in chess was to (re-)define how a piece could move ...
Irony and Gettier
These depend upon an 'irony' in the sense that the audience for the counter-examples knows something that the subject in the counter-examples does not. Irony cannot survive audience participation, however - as demonstrated in pantomime.
The justification in a Gettier example cannot work in the context of a shared conversation between the subject and the audience - if I know that Farmer Franco has mistaken a large piece of black and white cardboard for his cow Daisy, then I cannot accept his justification that he has seen Daisy and so knows that Daisy is in the field. I can only accept that he believes it, and that his belief is true.
In any ('honest and competent') conversation with Franco, either what he saw, or what I saw, or (perhaps) what counts as justification, would have to be in play.
This is an exact parallel with the solution to Kripke's paradox, because it depends upon attribution of intentional states within a shared conversation.
Short Cruise
Orkney 2010
Everything worked well, but still looks pretty dreadful. A sailor from a nice Norwegian yacht in Wick suggested paint ... with the best intentions.
Monday, July 19, 2010
Rational attributions of intention
The possibility of the evidential narrative, itself, depends (of course) on some attribution of intention (to its narrator and to its audience). Maybe we can set this aside for the moment - allowing that some agreed behavioural/evidential narrative is possible, without enquiring too much.
From this narrative, we can rule out certain possibilities (e.g. subtraction). Do we do this on the basis of 'facts', or only on the basis of some buried intentional attribution in the narrative?
If I say 'he wrote down this: "2+2=4"', I don't see how I can discard the meanings (e.g. by giving some description of the lines forming the characters) without also losing the ability to rule out subtraction.
It might be the case that we can't describe behaviour without introducing (and so ruling out) some intentional content.
Although: a computer programme can look exactly like this. It may describe, in a complex way, how pixel elements on a screen can be made to change their character without saying 'this is how to display the letter "F"'. A slight peculiarity here is that the non-intentional explanation of what is happening is frighteningly complex - more complex than a human being could understand, if 'understand' means recognising what the programme is doing (printing "F").
In real programmes, we find a hierarchy of machines. At one very low level, there are machines for handling the pixels and changing their status. At a higher level, there are machines for assembling instructions to these low level machines which 'draw lines', or 'print characters'. Other machines help with re-directing these instructions so that we get 'print the character held in x at the present cursor position' - at which stage we are clearly dealing with instructions which have comforting intentional content. We may decide ourselves whether the linguistic structure here is just a convenient mnemonic, or whether we are succeeding in producing intentional behaviour on the part of the machine. This is a normative issue.
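The hierarchy described above can be sketched as a toy program (all names and the screen layout here are invented for illustration; no real graphics system is assumed):

```python
# Toy sketch of the hierarchy of machines described above.
SCREEN = [[' '] * 8 for _ in range(8)]  # the 'hardware': a grid of pixels

def set_pixel(x, y):
    """Lowest level: a machine that changes one pixel's status."""
    SCREEN[y][x] = '#'

def draw_vline(x, y0, y1):
    """Middle level: assembles low-level instructions into 'draw a line'."""
    for y in range(y0, y1 + 1):
        set_pixel(x, y)

def draw_hline(x0, x1, y):
    for x in range(x0, x1 + 1):
        set_pixel(x, y)

def print_char_F(x, y):
    """Higher level: an instruction with comforting intentional content -
    'print the character F' - built entirely from non-intentional steps."""
    draw_vline(x, y, y + 4)
    draw_hline(x, x + 2, y)
    draw_hline(x, x + 1, y + 2)

print_char_F(1, 1)
for row in SCREEN:
    print(''.join(row))
```

Nothing in `set_pixel` 'knows' about the letter F; whether `print_char_F` is a mnemonic or a genuinely intentional description is exactly the normative question.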
We can make a machine which 'appears' to talk to us, and we think we've achieved some insight into 'how talk works', but all we have done is make a machine behave in a way which invites an intentional interpretation by, among other things, appearing to talk and, particularly, appearing to talk in a way which rules out certain specific intentional interpretations.
Sunday, July 18, 2010
Machine people
I suppose, at least, that we shouldn't worry about them so far as 'practical' (machine manageable) arithmetic is concerned.
This machine world doesn't render up rules. It can't define them.
But a 'describable' or 'completely narrated' machine world's rules would be contained in its narration - we could only describe it in this abstract way. Even a single determinate fact has a hidden rule - that a description of it is always true. If we can't narrate a world without some rules being true, then these rules might as well be in the world we are narrating.
But 'might as well be' ... this is metaphysics. It is 'as if' the world contains these rules. We can talk 'as if' the world contains these rules. Even: we could not talk except to talk as if the world contained these rules, except here the 'cannot' is a product of an argument within our talk, and is perhaps circular.
Wednesday, July 07, 2010
Talking machines (revisited)
It couldn't be the coding for all human conversation to date, however, because it wouldn't mean anything if it did.
If we say 'the whole of human conversation to date is a string produced by a machine', we would have to wonder what language we were saying this in. If we are including it within its own scope - as part of human conversation to date - then either (a) it is false or (b) it doesn't mean anything because it's just a string of code. If we are not including it within its own scope, we might wonder how it, uniquely, could mean something if everything that looked as though it has meant something up to that point was just a string of code. A language game cannot comprise a single sentence standing on its own.
Should we think of ourselves as being the ('deluded') components of some greater machine which we cannot 'comprehend'? This could only ever be a metaphor - we could not describe the machine we were 'part of' well enough to call it a machine. Those elements of its workings we did not understand we could not distinguish from random, or at least radically unpredictable. And there would always be elements we did not understand because we are part of the machine, and we can't model ourselves. The machine cannot produce a string which encodes a complete description of itself (including its capacity to produce this string).
Who would this description be for? What is the language within which this 'representation' would work? (i.e. would be a legitimate move...)
A machine can't have a private language either.
The halting problem is also in here somewhere ...
Suppose that we imagine the machine to be following some rule - we see that the strings it produces, when interpreted in a certain way, do not breach the no contradictions rule. This wouldn't allow us to hand our adjudications on the rule over to the machine - it might not always behave correctly; it might break down. The machine can follow the rule, but it can't define it - and neither can any other machine.
(Maybe this is 'why' the halting problem arises ...)
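The classical diagonal argument behind the halting problem can be sketched quickly - a toy illustration of why no machine can serve as the adjudicator (the function names are invented for this sketch):

```python
# Suppose (for contradiction) some function halts(f, x) correctly
# reports whether f(x) would halt. The program below defeats it.

def make_spoiler(halts):
    def spoiler(f):
        # Do the opposite of whatever the supposed adjudicator predicts.
        if halts(f, f):
            while True:       # predicted to halt -> loop forever
                pass
        return 'halted'       # predicted to loop -> halt immediately
    return spoiler

# A toy, obviously incorrect 'halts' that always answers False:
naive_halts = lambda f, x: False
spoiler = make_spoiler(naive_halts)

# naive_halts predicts that spoiler(spoiler) loops, but it halts - and
# any candidate 'halts', however clever, is wrong about its own spoiler.
print(spoiler(spoiler))  # prints 'halted'
```

However the adjudicating machine is built, its own spoiler exposes it: the machine can follow rules, but cannot be the final arbiter of them.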
What do we do when we say 'this is right' or 'this is wrong'? We state a rule that we are going to follow - we specify a part of the theory of truth we are using. Our justifications of these statements are always 'incomplete', in the sense that they cannot look outside our language, in which they are expressed. Our ability to agree empirical hinges is only different from our ability to agree logical or mathematical hinges in so far as it is more 'mysterious' to us (epistemologically). They are all rules which, if broken, render the conversation impossible - including, in extremis, any conversation in which we might be able to say why this conversation had become impossible.
Saturday, July 03, 2010
Perfect Machines
This is just the 'naturalistic fallacy', slightly disguised ...
Another open question argument? But not quite as boring as that.
If I am a machine then 'or' cannot be defined in terms of anything I do either. What sensible conclusions can we draw from this?
There is the 'private language' issue - we know that 'or' can't be defined in terms of something internal - something I 'think'. But isn't a collection of machines also a machine?
This looks unavoidable, but what is the function of this 'greater' machine? We don't know. What would a 'function' of this kind look like? Function for what? To whose purpose?
We might say: a functionless collection of facts may still have some order - it may do something, but not something useful. What order would it have? Only the order of some facts. An ordered group of facts is just another fact. We might make a mistake about the order, but this would not render the group of facts 'malfunctioning'. How would we discover this order? Just as we discover the ordering of facts which allows us to build our machines. The Universe doesn't get things 'right' or 'wrong'. The great collection of machines can't get things 'right' or 'wrong' either. It is just the way it is.
But we get things right and wrong. If we didn't, we couldn't talk to one another. Another dull chant?
Thursday, July 01, 2010
Meta-epistemology
The contrary of an epistemological theory needs to be unintelligible tout court.
If it isn't, then the theory is hostage to any hypothesis on which its contrary's unintelligibility depends. More open questions ...
Saturday, June 05, 2010
Tacit Rules
This metaphor is misleading, though. Is it a tacit rule of chess that the normal rules stay the same at least until next Wednesday? Or next Thursday? We can ask an indefinite number of questions to which 'rules' like this might be answers.
The fact that 'normal' interlocutors know most of the answers to these questions (when asked) is like the fact that they give commensurate answers to questions about their immediate perceptions. They just know how to talk - or so we would say in any shared game with them.
Tuesday, June 01, 2010
Rationality
We are sometimes confused by the fact that false statements seem to be intelligible. What is not intelligible, however, is to agree that a statement is false but to 'assert' it - to play it as though it were a legitimate move in the conversation.
Exploring rationality is, however, exploring this kind of intelligibility. A theory of rationality asks what we can intelligibly say about intelligibility. This threatens to generate open question problems. We can avoid this by giving a recursive account - by pointing to examples of intelligible moves and showing how the intelligibility of others can be illustrated or demonstrated from these.
It is different from other kinds of account of alethial or epistemological processes, though - for all of these, we can say (of any metaphysical, physical, or formal theory 'X') either 'X is true' or 'We can talk as though X is true'. These are more or less equivalent, and the second is not inconsistent with 'We can talk as though X is false'. Where X includes an essential element in any account of intelligibility, however, we cannot talk as though X is false. Also 'We can talk as though we can talk' is tautologous, and is not a simile.
We think that we have two things - rationality, and how to talk. This is partly because we believe we have a pre-linguistic, or sub-linguistic rational process that we have phenomenological access to. However, we can only bring these processes into the game by giving an intelligible account of them - by showing that they can be represented as rational processes. In this context, sensory phenomena are no different from any others.
To be able to speak is to be able to speak intelligibly - to be able to tell the truth (whether or not we exercise this ability). It is always unintelligible to 'seriously' claim otherwise. It is not possible to know what such a claim might mean, since to take it seriously is also to render it unintelligible.
The world, one way or another, must be the kind of place in which intelligible conversations can take place. While we might imagine some other world, we cannot imagine living there.
To say that the world is like this is to make a fairly complex empirical claim, since the possibility of intelligibility implies the possibility that a fairly complex game can be played - the one we actually play when we talk, with all its certainties and uncertainties, clarities and ambiguities, concrete claims and abstract structures. It also must allow - perhaps require - that we play this game 'from the inside'. We must respect the constraints imposed by open question considerations, and much of what we do when we explore what can intelligibly be said will feel like empirical discovery.
Wednesday, April 28, 2010
Language and thought
In the case of behavioural attribution, we always have Kripkean ambiguities. If someone explicitly claims to believe something, we can only doubt whether they do by doubting whether they are an honest and competent interlocutor. This doubt, if it is radical, cannot intelligibly be expressed in any conversation with them, since it is exactly whether we can have a conversation with them that we doubt.
A belief attribution can be incoherent for different reasons. The belief might be incredible, in the context of the behavioural evidence. It might be incoherent in the conversational context - e.g. a self-attribution of the kind instantiated by Moore's paradox. If someone persistently appears to claim to have a belief which we cannot intelligibly attribute to them, our conversation with them breaks down - we no longer know what they are talking about.
I think there are beliefs that cannot be attributed to anyone because to have the mental equipment to have the belief would also make the belief incoherent. A belief that it isn't possible to talk might be like this. Someone who had this belief would have to (a) understand what people thought they were doing when they spoke to one another and (b) believe that they were failing.
It would only be possible to meet criterion (a) if one knew how to talk. We might think this could be like knowing how to cast a horoscope, without believing the outcomes. But to know how to talk is to know, very generally (with a certain acceptable error rate), what is true and false - what it is appropriate and inappropriate to say. (b) cannot be an appropriate thing to say in any playable language game, so a person who knew how to talk (in the relevant context) would also assert that (b) was false. Someone who believed it wasn't possible to talk, but knew how to 'mimic' talk in this way, would sound as though they believed it was possible to talk - they would sound just like everyone else.
We might say 'but they would be lying'. This claim is not ruled out, but finding grounds for it is complicated by the fact that it depends upon certain interpretations which have an irreducibly normative element to them. It is not simply a 'matter of fact' whether someone is an honest interlocutor or not, particularly where the evidence of their dishonesty is not acknowledged in any conversation we have with them.
Tuesday, April 27, 2010
Chinese beetle consciousness ...
Suppose you are in a prison, in solitary confinement. You are entirely restricted to a cell with no access to the outside world except for a window you can look out of, and a small hole through which you can send and receive messages to and from other prisoners.
No one lives outside the prison.
Suppose you look out your window and see a tree, and send a message to other prisoners asking them what they see outside their windows. Some say 'a tree' and some say 'nothing'.
You might imagine that some of the prisoners look out the same side of the prison as you do - where the tree stands - and others look out the other side - where there is no tree. But you might also imagine many other possibilities consistent with the answers that you have received.
You might ask further questions to choose between these possibilities.
What if you ask other prisoners where they live, and they all answer 'a cell'? What would this mean? Is 'a cell' the same kind of place that you live, or is it just a name for the place that a prisoner lives?
What questions could you ask that would help you to answer these questions? What if different prisoners' cells were different in some respects, but were still called 'cells'? How different could they be?
What if there were no prisoners, but only computers, scanners, and printers on the other sides of the holes in the walls?
Some speculations can be checked with other prisoners, and some cannot - can only be entertained in the conversation we are having now, and not in any conversation that would be possible among the prisoners. (If you send a message saying 'are you a computer?' and get the answer 'no', would you know more than you did before you asked?)
We should not be deceived, though, by the possibilities which we can entertain in our present conversation, nor into believing that we can intelligibly speculate - within it - on certain of its constraints.
I might ask you where you live, and you might (reassuringly) say 'here, in my consciousness ...'
Monday, April 26, 2010
Consciousness
I think I can imagine what it would be like for me to be my cat - what it might be like to be a human person in a cat's body, perhaps with some specific and recognisable cat-like desires and capacities. Would my eyesight be the same but more acute? What about hearing and smell?
Would I only have a cat brain, unable to think very clearly about what I am imagining? (!)
When I imagine being you, I really only imagine what it would be like for me to be you. For me to be standing where you are standing, to have some of your physical characteristics. Perhaps also some of your capacities.
Suppose you have no idea what I'm talking about: would I have to imagine that as well? How would I do that?
The 'mystery' of consciousness arises from thinking that things are not like this; from thinking that I can 'really' imagine being you. This is what makes us think there is something 'commensurate' or 'similar' going on here - that your consciousness is 'like' mine.
But if I imagine being you, in some complete way, I would also be imagining not being me - I would be imagining having no idea that I had ever been me; perhaps that this problem had never occurred to me.
'Consciousness' is a mystery if I think that by attributing it to you I am attributing what I think of as being me to you - if I think you are 'just like me' but with some different interests, capacities, understanding, and sensory inputs.
I can imagine being me, with these differences, but I can't literally imagine being you.
Saturday, April 24, 2010
What is the world like?
It is the kind of place in which questions can be asked, the kind of place about which we can talk. Which gives it some empirical structure.
It is also like something we can say about it - if we say 'the world is real' then we can say 'it is as though the world is real'. Everything that 'is true' can also be a way that we can talk. We can say 'there are objects', and 'we can talk as though there are objects'.
When we say 'but what is the world really like?' we seem to be asking for an answer which cannot be re-rendered in this hypothetical form. The grammar of this leaves only one possibility: 'We can talk as though we can talk'.
We might also be asking for an answer which validates our capacity to ask questions and give answers, but this is confused. If I don't know whether I can ask questions and get answers, I wouldn't know whether a proper questioning and answering had taken place when I try to ask.
Tuesday, March 23, 2010
Truth and Commercial Language
"Commercial language and the pursuit of truth: How to answer an ancient question"
It's an overview. It also needs de-colloquialising. (!)
Recursive roots and wild speculations ...
"There is some theorem of arithmetic" is probably equivalent to "arithmetic is consistent", which more or less falls with the second incompleteness theorem.
Although maybe we have the wrong idea about what arithmetic is ... at least from a natural language point of view?
Tuesday, March 16, 2010
Is Goodman's paradox self-referential?
'Grue' is a colour predicate, and therefore has the 'logical grammar' of a colour word. However, if we attempt to describe a world where objects have grue-type qualities, it is exactly the possibility of colour language that we undermine.
To get to the 'observed vs. unobserved' version of the paradox, we are asked to speculate on the possibility that colours change when we look at them. This possibility could be entirely unobjectionable - we could say, for instance, that our experience of colour requires an interaction between light waves and our visual system, so it's unintelligible to speak of the 'colour' of an unobserved object. On the other hand, it could refer to a colour concept which has no 'use' (in the Wittgensteinian sense) - which depends upon the possibility of assertions which have truth conditions which cannot, in principle, be tested for. We would have no occasion to choose 'grue' over 'green', or vice versa, in our descriptions - these words would be functionally synonymous. In other words, this version of the 'paradox' either has reasonable consequences or no practical consequences. To argue that it has conceptual consequences is to require too much of colour concepts - it is to require, among other things, that they must be 'complete' in Waismann's sense. Almost no empirically based concepts have this quality.
In the 'after time t' version, we generate an indefinitely large number of paradoxical colours - at least one for every possible time t, and many more if we ring a few changes (why restrict ourselves to temporal boundaries? and why only one?). If we needed to eliminate all the possibilities these represent before being able to reliably attribute colour qualities, we would have no colour language - we could never get it going.
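The indefinitely large family can be made vivid as a parameterised predicate - one grue-like predicate per boundary time t (the timestamps and predicate shape here are illustrative, following Goodman's formulation):

```python
# A parameterised family of 'grue'-type predicates: one for every
# choice of boundary time t (and these are only the temporal ones).

def make_grue(t):
    """Return a grue-like predicate with boundary time t:
    'grue' applies to things examined before t that are green,
    and to other things that are blue."""
    def grue(colour, examined_at):
        if examined_at < t:
            return colour == 'green'
        return colour == 'blue'
    return grue

# One paradoxical predicate per time t - indefinitely many:
grue_2000 = make_grue(2000)
grue_2030 = make_grue(2030)

# An emerald examined in 1990 is green - and also grue, under every
# boundary later than 1990:
assert grue_2000('green', 1990) and grue_2030('green', 1990)
# After the boundary, each grue-predicate diverges from 'green':
assert not grue_2000('green', 2010)
```

Eliminating every member of this family before attributing 'green' would be an unplayable game - which is the point of the paragraph above.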
In other words, although 'grue' behaves (grammatically) like a colour, it can only be constructed by requiring too much of our colour concepts or by undermining the possibility of colour concepts altogether. It is self-referential, because it is constructed from an existing grammar - and incoherent, because it does this in a way which questions the possibility of that grammar.
We can know that we need not consider grue-type possibilities from the usability of our colour language, and if that language breaks down then so does the possibility of constructing gruenesses.
Thursday, March 11, 2010
'True for us'
Can't we only say that "We can talk" is true for us, rather than in general?
If a statement must be true in any language game that we can play, then 'true for us' is 'true in general'. This is because 'true for us' suggests 'false for someone else' - someone playing a game we cannot play, presumably. The trouble with this is that if it's a game we cannot play, then it's a game we can't translate - so we don't know whether we're dealing with a language at all. This is obviously Davidson's line, and I think he's right. Also from Davidson, we can see that a translation hypothesis which rendered the belief system of the subjects being translated absurd or false would (principle of charity) be rejected. (If we did not reject translation hypotheses of this kind, we would have no grounds for rejecting any translation hypotheses - if we allow that people talk nonsense, we can allow them to 'say' anything at all.) Since 'we can talk' must be true for us, a translation hypothesis rendering a 'foreign' expression as 'we cannot talk' would have to be rejected.
If it isn't 'true for us' then we can't intelligibly hypothesise that it is true for someone else.
Hinges again ...
This points, of course, at the questions which I think are important - to do with the hinges of any possible language game and the (fundamental) unintelligibility of asking whether we can talk - whether we can ask questions. Whether it is possible to do philosophy is not a philosophical question.
Monday, February 15, 2010
Meaning, metalanguage, and internal/external criteria
We can talk about the twin worlds, and their confused (?) inhabitants, but they cannot - without resolving the ambiguity Putnam's example depends on. In order for them to talk to one another, the water/twater issue would have to be resolved.
We can't hypothesise that we may, in some systematic way, be making water/twater mistakes, or be subject to water/twater ambiguities - we might as well hypothesise that we are confused about our hypotheses.
The conversation we are having now is always at the top of whatever meta-hierarchy is coherently conceivable. We can hypothesise about the confusions of users of lower level languages, but we can't extrapolate upwards from these hypotheses. And: interpretations which render the native confused are always corrigible (Kripke again ...). We can only (and we must) make incorrigible mutual attributions of intention in a shared conversation.
Davidson's idea that we must always be interpreting each other also has to be considered in this light: we do not have an internal conversation within which we make 'interpretive judgements' about what an external interlocutor means. We are able to make correct judgements about this, but these judgements can only be considered when they are expressed - when statements of them form part of the conversation. At this stage, the 'internal' becomes just whatever it was - whatever private precursor or accompaniment we happened to have.
If we imagine this 'internal' separated from the external statement, what is it that we are imagining? Just our own beetles. Not irrelevant, but not an 'internal representation' either.
Whatever is going on 'inside my head' (whatever beetle scratches there ...) it only gets out, it only becomes available to philosophical theory, as the intentional component of my participation in this conversation. And by that stage, all the important bridges have been crossed. It has become my 'beetle', and not [my beetle].
In this context, as well, the question of 'first person authority' becomes an issue of linguistic competence - the 'capacity' to correctly report on (or otherwise reflect) our intentional states when we speak. We attribute this 'authority' when we attribute interlocutor status - "You have no idea what you really think" is, here, more or less equivalent to "You do not know how to speak". This is not, of course, a playable move - it can't be part of a shared conversation.
Wednesday, February 03, 2010
Recursive roots
There's a relationship between inductive mathematical proofs and recursion - recursive functions can have their values determined by a definite finite algorithm, and by demonstrating that a function is recursive we are demonstrating that such an algorithm exists (whether or not it is practicable).
Inductive proofs depend upon the induction axiom in formalisations of arithmetic. By showing that zero has a certain property, and that the successor of any number with this property must also have it, we show that all numbers have it. This is an axiom of arithmetic (and it needs to be formally stated in a way which avoids talk of 'properties'). The 'numerical' root of induction - zero - is also defined axiomatically as the cardinality of the empty set. Many proofs for the whole of arithmetic must depend upon this inductive principle, and its stipulated root.
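The way recursion and induction mirror each other can be sketched in a few lines - a toy rendering of Peano-style addition, with Python's built-in integers standing in for the purely formal numerals:

```python
# Addition defined recursively from zero and successor alone,
# in the Peano style: add(m, 0) = m ; add(m, succ(n)) = succ(add(m, n)).

ZERO = 0

def succ(n):
    return n + 1  # standing in for the formal successor operation

def add(m, n):
    """An inductive proof that add is total mirrors this recursion:
    the base case handles zero, the induction step the successor."""
    if n == ZERO:
        return m
    return succ(add(m, n - 1))

print(add(2, 2))  # prints 4
```

The definite finite algorithm mentioned above is exactly this recursion: each call reduces n towards the stipulated root, zero.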
It is this formalised arithmetic - which establishes induction and zero as axiomatic - which can represent axiomatic systems and theorem generation in general and so its own axiomatisation à la Gödel.
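A toy Gödel numbering shows how such self-representation gets going - a string of arithmetical symbols becomes a single number via prime exponents (the symbol codes here are invented; any injective scheme would do):

```python
# Toy Gödel numbering: encode a formula as the product p_i ** code(s_i),
# where p_i is the i-th prime and s_i the i-th symbol of the formula.

SYMBOLS = {'0': 1, 'S': 2, '=': 3, '+': 4}  # arbitrary symbol codes

def primes(n):
    """First n primes (naive trial division - fine for a sketch)."""
    found = []
    k = 2
    while len(found) < n:
        if all(k % p for p in found):
            found.append(k)
        k += 1
    return found

def godel_number(formula):
    """Encode a formula as a single natural number."""
    g = 1
    for p, sym in zip(primes(len(formula)), formula):
        g *= p ** SYMBOLS[sym]
    return g

print(godel_number('S0'))         # 2**2 * 3**1 = 12
print(godel_number('S0+S0=SS0'))  # '1+1=2' in successor notation
```

Unique factorisation makes the coding reversible, so statements about formulas become statements about numbers - which is what lets the system talk about its own axiomatisation.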
If I have the right picture of a natural language, then the recursive roots of empirical demonstration will comprise the statements which must be true for the language - and so for argument - to be possible. These roots are not arbitrary, though, since they appear directly as statements about the possibility of language and argument (which cannot be rendered 'formally' - i.e. stripped of semantic content).
Could we write down a specification for a 'naturalised' arithmetic which took the possibility of arithmetic and of computation as its recursive roots, rather than a specific set of (semi-arbitrary?) generative rules?
Some might say: but these rules just are arithmetic. No conception of arithmetic is possible without them.
Maybe this is a place where we are deceived by our intuitions. Can we ask questions like 'Is arithmetic fundamentally about counting, about succession?' or 'Is arithmetic fundamentally about computation, about argument?'. And we have reasons for thinking a certain kind of computation (algorithmic, recursive) goes along with the possibility of counting, of enumeration.
Suppose, however, that we started with a different number: not the cardinality of the empty set, but the Gödel representation of the statement that there is some statement x such that x is a theorem of arithmetic. On the face of it, this does not require completeness, but it does require consistency (since otherwise ~x would also be a theorem). This number must exist in any arithmetic (I think?), and must allow the generation of other numbers (or it wouldn't be arithmetic?) in a way similar to the way zero and succession work in Peano arithmetic.
The value of x (and so also its representation) is a function of the axiomatic system and rules selected, and the coding system used to Gödelise them (as Chaitin's Omega number is a function of the actual structure of the machine it describes). This is true of the theorems of arithmetic generally, of course, but these depend upon a definition of zero that is not open to further interpretation, that 'has no semantic content'.
(Maybe we should think of the value, representation, and 'meaning' of zero as being generated by the rules and coding system as well. But zero is also a word in our natural language...)
Anyhow: If there is some number which is the code for a statement that there is some theorem of arithmetic (in the syntax of the axiomatisation that is being encoded), then is that number (or its isomorphs in differently coded and axiomatised, but 'semantically equivalent' systems ... can this make any sense?) a candidate for a general recursive root for a 'natural' arithmetic?
Could its 'existence' be more 'necessary' than the unique cardinality of the empty set?
Monday, January 25, 2010
Another Summary
Tuesday, January 19, 2010
Necessary Paradoxes
We need 'is true' to talk about all these kinds of rules. We cannot ('in principle') write down all the rules - we never reach the end of experiment.
'Semantically closed' languages must be 'empirically' open. It is the paradoxes that give us science.
Tuesday, January 05, 2010
Prediction, Theories of Truth
In outline, it is the argument that a comprehensive 'predicting machine' would have to incorporate a theory of truth for the language of any people whose behaviour it was going to predict, and so its workings would be incomprehensible - unintelligible - to them. It would have to be a 'black box'.
The problem is this:
The machine could predict behaviour - physical movements etc. - without intentional content. In fact, if it was a machine which conformed to the traditional deterministic metaphors, it would be restricted to doing only this. Someone examining the output of the machine might interpret it as having content - 'I know what these sounds mean', or 'I know what is described here - it is a person who is thinking this'. But this interpretation would be made now - in the conversation of the interpreter. It would not be a necessary component of the raw output.
If the output were to have semantic content, the predictor could not be just a mechanical device. A semantic predictor would need to produce interpreted output, in the language of the user. It is this kind of predictor - an 'oracular' predictor - which would need to incorporate a theory of truth for the language of its subjects, but a predictor of this kind could not be rendered in rule based terms - because it could not be a 'machine'. It would have to be capable of normative judgements. A prediction such as 'Mary will fall in love with John, and will tell him' could be fulfilled in a number of physically quite distinct ways. The oracular predictor would need to know what our interpretive decisions would be in order to reliably make such a statement.
A computer which renders its outputs in a natural language can look like an oracular predictor, but examining its internal processes partly undermines this illusion. We would see how it 'appeared' to speak, and what the nature of our 'conversation' with it was. Whether, in some future, people might continue to attribute interlocutor status to objects such as this will be a matter for them.
What may bring them closer to this attribution would be a 'mechanical' understanding of how biological human beings speak - and this is some way away. It is likely that the models will be so complex that few will understand any part of them, and no-one will understand them fully.
In this kind of world the metaphor of universal determinism will seem absurd - it will seem to be based on an astonishingly naïve conception of 'science' as something that could be grasped or managed by individual human beings. The world will seem to be made up of 'black boxes' ... maybe not such an unfamiliar world after all.
Semantic Rules
A 'use' theory of meaningfulness, and maybe of meaning, could be constructed in terms of 'meaning commitments' - i.e. the further moves that must be accepted as 'legitimate' in order for a move under consideration to be meaningful. It may be that an explication of the further moves is also an explication of the meaning, and the fact that this could never be exhaustive wouldn't matter because only some further moves would be relevant to any particular context.
There would be irreducibly semantic aspects of the further moves - meanings can only depend on other meanings.
When we describe a 'move' in this game, the description will always refer to some semantic elements. So, therefore, will the rules of the game - which must rule some moves in and others out. (The rules of logic are not exempt here - see Logic and Interpretation.)
In any case, the apparent clarity of any rule depends upon its being embedded in an 'unproblematic' semantic framework.
Where does this go?