Monday, July 19, 2010

Rational attributions of intention

While no behavioural narrative can fully justify an attribution of intention, it is clear that an attribution of intention can be inconsistent with a narrative. We may not have evidence that allows us to discriminate between adding and quadding, but still have evidence that would allow us to discriminate between adding and subtracting.
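The point can be put as a toy sketch (following Kripke's example, with his illustrative cutoff of 57): any finite record of correct small sums is consistent with both 'plus' and 'quus', but not with 'minus'.

```python
CUTOFF = 57  # Kripke's illustrative threshold

def plus(x, y):
    return x + y

def quus(x, y):
    # Agrees with addition below the cutoff, then diverges.
    return x + y if x < CUTOFF and y < CUTOFF else 5

def minus(x, y):
    return x - y

# The evidential narrative so far: a finite list of observed sums.
evidence = [(2, 2, 4), (3, 5, 8), (10, 7, 17)]

for x, y, answer in evidence:
    assert plus(x, y) == answer    # consistent with adding
    assert quus(x, y) == answer    # equally consistent with quadding
    assert minus(x, y) != answer   # but subtraction is ruled out
```

No extension of such a list settles plus against quus; it only ever rules alternatives out.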

The possibility of the evidential narrative, itself, depends (of course) on some attribution of intention (to its narrator and to its audience). Maybe we can set this aside for the moment - allowing that some agreed behavioural/evidential narrative is possible, without enquiring too much.

From this narrative, we can rule out certain possibilities (e.g. subtraction). Do we do this on the basis of 'facts', or only on the basis of some buried intentional attribution in the narrative?

If I say 'he wrote down this: "2+2=4"', I don't see how I can discard the meanings (e.g. by giving some description of the lines forming the characters) without also losing the ability to rule out subtraction.

It might be the case that we can't describe behaviour without introducing (and so ruling out) some intentional content.

Although: a computer programme can look exactly like this. It may describe, in a complex way, how pixel elements on a screen can be made to change their character without saying 'this is how to display the letter "F"'. A slight peculiarity here is that the non-intentional explanation of what is happening is frighteningly complex - more complex than a human being could understand, if 'understand' means recognising what the programme is doing (printing "F").

In real programmes, we find a hierarchy of machines. At one very low level, there are machines for handling the pixels and changing their status. At a higher level, there are machines for assembling instructions to these low level machines which 'draw lines', or 'print characters'. Other machines help with re-directing these instructions so that we get 'print the character held in x at the present cursor position' - at which stage we are clearly dealing with instructions which have comforting intentional content. We may decide for ourselves whether the linguistic structure here is just a convenient mnemonic, or whether we are succeeding in producing intentional behaviour on the part of the machine. This is a normative issue.
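A toy version of the hierarchy (all names hypothetical): each layer only issues instructions to the layer below, and only the top layer's vocabulary invites the intentional description 'print the character "F"'.

```python
# An 8x8 'screen' of pixel elements.
SCREEN = [[' '] * 8 for _ in range(8)]

def set_pixel(x, y):
    # Lowest level: change one pixel's status.
    SCREEN[y][x] = '#'

def draw_vline(x, y0, y1):
    # Middle level: assemble pixel instructions into 'draw a line'.
    for y in range(y0, y1 + 1):
        set_pixel(x, y)

def draw_hline(x0, x1, y):
    for x in range(x0, x1 + 1):
        set_pixel(x, y)

def print_char_F(x, y):
    # Top level: 'print the character "F"' - comforting intentional
    # content, built out of nothing but pixel-status changes.
    draw_vline(x, y, y + 4)
    draw_hline(x, x + 3, y)
    draw_hline(x, x + 2, y + 2)

print_char_F(0, 0)
print('\n'.join(''.join(row) for row in SCREEN[:5]))
```

Nothing in the lower layers mentions the letter "F"; whether the top layer's names are more than mnemonics is exactly the normative question above.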

We can make a machine which 'appears' to talk to us, and we think we've achieved some insight into 'how talk works', but all we have done is make a machine behave in a way which invites an intentional interpretation by, among other things, appearing to talk and, particularly, appearing to talk in a way which rules out certain specific intentional interpretations.



Sunday, July 18, 2010

Machine people

I can't calculate a paradoxical Gödel number. This is a 'fact' about me - about the kind of machine that I am? We know, I think, that this is a fact about any possible machine. But, of course, Gödel proved that these numbers 'exist', despite this. They do not, of course, exist in any machine world - any world of 'facts' - however organised. And so, in no 'possible' world of facts? And what would this mean?

I suppose, at least, that we shouldn't worry about them so far as 'practical' (machine manageable) arithmetic is concerned.
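For the 'practical' side: Gödel numbering itself is perfectly machine-manageable. A sketch of one standard scheme (prime-power encoding; there are others), where a sequence of symbol codes becomes a single number:

```python
def primes(n):
    """First n primes, by trial division (fine for short sequences)."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def godel_number(codes):
    # Encode codes c1, c2, c3, ... as 2**c1 * 3**c2 * 5**c3 * ...
    n = 1
    for p, c in zip(primes(len(codes)), codes):
        n *= p ** c
    return n

def decode(n, length):
    # Recover the codes by counting each prime's exponent.
    codes = []
    for p in primes(length):
        c = 0
        while n % p == 0:
            n //= p
            c += 1
        codes.append(c)
    return codes

assert decode(godel_number([1, 2, 3]), 3) == [1, 2, 3]
```

Encoding and decoding are routine arithmetic; it is only the self-referential, 'paradoxical' numbers that escape every machine.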

This machine world doesn't render up rules. It can't define them.

But a 'describable' or 'completely narrated' machine world's rules would be contained in its narration - we could only describe it in this abstract way. Even a single determinate fact has a hidden rule - that a description of it is always true. If we can't narrate a world without some rules being true, then these rules might as well be in the world we are narrating.

But 'might as well be' ... this is metaphysics. It is 'as if' the world contains these rules. We can talk 'as if' the world contains these rules. Even: we could not talk except to talk as if the world contained these rules, except here the 'cannot' is a product of an argument within our talk, and is perhaps circular.

Wednesday, July 07, 2010

Talking machines (revisited)

Being able to talk is not the same as, say, being able to produce some string of characters (like the one which comprises the coding of this post). We can see two instances of the same string of characters and say of one 'this was produced by an interlocutor, and is part of a conversation' and of another 'this was produced by a machine, and is a recording or representation (etc.)'. In principle, it wouldn't matter how long this string was.

It couldn't be the coding for all human conversation to date, however, because it wouldn't mean anything if it did.

If we say 'the whole of human conversation to date is a string produced by a machine', we would have to wonder what language we were saying this in. If we are including it within its own scope - as part of human conversation to date - then either (a) it is false or (b) it doesn't mean anything because it's just a string of code. If we are not including it within its own scope, we might wonder how it, uniquely, could mean something if everything that looked as though it has meant something up to that point was just a string of code. A language game cannot comprise a single sentence standing on its own.

Should we think of ourselves as being the ('deluded') components of some greater machine which we cannot 'comprehend'? This could only ever be a metaphor - we could not describe the machine we were 'part of' well enough to call it a machine. Those elements of its workings we did not understand we could not distinguish from random, or at least radically unpredictable. And there would always be elements we did not understand because we are part of the machine, and we can't model ourselves. The machine cannot produce a string which encodes a complete description of itself (including its capacity to produce this string).

Who would this description be for? What is the language within which this 'representation' would work? (i.e. would be a legitimate move...)

A machine can't have a private language either.

The halting problem is also in here somewhere ...

Suppose that we imagine the machine to be following some rule - we see that the strings it produces, when interpreted in a certain way, do not breach the no contradictions rule. This wouldn't allow us to hand our adjudications on the rule over to the machine - it might not always behave correctly; it might break down. The machine can follow the rule, but it can't define it - and neither can any other machine.

(Maybe this is 'why' the halting problem arises ...)
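The connection can be sketched with the standard diagonal argument: suppose (hypothetically) a total function halts(f, x) that correctly reports whether f(x) halts. Then a machine built from it defeats it.

```python
def make_contrary(halts):
    """Given a candidate halting oracle, build its refuter."""
    def contrary(f):
        if halts(f, f):
            while True:       # loop forever if the oracle says f(f) halts
                pass
        return None           # halt at once if the oracle says it doesn't
    return contrary

# contrary(contrary) halts iff the oracle says it doesn't - so no total,
# correct halts can exist. One side of the contradiction can be run: a
# candidate that always answers 'does not halt' is refuted directly.
contrary = make_contrary(lambda f, x: False)
assert contrary(contrary) is None   # it halted, contradicting the oracle
```

No machine adjudicates the rule for all machines; each can at best follow it.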

What do we do when we say 'this is right' or 'this is wrong'? We state a rule that we are going to follow - we specify a part of the theory of truth we are using. Our justifications of these statements are always 'incomplete', in the sense that they cannot look outside our language, in which they are expressed. Our ability to agree empirical hinges is only different from our ability to agree logical or mathematical hinges in so far as it is more 'mysterious' to us (epistemologically). They are all rules which, if broken, render the conversation impossible - including, in extremis, any conversation in which we might be able to say why this conversation had become impossible.

Saturday, July 03, 2010

Perfect Machines

A machine is an arrangement of facts into function, and whether or not 'facts' are intentional, function irreducibly is. The machine cannot define the function: 'or' is not 'what an or gate does', except in the sense that it's what it is designed to do. How do we know when a machine goes wrong?
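A toy gate (hypothetical) makes the point: its behaviour only counts as 'or' against a norm the gate does not itself contain. A faulty gate still does *something*; only the design makes that something a malfunction.

```python
# The norm: the truth table the gate is *designed* to realise.
OR_TABLE = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}

def or_gate(a, b, stuck_at=None):
    if stuck_at is not None:
        return stuck_at      # a fault: output stuck at a fixed value
    return a | b

# The healthy gate agrees with the norm ...
assert all(or_gate(a, b) == v for (a, b), v in OR_TABLE.items())

# ... while the stuck gate's behaviour, taken on its own, simply
# defines a different (constant) function; calling it 'wrong' appeals
# to the design, not to anything the gate does.
assert or_gate(0, 0, stuck_at=1) == 1
```

The gate follows the table; nothing in the gate says the table is what it *ought* to follow.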

This is just the 'naturalistic fallacy', slightly disguised ...

Another open question argument? But not quite as boring as that.

If I am a machine then 'or' cannot be defined in terms of anything I do either. What sensible conclusions can we draw from this?

There is the 'private language' issue - we know that 'or' can't be defined in terms of something internal - something I 'think'. But isn't a collection of machines also a machine?

This looks unavoidable, but what is the function of this 'greater' machine? We don't know. What would a 'function' of this kind look like? Function for what? To whose purpose?

We might say: a functionless collection of facts may still have some order - it may do something, but not something useful. What order would it have? Only the order of some facts. An ordered group of facts is just another fact. We might make a mistake about the order, but this would not render the group of facts 'malfunctioning'. How would we discover this order? Just as we discover the ordering of facts which allows us to build our machines. The Universe doesn't get things 'right' or 'wrong'. The great collection of machines can't get things 'right' or 'wrong' either. It is just the way it is.

But we get things right and wrong. If we didn't, we couldn't talk to one another. Another dull chant?

Thursday, July 01, 2010

Meta-epistemology

A scientific theory need only be presently intelligible - 'unfalsified', in traditional terminology. This can mean that, given other hypotheses, its contrary is unintelligible.

The contrary of an epistemological theory needs to be unintelligible tout court.

If it isn't, then the theory is hostage to any hypothesis on which its contrary's unintelligibility depends. More open questions ...