While no behavioural narrative can fully justify an attribution of intention, it is clear that an attribution of intention can be inconsistent with a narrative. We may not have evidence that allows us to discriminate between adding and quadding, but still have evidence that would allow us to discriminate between adding and subtracting.
The possibility of the evidential narrative, itself, depends (of course) on some attribution of intention (to its narrator and to its audience). Maybe we can set this aside for the moment - allowing that some agreed behavioural/evidential narrative is possible, without enquiring too much.
From this narrative, we can rule out certain possibilities (e.g. subtraction). Do we do this on the basis of 'facts', or only on the basis of some buried intentional attribution in the narrative?
If I say 'he wrote down this: "2+2=4"', I don't see how I can discard the meanings (e.g. by giving some description of the lines forming the characters) without also losing the ability to rule out subtraction.
It might be the case that we can't describe behaviour without introducing (and so ruling out) some intentional content.
Although: a computer programme can look exactly like this. It may describe, in a complex way, how pixel elements on a screen can be made to change their character without saying 'this is how to display the letter "F"'. A slight peculiarity here is that the non-intentional explanation of what is happening is frighteningly complex - more complex than a human being could understand, if 'understand' means recognising what the programme is doing (printing "F").
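The point can be made concrete with a toy sketch (everything here is hypothetical, invented for illustration): a bare list of pixel coordinates to switch on. Nothing in the data or the loop mentions any letter; the 'meaning' only appears when a reader renders the grid and recognises what it happens to draw.

```python
# A purely non-intentional description: which pixels to turn on.
# The code nowhere says what character, if any, this amounts to.
ON = ([(x, 0) for x in range(5)]           # a horizontal run of pixels
      + [(0, y) for y in range(1, 7)]      # a vertical run of pixels
      + [(1, 3), (2, 3), (3, 3)])          # a shorter horizontal run

def render(on, width=5, height=7):
    """Flip the listed pixels on a blank grid and return it as text."""
    grid = [[" "] * width for _ in range(height)]
    for x, y in on:
        grid[y][x] = "#"
    return "\n".join("".join(row) for row in grid)

print(render(ON))
```

Run it, and a human reader will say 'it prints "F"' - but that recognition is the reader's contribution, not anything stated in the programme.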
In real programmes, we find a hierarchy of machines. At one very low level, there are machines for handling the pixels and changing their status. At a higher level, there are machines for assembling instructions to these low level machines which 'draw lines', or 'print characters'. Other machines help with re-directing these instructions so that we get 'print the character held in x at the present cursor position' - at which stage we are clearly dealing with instructions which have comforting intentional content. We may decide for ourselves whether the linguistic structure here is just a convenient mnemonic, or whether we are succeeding in producing intentional behaviour on the part of the machine. This is a normative issue.
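The hierarchy just described can be sketched in miniature (all the names here - Screen, draw_char, Terminal - are hypothetical, standing in for the layered machines, not any real API). Intentional-sounding vocabulary only enters at the upper levels:

```python
# Hypothetical 3x4 bitmaps; the character names live only in this table.
GLYPHS = {"F": ["###", "#  ", "## ", "#  "]}

class Screen:
    """Lowest level: knows only how to flip individual pixels."""
    def __init__(self, w, h):
        self.pix = [[" "] * w for _ in range(h)]
    def set(self, x, y):
        self.pix[y][x] = "#"

def draw_char(screen, ch, ox, oy):
    """Middle level: assembles set-pixel instructions that 'print a character'."""
    for dy, row in enumerate(GLYPHS[ch]):
        for dx, bit in enumerate(row):
            if bit == "#":
                screen.set(ox + dx, oy + dy)

class Terminal:
    """Top level: 'print the character held in x at the present cursor position'."""
    def __init__(self, screen):
        self.screen, self.cursor = screen, 0
    def print_char(self, x):
        draw_char(self.screen, x, self.cursor * 4, 0)
        self.cursor += 1
```

At the bottom, Screen.set does nothing but flip a pixel; by the time we reach Terminal.print_char, the vocabulary of characters and cursors is doing real work - whether as mnemonic or as genuine intentional description is exactly the normative question above.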
We can make a machine which 'appears' to talk to us, and we think we've achieved some insight into 'how talk works', but all we have done is make a machine behave in a way which invites an intentional interpretation by, among other things, appearing to talk and, particularly, appearing to talk in a way which rules out certain specific intentional interpretations.