The argument against the possibility of a 'private' language (a language one knows, oneself, how to speak, but which is never used in conversation with any other person) is, broadly, that our judgements that we were correctly following the rules upon which the intelligibility of the language depended would be no better than our judgements that we were speaking the language properly in the first place. No internal test of correct usage could avoid relying on judgements which were, themselves, open to questions about correct usage.
Public languages have access to a further resource - the judgements of other language users. A usage which could not be rendered intelligible to these could not be part of the shared language. As Kripke observes, however, this only helps with judgements within the language, and not with judgements about the intelligibility of the language as a whole, where the problem of external validation arises once again - in a fairly catastrophic way. So the private language (PL) problem is not just a problem for strictly private languages.
The solution to Kripke's problem is, as I've suggested, to realize that all of our judgements of validity and intelligibility are also moves within a language game - to ask whether the whole game (including 'all' language games) is valid or intelligible is to ask whether judgements of validity and intelligibility are possible. The answer to this question is either 'yes' or unintelligible.
An interesting question is whether this approach can be taken to private languages as more traditionally explicated (as in the parentheses in paragraph 1). The motivation for asking this question is that something like a private language seems to be required to rescue the idea of intelligible private thought, in so far as we are able, in thought, to reflect on the reliability of our own perceptions and judgements.
One way we do this, of course, is to apply our public conceptions of validity and intelligibility to our private ruminations - either by imagining how they might be articulated in a public language, or by actually articulating them. Our imaginings here are at least as reliable as our imaginings that we can speak ...
The problem with this approach is that it requires the private language to be translated into the public one for testing - so rendering it non-private. The private language can only be private in the required sense if it cannot be so translated, a fact which strengthens the PL argument rather than otherwise, given Donald Davidson's observations on how we determine whether something is or is not a language.
Untranslatable private thought is just part of that 'of which we must remain silent'. Why would we need to worry (somewhat unintelligibly ... ) about its 'intelligibility'?
Only because we project inwards from public intelligibility? We will find what we are looking for if we do this, as Chomsky and Fodor demonstrate. Chomsky finds that anything that looks like a language must look like the language he (more or less) shares with us; Fodor finds that for any internal process to come out as intelligible it must be translatable into the language of his enquiry, which we (more or less) follow.
But if even the mechanisms which underlie public intelligibility need not be rendered as generally reliable (see my last post on the Turing test), why should we expect our phenomenological experience of them to be intelligible?
Of course what we say must make sense, but the fact that making sense is such a struggle suggests that it cannot be just a matter of translation ...
Saturday, January 03, 2015
The Turing Test
The Turing test captures the ambiguities of 'intelligent', which is one of the reasons it is so compelling.
One of these is that it leaves moot the relevance of the internal structure of the talking machine. This is just as well, given the Kripke/Goodman paradox and open question problems. If the structure of the machine were relevant - if the machine needed not only to pass a specific finite test, but also to pass all imaginable future tests of the same kind - then we'd be in trouble.
The open question problem here is obvious: the programming of the machine would have to embody a general language competence engine, and therefore a theory of truth for the language the machine spoke.
The Kripke/Goodman problem is worth a further comment, though. Clearly a machine that incorporated a flawed language generation program could pass the Turing test so long as the specific test circumstances did not reveal the flaws. Would we say here that the test was imperfect?
If we did, then we would (as the paradox suggests) have to say that we could not conclude that any potential interlocutor was really an interlocutor (capable of intelligent conversation), and so that we could not conclude that any conversation was possible. So far so Kripke, and I've already explained the problem here: if we can't say anything we can't articulate the paradox either, so there must be some other way of thinking about this.
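To make the worry concrete before moving on, here is a minimal sketch of my own (hypothetical, borrowing Kripke's 'quus' example rather than describing any real conversation program): two functions that agree on every case a finite test happens to probe, yet diverge beyond it.

```python
# A hypothetical illustration of the Kripke/Goodman worry: 'plus' and
# 'quus' agree on every input a finite test happens to try, but come
# apart on inputs the test never reaches.

def plus(x, y):
    return x + y

def quus(x, y):
    # Kripke's deviant rule: behaves like addition when both
    # arguments are below 57, and answers 5 otherwise.
    return x + y if x < 57 and y < 57 else 5

# Any finite examination confined to small inputs passes both:
for x in range(10):
    for y in range(10):
        assert plus(x, y) == quus(x, y)

print(plus(68, 57))  # 125
print(quus(68, 57))  # 5 - the flaw the test circumstances never revealed
```

No finite run of test conversations rules out the quus-like program; that is the sense in which a flawed machine could pass so long as the specific circumstances did not reveal the flaws.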
Since we know that we can talk to one another, we don't need to worry about how, or why our words make sense. In fact, we can only consider these questions if we already know that their answers can't 'legitimize' our use of language.
What this means is that we don't need to worry about whether we are all imperfect, accidental, Turing 'intelligences'. What counts as intelligence is not intrinsic to the machine, but captured, instead, in the language game in which any question about 'intelligence' can arise - and this neither can be, nor needs to be, modeled in some mechanistic way. It is the game within which formal modeling takes place, but which cannot, itself, be modeled (in its entirety).
So this aspect of the test's ambiguity cashes out nicely: what counts as an intelligent machine is a judgement we make within a game which itself provides the unarticulable context for our judgements of intelligence, and of rationality.
More on emergence and heuristics ...
Some more thoughtful thoughts on emergence:
Of course a general theory which drags in 'emergence' must be false, but we shouldn't leap to conclusions here. The important question is why we need that particular general theory, what it does for us. This can be a hard question to address, because addressing it requires us to decode some of our fundamental heuristics - many of which will have acquired an almost metaphysical status, so that relinquishing or revising them will have disturbing emotional consequences.
If we believe, for instance, that a rational science is only possible if the world can be reduced (at least in principle) to mechanism, then we will hang on to emergence to retain our rationality. (The thought that this, itself, may be an irrational strategy, will be almost too terrifying to entertain.)
Also, spooky alternatives to emergence simply don't explain anything. The ghost in the machine is just another place-holder for ignorance, or evidence of error - perhaps about the kind of machine, or about what can be done with mechanism, but error nonetheless.
The way forward is not via some more comforting metaphysics, however. (All metaphysical accounts will also be place-holders, if something like what I'm saying turns out to be correct).
The way forward is to start with the irreducibles of account giving, which include the possibility of accounts and of the givers of accounts.