G.E. Moore is associated with three paradoxes:
(1) the "paradox of analysis" - that the explication of a conceptual equivalence can be either correct or informative, but not both;
(2) "Moore's paradox" - that some well-formed sentence functions can produce nonsensical sentences when certain variables are given the value of present indexicals;
(3) the "naturalistic fallacy" which is a particular case of an open question paradox.
(1) and (3) directly bear on the limitations of the analytic programme (as traditionally conceived). (2), I think, rescues a productive conception of analysis from this catastrophe.
Here's a summary:
'Analysis' was (informally?) conceived as being about rendering a concept in terms of others independent of it (to avoid circularity), in a way which was informative - which elucidated the concept under examination. Paradox (1) arises because it seemed that additional information only arose if the analysis introduced things which were not part of the analysandum. Moore compared 'A brother is a male sibling' with 'A brother is a brother'. If the concept 'male sibling' contains nothing not already included in the concept 'brother', the statements should be equivalent - but the first is clearly an analysis, while the second is not.
(The paradox depends upon the concepts being taken to be independent of the words used to articulate them - a presumption that might now be challenged.)
"Moore's paradox" is associated with the peculiarity of saying "It is raining, but I do not believe that it is raining". It is sometimes taken to be about the logic of 'believe', but is more correctly understood as the problem that a properly formulated sentence function should produce sentences that are either true or false as its variables are given values, but should not produce nonsense.
Here are two well formed sentence functions, which will evaluate as true or false, depending on the values given to x and y:
(a) "x believes that y"
(b) "it is not true that y"
Their conjunction "(a) and (b)" is also, therefore, a well formed function, and some substitutions seem unproblematic:
"Mary believes that it is raining, and it is not true that it is raining."
The paradox is that "I believe that it is raining, and it is not true that it is raining" seems to be nonsensical, or at least unintelligible, rather than false. It is not a move that could ever be made in a playable language game.
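The formal point can be made concrete. Here is a toy sketch in Python (the templates and helper are my own illustration, not anything in Moore): substitution into the conjoined sentence function is purely mechanical, so it generates the Moorean sentence exactly as readily as the unproblematic third-person one.

```python
def conjoin(x: str, y: str) -> str:
    """Substitute values into the conjunction of (a) 'x believes that y'
    and (b) 'it is not true that y'. Substitution is purely formal:
    nothing in the machinery blocks the first-person present case."""
    verb = "believe" if x == "I" else "believes"
    return f"{x} {verb} that {y}, and it is not true that {y}"

# A well-formed, evaluable substitution:
print(conjoin("Mary", "it is raining"))
# A formally identical substitution that yields the Moorean sentence:
print(conjoin("I", "it is raining"))
```

The function cannot tell the two cases apart; the defect of the second output is visible only to a speaker, which is the point.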
Open question paradoxes, which I've talked about at length, arise when a normative category - e.g. "Truth", "Goodness" - adjudicates over any account that can be given of it. Is it true that telling the truth requires us to X? Is it good to define goodness as Y?
OQ paradoxes are pervasive. They might be said to delimit the analytic tradition, as they arise when we try to give reductive accounts of the foundations of knowledge, truth, meaning, goodness, and arithmetic. Gödel's incompleteness theorems and Russell's paradox are both examples - one to do with valid computations of validity, and the other with distinguishing the membership of 'set'. Even the Kripke/Goodman paradox arises when we try to render the rules for distinguishing rule-following.
It seems natural, now, to replace our naive conceptions of analysis with a more language oriented approach. I have argued that certain questions about the possibility of sense-making have a Moorean peculiarity: We may say that John and Mary completely fail to understand one another, but I cannot say to you that you and I do. The switch from third to first person generates nonsense, not falsehood. And this saves us from the catastrophic effects of the OQ problems - which must arise in, are indeed the distinguishing mark of, the most general philosophical enquiries. The apparent systematic ambiguities of truth, validity, knowledge are largely neutralised when we consider the way that type (2) paradoxes constrain our normative claims. That truth is 'unanalysable' does not make the possibility that we cannot tell the truth intelligible.
So. Good for Moore?
Thursday, December 24, 2015
Sunday, October 18, 2015
Foundations of arithmetic
I've messed with this idea before (I found some examples by searching my blog for 'arithmetic', and was a little bit surprised at how many times ...). I think I have a clearer picture of it, now, though:
The incompleteness and inconsistency proofs show that any attempt to formalise arithmetic will generate open question paradoxes.
As I've tried to suggest (clumsily) in earlier posts, a recursive approach to the OQ trap is also possible for arithmetic. This is what needs a clearer statement.
It cannot be a consequence of Gödel's paradox that arithmetic is not possible (that its inconsistency is catastrophic per Duns Scotus) because the construction of the paradox itself depends upon the reliability of arithmetical computations (e.g. factorisation) and the truth of arithmetical theorems (e.g. about the unique outcomes of factorisation). So Gödel's argument is as much a statement of the possibility of arithmetic as it is a demonstration of its inconsistency.
In other words, because all formal proofs can be represented within arithmetic (using an equivalent of Gödel's coding method), the statement "There is some true theorem in arithmetic" (the equivalent of "It is possible to do arithmetic") is an isomorph of "It is possible to formally prove something". Since this is a consequence of "It is possible to say something", we have the possibility of valid arithmetic recursively from the possibility of being able to talk.
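Gödel's coding can be sketched concretely. What follows is a toy version (the alphabet and helper names are mine, not Gödel's actual scheme): a formula becomes a single number by raising successive primes to powers that index its symbols, and unique factorisation - the fundamental theorem of arithmetic - is exactly what guarantees the formula can be recovered.

```python
SYMBOLS = ['0', 'S', '=', '(', ')', '+']  # a tiny illustrative alphabet

def primes(n):
    """First n primes, found by trial division."""
    found = []
    k = 2
    while len(found) < n:
        if all(k % p for p in found):
            found.append(k)
        k += 1
    return found

def encode(formula):
    """Code a formula as 2^c1 * 3^c2 * 5^c3 * ..., where ci indexes
    the i-th symbol (offset by 1 so exponents are never zero)."""
    n = 1
    for p, sym in zip(primes(len(formula)), formula):
        n *= p ** (SYMBOLS.index(sym) + 1)
    return n

def decode(n):
    """Recover the formula by factorising n. This works only because
    prime factorisations are unique - the fundamental theorem."""
    out = []
    k = 2
    while n > 1:
        e = 0
        while n % k == 0:
            n //= k
            e += 1
        if e:
            out.append(SYMBOLS[e - 1])
        k += 1
    return ''.join(out)
```

So `decode(encode('S0=S0'))` round-trips. If factorisation were ambiguous, one coded proof could decode as two different proofs, and the representation of proofs within arithmetic would collapse.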
We can only talk if we can agree about the consequences of what we say. We can only explicitly show some behaviour to be linguistic if we can give an account of this agreement - in the form of 'A implies B' types of statement. In order to explicitly pursue certain very general (unbounded) enquiries, we need to have a language which can make these kinds of statements about its own components.
The consequences of 'A implies B' statements must themselves include much of first order logic if they are not to be inconsistent (and they would not be meaningful otherwise because their meaning consequences would be catastrophically inclusive).
In this kind of language, then, we have the possibility of logic; and so, the possibility of arithmetic. And we also, necessarily, have open question paradoxes, since we cannot constructively prove that the language itself makes sense - although we cannot deny this within the language.
Since Gödel's own coding method depends upon the 'fundamental theorem of arithmetic' - that all numbers have unique prime factors - his particular generation counts as a kind of meta-proof of that theorem:
First order logic is consistent because it is possible for us to talk to one another;
Arithmetic is possible because first order logic is consistent;
Arithmetic would only be possible if the fundamental theorem were true.
------------
It may also have another consequence which I've mentioned before, I think:
If factoring large numbers was 'tractable', in a way which brought Gödelised logical proofs within the scope of practical arithmetic, then practical arithmetic would include the computational inconsistencies that he shows must be a consequence of arithmetic being able to demonstrate its own completeness.
There may be a relationship - it seems likely to me that there is a relationship? - between a language being 'open', in the sense of not placing explicit constraints on meta-linguistic statements, and its being usable for the pursuit of certain (philosophical? fundamental?) enquiries. There must be no stipulated boundary on 'why?' questions. We tacitly explore kinds of boundaries as we explore the limits of intelligibility ("why do 'why' questions make sense?" (!)), but these boundaries cannot be explicitly delimited within the language.
Arithmetical constructions permit, in principle, a kind of indefinite complexity. We can point to numbers - e.g. Borel's number - which we say must 'exist', in the sense that their composition does not breach any arithmetical rules (however practically inconvenient they might be).
Our practical experience of the tacit limits of meta-linguistic enquiry has an ambiguous quality - it is hard to discriminate between complexity and inconsistency. We don't know whether we are looking at a flat contradiction, or whether we have failed to see a (possibly convoluted) set of meaning commitments which would render a puzzling statement intelligible. We cannot (Kripke) rule out the possibility that a speaker is following a set of rules that we do not yet understand.
The potential complexity of arithmetic (its 'open-ness') may depend on the intractability of factorisation. If we could factor indefinitely large numbers in some straightforward way, we could extract all their 'rules' and demonstrate that we had done so. Questions of the form 'this number has this property' would always be tractable.
Gödel's method may have the corollary that this will never be possible.
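The multiply-easy / factor-hard asymmetry this turns on can at least be illustrated (a sketch using naive trial division; the intractability of factoring is a working assumption of modern cryptography, not a proved theorem):

```python
def trial_factor(n):
    """Naive factorisation. For a semiprime p*q this needs about
    min(p, q) divisions - exponential in the number of digits of n,
    whereas computing p*q in the first place is a single step."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# Multiplying two primes is trivial; recovering them is a search:
p, q = 104729, 1299709
assert trial_factor(p * q) == [p, q]
```

If that search were cheap at arbitrary sizes, the 'rules' of indefinitely large numbers would be extractable in just the way the post describes.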
Monday, September 07, 2015
The "mathematical world"
The world appears to be mathematical because mathematics is part of the grammar - maybe a necessary part of the grammar - of the language we use to describe it. I'm saying "maybe" because I haven't said what mathematics is. This turns out to be important, I think.
I want to look at a hypothesis from Max Tegmark:
H: "The world is (just) mathematics"
This is a paraphrase, but I don't think it's wrong in any important way. Other ways of representing his position will share its contradictions.
Since H is a statement about the world, it must (according to itself) be a kind of mathematical conjecture - perhaps even a hypothesis that can be proved or disproved. However, this can't be the case, because it uses the word "mathematics". We don't have a definition of this word that allows us to use it in this way (like a number or a symbol).
Tuesday, August 11, 2015
More meaning
The standard forms, the rules, seem to be essential to meaning, but this is misleading. It is only in evaluation that real meaning arises, not in 'analysis' (or at least not in any formal analysis).
If I repeat what you have said to me, however exactly, I am not doing what you were doing when you said it. I will have repeated form, not meaning. (Not use, intention, context, occasion, etc. ... an incompletable list).
I write this now only once, although I may reuse the words - even in this order - on other occasions. You may believe that you have understood them as a consequence of certain conventions, but when you try to articulate these you will find them melting away under your analysis: the concept of a convention cannot be elaborated by appealing to conventions, especially when it is our ability to recognise these that is being investigated.
We should not try to say new things, but only be aware that every saying is new, and look to its novelty for its meaning.
You might accuse me, saying that I take it as some original, some article of faith, that we can speak. But your accusation is only an accusation on exactly those grounds. I do not 'believe' that we can speak, I just speak. It is in my speaking with you that the meaning of 'believe' arises.
Sunday, May 31, 2015
Underlying directions ... ?
We can distinguish two cases:
(1) We discover that we are following a tacit rule which we can articulate without substantially disrupting our present linguistic practices.
(2) We discover a similar rule, but its articulation would cause substantial semantic disruption.
The second is a bit puzzling - how can we 'discover' a rule without articulating it? I think the answer is that we make an experiment, and it has a revolutionary effect on the language community.
We can resist rules of the second type by refusing to articulate them. But we cannot articulate the rule 'do not articulate rule X' without articulating rule X. We can see approaches to dealing with this in authoritarian institutions. Or so I think. And maybe I'm getting ahead of myself.
A signalling system cannot be semantically 'open' - it would just be ambiguous, or incomplete. A language must be semantically 'open' if it is to develop (even in response to what we think of as 'empirical' inputs).
Signalling systems don't need to develop in this way. They are semantically closed. Perhaps we say that they have a formal semantics - which is as much as to say they have an eliminable semantics. So maybe no semantics at all. Apart from their dependence on the semantics of the language in which they are formally described.
Why would the discovery of a tacit rule cause semantic disruption? A simple case is dogmatic truth: insistence on a literal fundamental truth is vulnerable to open question observations. (To ask why the dogma is true is to contemplate the conditions under which it might not be.)
A language which cannot contemplate some of its own tacit presuppositions also cannot articulate which of those presuppositions cannot be contemplated. It must depend on habit or fear rather than argument.
The 'safe' thing to do will be to avoid too much ...
Saturday, April 18, 2015
Unintelligible Fundamentals (again)
The possibility that the 'substrate' of intelligibility is, itself, intelligible, is blocked by open question considerations. (A language cannot be used to articulate a theory of its own intelligibility without begging the question.)
We have two possibilities: (1) that it cannot be explicated in any language (which is as much to say that it is unintelligible) and (2) that it can be explicated in a language which cannot be wholly translated into ours because of the open question problem.
The problem with (2) is that translatability is the only criterion of something's being a language (Davidson). Whatever the 'super' language can do, that part of it that articulates the theory underpinning the intelligibility of our language cannot be translated, and so cannot be identified as a language at all.
So nothing intelligible can complete the sentence "Our language is intelligible because ..." that doesn't also beg the question. Again this is OK because 'our language is not intelligible' is unintelligible - we don't need a constructive account to guarantee intelligibility.
There are some further things to reflect on here:
'Our language is intelligible' has consequences for the way the world is organised, as this organisation cannot conflict with the intelligibility of this language we use for describing the world (among other things). This is not quite trivial: the world need not be organised in the way that we think it is (i.e. much of what we literally say about it need not be exactly true) in order for our language to 'work'. (On the other hand, we run into problems with what we can mean by 'true' if we try to push this too far - Davidson, again.) What we can be sure of, in a sense, is that any new discovery we make about the world (however disorienting to our present conceptions) will, if it is articulable, not turn out to be unintelligible. This keeps the 'world of things' within certain 'grammatical' boundaries.
Again, one 'fact about the world' that must be literally true is that the world permits us to talk about it in the way that we do.
Another thing is that 'our language' is not a well defined set of structures. It is constantly being experimented with, modified, and extended. A particular kind of extension is to make a new 'meta-linguistic' move - to take a step up the hierarchy. 'The way we have spoken up to now is intelligible' remains intelligible, even if in some suitably qualified way. We can say 'people used to believe that the earth was flat'. (If that was ever really true ...). If people used to speak as though the earth is flat, we can only render that as speech if we can show how such a thing might be intelligible.
In the future, we will always be able to attribute some suitably qualified intelligibility to what we say now. This fact might lead us to speculate that there might be some future super-language in which we can explicate a theory of intelligibility for the language we currently speak. This doesn't help with the general problem, though, as we could not translate what the speakers of that language meant by 'intelligible' when they applied that description to our language. In short, we couldn't understand what the speakers of the super-language meant by 'suitably qualified', because this would require us to articulate, exactly, a theory of intelligibility for our language.
We can read this two ways: (1) That the idea of a future 'super-language' is unintelligible tout court. Incoherent science fiction. or (2) That, in the future, people may lose the capacity to speak in the sense that we presently understand it. (They might partially retain it, for the purpose of decoding archives ...)
On reflection, these are probably equivalent. With respect to (2), the speed and volume of human interaction is increasing fast, and may end in the subsumption of the 'individual' into a larger 'super-organism' whose components' mutual signalling behaviours, while structurally complex, might have no determinable semantic content. Much as present human language might appear to an insightful chimpanzee...
Sunday, April 12, 2015
Unintelligible Fundamentals
A cat does not do a physical computation before jumping onto a high wall.
We do not do 'grammatical computations' before speaking.
A robot which could speak might be astonished to be shown its circuitry and programming. How could it map its thoughts to this organised equipment? (It could not have been programmed to do this.)
A cat could make nothing of the physics associated with wall jumping. Why do we expect to be able to understand the 'grammabiology' of speaking? (We are puzzled that we cannot say what it is without talking nonsense.)
What is needed here is not 'more science' but a better understanding of what science is.
We think that reconciling our machinery, our phenomenological condition, and what we can intelligibly say is a kind of amalgam of anatomical, psychological and computational tasks. We don't even want to think about how different these conceptions of ourselves are, never mind explore the consequences of them all being, themselves, explorations of what can intelligibly be said.
We might want to say there is some kind of mystery here - a ghost, 'something of which we cannot speak' - but even this is a misunderstanding. The only purpose of trying to point at these things is to reveal the limits of pointing. Mechanisms of the spirit are still mechanisms; ghosts are just hazy sorts of people; things of which we cannot speak turn out to be things we can say something about - that we cannot speak of them. Incomprehensibly.
There is not some secret, especially deep, kind of explanation that shows our explanations to be reliable.
A paradox is not a puzzle, nor is it a special kind of road sign. It isn't even a place where the road comes to an end. It is like the twist in a Möbius strip - we feel our way along it and find ourselves in a place we cannot understand, because to 'understand' is to draw a map, and no map can be drawn of this surface.
Saturday, February 28, 2015
Kaleidoscope
I walk into a room and find a kaleidoscope, which I pick up and look through.
At first, I see geometrical patterns of coloured shards. If I manipulate it, turn the tube, I see different patterns; I find that, to some extent, I can predict how these will repeat.
Then I see that the coloured patterns are made up of familiar objects that exist in the room I entered - fragments of furniture, a window frame. Also tiny fractured and repeated images of myself.
I change my orientation, and see different parts of the room, shattered and rearranged, but increasingly recognisable. After a while, I learn to find my way around the room looking through the kaleidoscope. I have learned its subtle rules of operation so well that I can actually see more through the machine than I could before I picked it up. My world enlarges. I form a more complete picture.
Then I use the kaleidoscope to examine its own workings. I turn it and tip it to discover more. New patterns emerge, of utterly engaging complexity; important meanings are suggested and abandoned. I search for the fundamental mechanism which makes all this possible.
I peer into the future.
Finally, I turn towards the past.
I see a child walking into a room, and picking up a kaleidoscope.
I discover that what I have learned is not how to make pictures, but how to look.
Wednesday, January 07, 2015
Private Languages, again
The argument against the possibility of a 'private' language (a language one knows, oneself, how to speak; but which is never used in a conversation with any other person) is, broadly, that our judgements that we were correctly following the rules upon which the intelligibility of the language depended would be no better than our judgements that we were speaking the language properly in the first place. No internal test of correct usage could avoid relying on judgements which were, themselves, open to questions about correct usage.
Public languages have access to a further resource - the judgements of other language users. A usage which could not be rendered intelligible to these could not be part of the shared language. As Kripke observes, however, this only helps with judgements within the language, and not with judgements about the intelligibility of the language as a whole, where the problem of external validation arises once again - in a fairly catastrophic way. So the PL problem is not just a problem for strictly private languages.
The solution to Kripke's problem is, as I've suggested, to realize that all of our judgements of validity and intelligibility are also moves within a language game - to ask whether the whole game (to include 'all' language games) is valid or intelligible is to ask whether judgements of validity and intelligibility are possible. The answer to this question is either 'yes' or unintelligible.
An interesting question is whether this approach can be taken to private languages as more traditionally explicated (as in the parentheses in paragraph 1). The motivation for asking this question is that something like a private language seems to be required to rescue the idea of intelligible private thought, in so far as we are able, in thought, to reflect on the reliability of our own perceptions and judgements.
One way we do this, of course, is to apply our public conceptions of validity and intelligibility to our private ruminations - either by imagining how they might be articulated in a public language, or by actually articulating them. Our imaginings here are at least as reliable as our imaginings that we can speak ...
The problem with this approach is that it requires the private language to be translated into the public one for testing - so rendering it non-private. The private language can only be private in the required sense if it cannot be so translated, a fact which strengthens the PL argument rather than otherwise, given Donald Davidson's observations on how we determine whether something is or is not a language.
Untranslatable private thought is just part of that 'of which we must remain silent'. Why would we need to worry (somewhat unintelligibly ... ) about its 'intelligibility'?
Only because we project inwards from public intelligibility? We will find what we are looking for if we do this, as Chomsky and Fodor demonstrate. Chomsky finds that anything that looks like a language must look like the language he (more or less) shares with us; Fodor finds that for any internal process to come out as intelligible it must be translatable into the language of his enquiry, which we (more or less) follow.
But if even the mechanisms which underlie public intelligibility need not be rendered as generally reliable (see my last post on the Turing test), why should we expect our phenomenological experience of them to be intelligible?
Of course what we say must make sense, but that making sense is such a struggle suggests that it cannot be just a matter of translation ...
Saturday, January 03, 2015
The Turing Test
The Turing test captures the ambiguities of 'intelligent', which is one of the reasons it is so compelling.
One of these is that it leaves moot the relevance of the internal structure of the talking machine. This is just as well, given the Kripke/Goodman paradox and open question problems. If the structure of the machine were relevant - if the machine needed not only to pass a specific finite test, but were also required to pass all imaginable future tests of the same kind - then we'd be in trouble.
The OQ problem here is obvious - the programming of the machine would have to embody a general language competence engine, and therefore a theory of truth for the language the machine spoke.
The Kripke/Goodman problem is worth a further comment, though. Clearly a machine that incorporated a flawed language generation program could pass the Turing test so long as the specific test circumstances did not reveal the flaws. Would we say here that the test was imperfect?
If we did, then we would (as the paradox suggests) have to say that we could not conclude that any potential interlocutor was really an interlocutor (capable of intelligent conversation), and so that we could not conclude that any conversation was possible. So far so Kripke, and I've already explained the problem here: if we can't say anything we can't articulate the paradox either, so there must be some other way of thinking about this.
Since we know that we can talk to one another, we don't need to worry about how, or why, our words make sense. In fact, we can only consider these questions if we already know that their answers can't 'legitimize' our use of language.
What this means is that we don't need to worry about whether we are all imperfect, accidental, Turing 'intelligences'. What counts as intelligence is not intrinsic to the machine, but captured, instead, in the language game in which any question about 'intelligence' can arise - and this neither can be, nor needs to be, modeled in some mechanistic way. It is the game within which formal modeling takes place, but which cannot, itself, be modeled (in its entirety).
So this aspect of the test's ambiguity cashes out nicely: what counts as an intelligent machine is a judgement we make within a game which itself provides the unarticulable context for our judgments of intelligence, and of rationality.
More on emergence and heuristics ...
Some more thoughtful thoughts on emergence:
Of course a general theory which drags in 'emergence' must be false, but we shouldn't leap to conclusions here. The important question is why we need that particular general theory, what it does for us. This can be a hard question to address, because addressing it requires us to decode some of our fundamental heuristics - many of which will have acquired an almost metaphysical status, so that relinquishing or revising them will have disturbing emotional consequences.
If we believe, for instance, that a rational science is only possible if the world can be reduced (at least in principle) to mechanism, then we will hang on to emergence to retain our rationality. (The thought that this, itself, may be an irrational strategy, will be almost too terrifying to entertain.)
Also, spooky alternatives to emergence simply don't explain anything. The ghost in the machine is just another place-holder for ignorance, or evidence of error - perhaps about the kind of machine, or about what can be done with mechanism, but error nonetheless.
The way forward is not via some more comforting metaphysics, however. (All metaphysical accounts will also be place-holders, if something like what I'm saying turns out to be correct.)
The way forward is to start with the irreducibles of account giving, which include the possibility of accounts and of the givers of accounts.