
Saturday, August 17, 2019

Rules, Meaning and Machines

I did a search of this blog just now, looking for 'machine', and found that I've posted a lot more than I thought on the topic. I want to shine a different kind of light on some of the machine stuff by making a suggestion. Or perhaps focusing on it, if I've made it before...

The suggestion is that these two 'facts' are actually equivalent:

(A) We can talk about the world in the way that we do.
(B) We can construct machines.

If it isn't immediately obvious that these are equivalent, think about the following:

(1) It is always possible to re-state a 'natural' law as a kind of grammatical principle - an instruction about how to talk. If we talk in a way which radically ignores a 'true' natural law, we risk becoming unintelligible. By 'always' I mean that it is not possible to articulate a natural law which cannot be re-stated as a grammatical principle.

I am going to suggest that the 'grammatical principle' view of natural laws is the view that is the most epistemologically relevant.

(What I mean by a 'grammatical principle' here is something Wittgensteinian. Something like a 'rule about how to talk', taken very generally.)

(2) Meaning and prediction are strongly linked. Knowing what someone means allows us to make predictions about their behaviour (whether represented semantically or physically). I don't mean (obviously) that we can exactly predict their behaviour, just that if we couldn't draw some conclusions about it from what they appeared to be saying, then we couldn't be said to understand what they actually were saying.

(3) When we make predictions based on 'natural laws' we can be thought of as making predictions based on 'grammatical' rules. We are saying something like: 'If we understand each other now, then we also share some expectations'. If we deny the expectations, we undermine our present understanding. We discover that we were not as intelligible to one another as we thought.

Remember that there is no scientifically or philosophically relevant test which can distinguish between (i) the world having certain characteristics and (ii) our being able to talk as though it has those characteristics (both taken completely literally and generally).

It seems unexceptionable to say that physical laws permit the construction of machines.  What the above outline indicates is that we must conclude from this that grammatical laws (of the right kind) entail the possibility of constructing machines.  When we say 'if you want Y to happen, you should do X', we may as well be saying 'If your behaviour falls under description "X", then the statement "Y" will come to be true about the world'.

These hypotheticals can be enormously complicated, of course - we can construct very complicated machines. But encapsulated in the statements of these hypotheticals are the conceptual machines that we expect our 'real' machines to instantiate. If we are forced to concede that the 'real' machine does not behave according to the blueprint, then the hypothetical is, with qualification, falsified. We might experiment with the bounds of intelligibility here, but only to repair them - not to abandon them.
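The spec-versus-implementation relationship in software offers a loose analogy for this. A minimal Python sketch (all names hypothetical, the 'doubler' entirely illustrative): the blueprint is a statement of the hypothetical 'if you do X, Y will happen', and the built device is judged against it, never the other way around.

```python
# A loose software analogy for the 'conceptual machine' vs the 'real' machine.
# The blueprint encapsulates the hypothetical 'if input is x, output will be 2x';
# a built device either conforms to it or stands falsified *as* that machine.

def blueprint_doubler(x: int) -> int:
    """The conceptual machine: the rule the device is supposed to follow."""
    return 2 * x

class RealDoubler:
    """A 'real' machine: its behaviour is just whatever it actually does."""
    def __init__(self, fault: bool = False):
        self.fault = fault

    def run(self, x: int) -> int:
        # A faulty device quietly drifts from the blueprint.
        return 2 * x + (1 if self.fault else 0)

def conforms(device: RealDoubler, inputs) -> bool:
    # The blueprint supplies the normative standard against which the
    # device's performance is judged.
    return all(device.run(x) == blueprint_doubler(x) for x in inputs)

print(conforms(RealDoubler(), range(10)))            # a working device: True
print(conforms(RealDoubler(fault=True), range(10)))  # a 'broken' device: False
```

When the check fails, we do not revise the blueprint to match the device; we say the device is broken, which is just the asymmetry the paragraph above describes.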

This process of law discovery - discovery of what it is intelligible to say about the world - and successful machine construction has left us with the 'machine world' metaphor. The metaphor is reinforced by the related 'discovery' that any intelligible description of the world gives it machine-like characteristics. It's hard to step back and see that it is the intelligibility that carries the machine metaphor into 'the world'.

No matter how successful our project of rendering parts of the world intelligible becomes, it gives us no grounds whatsoever for believing that we can render it completely intelligible. This is a corollary of the Goodman/Kripke paradox. The Open Question paradoxes generated by the hypothesis that the world is completely intelligible tell us that not only do we not have grounds for believing that it is, but that there is a sense in which it cannot be.

It's worth saying, here, that an intelligible 'mystery' implies its own sets of mechanisms, so this perspective creates no space for 'supernatural' processes.

It also doesn't permit contradiction or radical incongruence - the scope of intelligible theory does not include incoherence. (By definition).

The boundaries of intelligible theorising cannot be directly drawn, either. We may be able to show that there are incomprehensible numbers, but we cannot do arithmetic with them.

What it does mean, though, is that our effective judgements about these things are, in a sense, parochial. 'Intelligible' means something like this - 'can, in principle, be made intelligible to us now'. Some of our descendants may participate in interactions that we would find incomprehensible, and be able to do things that would seem magical to us. We might be tempted to say 'they speak a language we cannot understand', but this would only make sense if we had some hope of learning it. If we didn't, we'd have no idea what they were doing, or even whether they were properly human.

Any useful conception we might form of a machine world has this same parochial quality. This means that it cannot be 'absolutely' generalisable (in any meaningful way).

To suppose that the 'whole' world can be rendered intelligible in terms of law-like characteristics is to suppose that our rendering it intelligible itself depends on the very characteristics we are investigating. And this problem is metaphysically agnostic - no additional substances or processes can dissolve it.

But our parochial concerns do have some general consequences.

One is that however our language practices might change, we will never talk in a way which renders the way we presently talk completely unintelligible. It must always be at least a limiting case. This isn't because what we say now somehow binds the future - it's just that we can only count something as a language if it can be rendered intelligible in the language within which we make the judgement: the language we are speaking now.

For this to be the case, we do not need to give some account of 'the language we are using now' much beyond pointing to it. If we want to talk about talking, we are pretty much committed to some conception of what we are talking with.

A slightly more disturbing consequence of this approach is to do with the relationship between conceptual machines and what we might call 'real' machines - the machines we build or find in the world. It's easy to see that the conceptual machine will always provide the normative standard against which the performance of the 'real' machine can be judged. This can't work the other way around - the 'real' mechanism, the constructed mechanism, cannot work as a normative standard. The Kripke/Goodman paradox makes this formally impossible, and the possibility of mechanical failures makes it impractical.
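Kripke's 'plus/quus' example makes the formal point vivid: any finite record of a machine's behaviour is consistent with incompatible rules, so the record alone cannot fix which rule the machine 'really' follows. A minimal Python sketch (the cutoff of 57 follows Kripke's presentation; the rest is illustrative):

```python
# Every finite record of behaviour is consistent with incompatible rules,
# so a built machine's track record cannot serve as a normative standard
# for what the rule *is*.

def plus(x: int, y: int) -> int:
    return x + y

def quus(x: int, y: int) -> int:
    # Agrees with plus on all 'small' cases, diverges beyond them.
    return x + y if x < 57 and y < 57 else 5

# The finite evidence gathered so far...
observed = [(x, y) for x in range(57) for y in range(57)]

# ...cannot distinguish the two rules:
assert all(plus(x, y) == quus(x, y) for x, y in observed)

# Yet they disagree on the very next case:
print(plus(57, 57), quus(57, 57))  # 114 versus 5
```

Nothing in the observed behaviour settles whether the machine was 'adding' or 'quadding'; the standard has to come from somewhere other than the constructed mechanism itself.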

This has an obvious consequence - that if we think of a person as a kind of biological machine, we cannot also think of them as providing a reliable normative standard of language use - of meaning. On the other hand, we must attribute something like this to someone we wish to count as a competent interlocutor.

A kind of way around this comes via the recognition that being a competent interlocutor includes being open to correction - being able to learn, when one has made a mistake. The normative standard is provided by the whole language community, in some sense - it is within the shared language that the practical issue of normative standards is resolved. And this doesn't give rise to any of the formal problems hinted at above, because the shared language is a context of adjudication, rather than a machine whose performance we might adjudicate on. We can think of a person, an interlocutor, as a kind of correctable machine; but we cannot think of the whole of the linguistic community in this way.
