A machine is not just an object - or even a set of 'processes' (as in a computer program). For it to be a machine it must, in addition, be accompanied by a promise. For human-made machines the promise comes from the constructor: 'I promise this will fulfil its function'. For 'natural' machines the promise comes from the theoriser - the person articulating the hypothesis that the object is a machine - and it is a promise that this person knows how the machine works, knows what its 'function' is.
The 'stability' of machines is therefore a projection of our belief in semantic stability. We can only make promises if we know what we are saying, and we can only know what we are saying if, to some minimal extent, words mean the 'same' from one occasion of use to another.
This is one of the reasons why the 'fact' that we can construct machines and the fact that we can talk about the world are really the same fact.
Most machines are, of course, a bit 'unreliable'. Their accompanying promises are not always fulfilled. We can only make adjudications about this in a working language. (Collapsing bridges - which appear in many 'reality' diatribes - can only be identified through this kind of adjudication.)
What this means is that semantic stability underpins the identification and appraisal of 'machines'. Which, in turn, means that no 'mechanistic' account can be given of semantic stability.
(We should know this, of course, from the Goodman/Kripke paradox: no finite record of past usage can, by itself, fix what a word means on the next occasion of use.)
We certainly can't intelligibly articulate the speculation that semantic stability is some kind of 'illusion' either ... nor even the related speculation that we might wonder about this privately.
So much, by the way, for mechanistic determinism. Which seems like a fairly trivial corollary, under the circumstances ...