If you lived in a 'Matrix World', and another person in that world said to you, "Do you know that we live in a Matrix World?", how would you be able to work out what they meant?
If the world you appeared to be in was a perfect simulacrum of a "Real" world, there would, ipso facto, be no test that could reveal this. If 'We live in a Matrix World' and 'We do not live in a Matrix World' cannot be distinguished on the basis of some kind of validity criteria, then they also cannot be shown to be semantically distinct. We resist this idea in fictional narrative of course, because there always is a semantic distinction - even if it rests on criteria external to the narrative.
(This may seem to have a 'positivist' flavour, but I am not making any statement about the kind of validity criteria that would be relevant - I am just saying that there should be some.)
In the film, of course, and in other science fiction explorations of this theme, there is a 'way out' of the Matrix World. There is a way to test the simulacrum. But there is no way to test whether the proposed test is a cunning device of the simulacrum, whether the red pill simply triggers another set of 'reality' programs, designed to deal with certain kinds of inconvenient metaphysical speculation.
There are plenty of cases, of course, of metaphysical diseases turning out, after all, to be diagnostically distinguishable - see any physics textbook that refers to Democritus. In these cases, we often find that the archaeological interpretations turn on disputable semantic equivalences. Would Democritus recognise modern atomic theory as a descendant of his speculations? (The answer is, almost certainly, "no" ...)
I don't want to get lost in the semantic woods, however, because there may be a more interesting aspect of the 'Matrix World' metaphysic that has some very deep and general consequences.
A computer is a machine which can manipulate the physical representations of logical symbols in a way which mimics the syntactical rules which determine the use of these symbols in a logical script. In other words, it is a machine in which purely mechanical interactions are organised so that they mimic purely syntactical ones.
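To make this concrete, here is a minimal sketch (in Python, purely for illustration) of such a machine: Hofstadter's MIU system, whose four rewrite rules are applied to strings of marks with no reference whatever to what the marks might mean.

```python
# The MIU system (Hofstadter): four purely syntactic rewrite rules.
# The machine manipulates marks; it never consults any 'meaning'.
def successors(s):
    out = []
    if s.endswith("I"):
        out.append(s + "U")                    # rule 1: xI  -> xIU
    if s.startswith("M"):
        out.append(s + s[1:])                  # rule 2: Mx  -> Mxx
    for i in range(len(s) - 2):
        if s[i:i + 3] == "III":
            out.append(s[:i] + "U" + s[i + 3:])  # rule 3: III -> U
    for i in range(len(s) - 1):
        if s[i:i + 2] == "UU":
            out.append(s[:i] + s[i + 2:])      # rule 4: UU  -> (deleted)
    return out

print(successors("MI"))  # -> ['MIU', 'MII']
```

Nothing in `successors` knows what 'M', 'I' or 'U' stand for; the mechanical interactions have simply been organised to mirror the syntactical ones.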
It can do this so well that it can appear to have a kind of 'semantic awareness'. (Soon, we will have machines which will appear to make this claim about themselves, just as we do.)
If we set aside the possibility of demonstrable emancipation (a 'red pill' exit), then we must imagine a possibility in which everything is simulated (including our discovery that we have real biological bodies that are being kept in a giant incubator.)
I'm going to duck a lot of complicated issues here by making a proposal: that the central thing that needs to be simulated, in such a world, is our conviction that we are talking to one another about a real world that we all inhabit. In other words, our experience of ourselves as language users. This may seem to leave out a lot of private phenomenology, but then so do our actual linguistic exchanges - our beetles stay in their boxes. For the purposes of exploring some of the issues raised by the Matrix World hypothesis, I'm going to think of the language we are using as a kind of script that could, in principle, be generated by a computer following syntactical rules.
I might as well say, here, with a nod to Leibniz, that all that is really needed is a simulation that convinces me that I live in a world with other language users, and that I am having this conversation with (one or two of) them. I'll pass over this possibility, since it generates the same issues as a more general (and perhaps more friendly) simulation would.
I'm also going to pass over all of the problems associated with simulating 'consciousness'. I have two reasons for doing this - one is that we don't yet have a clear enough understanding of what we mean by 'consciousness' to be able to use the word in this context, and the other is that I plan to drop 'meaning' altogether for a while, and consider the model in purely syntactical terms. This renders 'consciousness' as a syntactical construct - a string of characters, if you like - whose use is determined by the way the Matrix-support machine operates. I don't think I'm doing any harm to my central argument here, although - again - I realise I'm walking past some doors that look open ...
So: let's think about the Matrix machine, which I will now represent as an 'adequate' syntax manipulator, because it feeds me strings of terms, and responds to the terms I feed it, in a way that convinces me that I am having a conversation with someone (as in John Searle's 'Chinese Room' thought experiment). It doesn't matter here that I am 'part' of the simulation (this is one of the meaning ambiguities of 'we live in a Matrix World'). What matters is that there is something going on that works as a convincing simulated conversation. This must be possible, since we have to have some vehicle for our speculations - a language in which we can have philosophical discussions about Matrix Worlds, among other things.
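What an 'adequate' syntax manipulator looks like from the inside can be sketched crudely, in the spirit of Weizenbaum's ELIZA rather than Searle's room. The patterns and replies below are hypothetical, invented for this illustration; the point is only that the replies are generated by string transformation alone.

```python
import re

# A Chinese-Room-style responder: replies are produced by pattern rules
# applied to the input string. Nothing here 'understands' anything.
RULES = [
    (r".*I am (.*)", "How long have you been {0}?"),
    (r".*we live in (.*)", "What makes you say we live in {0}?"),
    (r".*", "Please go on."),  # fallback when no pattern applies
]

def respond(text):
    for pattern, template in RULES:
        m = re.match(pattern, text, re.IGNORECASE)
        if m:
            return template.format(*m.groups())

print(respond("I think we live in a Matrix World"))
# -> What makes you say we live in a Matrix World?
```

Scaled up by many orders of magnitude, this is the kind of thing the Matrix machine must, at minimum, be doing.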
Obviously, a minimum requirement for implementing a language simulation is a syntax simulator. And this is all that can be expected of a Matrix computer. Since such a machine 'contains' the whole world, even a simplistic reference semantics would be reproduced by the machine's internal processes, and so reduced to syntax.
Now it doesn't really matter very much what kind of ontological status we give the underlying machine - whether we think of it as a physical machine (with causal interactions), or as some kind of abstraction (per Max Tegmark), or as comprising some 'non-physical' support substance (à la Descartes). In all cases, we have one kind of interaction (causal, formal, or spectral) being organised to mimic (to be isomorphic to) another, strictly syntactical, one. We have a 'something in the world' which is able to convince me that I am having an intelligible conversation with someone, when the only 'evidence' for this conversation is a string of physical symbol representations, a string of symbols (as an abstraction), or a more mysterious 'mechanism of the mind' that directly produces this conviction in me.
Another open door I have to acknowledge here is that all of these routes to explanation generate open question paradoxes. I've talked about this at some length already in this blog. No substrate can guarantee the truth of any statement it generates, since our articulation of the substrate as a reliable truth-generator is a computation in the language we wish it to validate. The substrate simply becomes part of a paradox-generating 'theory of truth'. I am walking past this door for the moment because the open question paradox it generates is a semantic paradox. It depends upon a semantic concept of truth.
The real power of Gödel's paradoxes is that they are syntactical paradoxes. They are generated by the capacity of an entirely formal system (in so far as such a thing can be conceived) to model its own theorem determination processes. They appear, for example, when we use arithmetic to determine which of two sets a Gödel number should be allocated to, when the number encodes the arithmetical mechanism for allocating itself to one of the sets in such a way as to frustrate any such allocation. (This is why it comes out as a version of Russell's paradox).
Which is also, I guess, why Gödel's incompleteness theorems and Alan Turing's computability results are equivalent. My Matrix-generating Turing machine, which passes the 'Turing test' (for intelligibility, perhaps, rather than intelligence), must also generate paradoxes when it is set to compute certain characteristics of its own behaviour, because it must be possible to represent these computations in the 'language' that the Matrix denizens speak. This is at the centre of what I'm trying to do, however coarsely.
Nothing can count as a computation and yet be impossible (in principle) to represent in 'our' language (the language of the conversation I am having with you). We know this from the general nature of a Turing machine, which can emulate any other computing machine, and so emulate any set of syntactical rules. Since we can model a Turing machine using our language, we can also (in principle) model any set of syntactical rules in our language - including the rules of the 'language/world' composite that the Matrix machine must instantiate.
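A minimal Turing-machine simulator makes the point about rule-following concrete: any table of syntactic rules can be run mechanically by the same small harness. The FLIP table is a hypothetical two-state example, invented for this sketch.

```python
# A minimal Turing-machine simulator: a rule table maps
# (state, symbol) -> (symbol to write, head move, next state).
def run(rules, tape, state="q0", halt="H", max_steps=1000):
    cells, head = dict(enumerate(tape)), 0
    for _ in range(max_steps):
        if state == halt:
            break
        write, move, state = rules[(state, cells.get(head, "_"))]
        cells[head] = write
        head += {"R": 1, "L": -1}[move]
    return "".join(cells[i] for i in sorted(cells))

# A hypothetical machine that flips bits and halts on a blank ('_'):
FLIP = {
    ("q0", "0"): ("1", "R", "q0"),
    ("q0", "1"): ("0", "R", "q0"),
    ("q0", "_"): ("_", "R", "H"),
}

print(run(FLIP, "0110"))  # -> 1001_
```

The harness is the same whatever rule table we feed it; that interchangeability is the 'general nature' of the Turing machine that the argument relies on.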
If the Matrix machine must (as a minimum) be able to reproduce the syntax of a language/world combination in a way which is semantically convincing to the 'language user' components (us) of its simulated world (to be a bit brutal with the metaphysics), it must do this by following machine rules which mimic syntactical structures which can be represented and reproduced within the language of these simulation components (us, in other words). This is the only way that its mechanisms could count as mechanisms of a Matrix machine. We may 'explain' our own existence by referring to a mysterious deity (perhaps having a dispute about whether this is an explanation), but we cannot call the Matrix machine (essentially) 'mysterious' without immediately conceding that it is not a machine.
Where this leads us is to something like this: the Matrix hypothesis, in order to be intelligible, requires the machine which generates the simulacrum to work according to intelligible rules; in fact to syntactical rules which can, in principle, be formulated in human language - perhaps in many hundreds of billions of lines of computer code, explicitly mapped to described physical states of the machine, plus a description of its physical processes that demonstrates its reliability as a syntax emulator. And this must be reconciled with the fact that the central illusion that the Matrix World maintains is exactly that we have a language in which we can speculate about things like whether we live in a Matrix World. Remember that it must be true that we can speculate thus, because that's what we're doing now.
This might be a long-winded way of saying something like this:
The hypothesis that we live in a Matrix world contains the hypothesis that this world is supported by a humanly (in principle) intelligible machine - i.e. a machine whose operations could (again, in principle) be described in human language. The exchange of tokens which comprises this language, and is the syntactical substrate of our semantic hypotheses, is determined by the operations of the machine. These operations, on the other hand, are only intelligible if they can be represented within the language we 'speak' by exchanging these tokens ...
Where does this leave us?
When I first started to think about this, I thought I would be able to show that the Matrix World hypothesis, and by extension, all other 'metaphysical substrates' would turn out to be not only semantically incoherent (as various forms of the open question paradox show them to be) but also syntactically incoherent, because the rules of the matrix machine would have to be 'representable' in the physical language tokens that its inhabitants exchanged when they spoke to one another - including when they said things like 'We live in Matrix World ...'.
But now I'm stuck, and it's because there isn't a method of producing this representation itself in purely syntactical terms. There is no equivalent of the 'Gödelisation' of arithmetic for language in general, because the Gödelisation of arithmetic depends upon a semantic framework. While we can formulate the rules of logic in arithmetic in a way which allows us to calculate arithmetical isomorphs of logical theorems, I don't think we can represent the fact that we can do this in the logical or arithmetical scripts. And this is what would be required to show that the Matrix world hypothesis is not only semantically paradoxical, but also syntactically incoherent.
If anyone out there thinks this makes sense, and can either see a way forward or confirm my suspicions, please get in touch.