We use 'linguistic tokens' in two distinguishable ways:
(1) In accordance with pre-existing grammatical and semantic rules.
(2) Experimentally - by exploring new possible rules and uses.
It is possible to imagine building a definite model of (1) for some particular state of our language use, but it is far from clear that modeling (2) is possible.
It is, in other words, possible to imagine programming (1), but not (2).
Human intelligence does both, and while there might be some sense in which a sophisticated self-learning algorithm might also do both, our grounds for thinking it might do so in a 'human-like' way are very weak. We don't have the appropriate kind of understanding of human intelligence to build an algorithm that could be guaranteed to behave like a human being, one that would never diverge in some monstrous way.
Nature, after all, struggles to avoid monstrosity; and nature works more slowly, and has fewer combinatorial options available to it, than our AI experimenters do.
Turing avoided this difficulty by putting 'human intelligence' on both sides of the equation - we can't specify it, but, as human beings, we know it when we see it.
Until we don't ...