impossible
Post date: 05-Mar-2012 01:40:03
Arguments have been presented to the effect that the creation of a thinking machine is not just very difficult, but inherently impossible. It seems prudent to consider at least one of these arguments lest we embark on an impossible quest. Roger Penrose has presented a lengthy argument to this effect (Penrose 1989 & 1994), which was heralded as “... the most powerful attack yet written on strong AI” (Gardner in the foreword to Penrose 1989). Proponents of strong Artificial Intelligence broadly hold that all thinking is computation.
The first part of Penrose's argument is constructed using a diagonal technique due to Gödel. As a first step, Penrose defines an algorithm for a Turing machine (the P.algorithm, say) that will accept any Turing machine algorithm as input. If the P.algorithm halts, that is a reliable indication that the input algorithm will not halt. An algorithm whose results are reliable in this way is said to be “sound”. Penrose then supplies the P.algorithm as input to the P.algorithm itself. In this situation, if the P.algorithm were to halt we would have a reliable indication that the P.algorithm does not halt – a contradiction. Hence we must conclude that the P.algorithm does not halt. This is acceptable because the P.algorithm says nothing when it does not halt; it is only a reliable indicator when it does halt.
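For readers who like to see the diagonal step spelled out, here is a rough Python sketch of it. The names (p_algorithm, p_source) and the use of the inspect module are my own illustrative choices, not Penrose's notation, and the detector's body is deliberately left unimplementable, since that is precisely what the argument turns on.

```python
import inspect

def p_algorithm(program_source: str) -> None:
    """Hypothetical: halts (i.e. returns) only when it can soundly report that
    the program given by program_source will never halt; otherwise it runs on
    forever. No real body can satisfy this specification."""
    raise NotImplementedError("a sound, all-encompassing detector cannot exist")

# The diagonal move: hand the detector its own source text.
p_source = inspect.getsource(p_algorithm)

# If p_algorithm(p_source) were to halt, that would be a sound report that
# p_algorithm(p_source) never halts: a contradiction. So it must never halt
# on its own source. We can see this from the outside, but the algorithm
# itself can never report it.
# p_algorithm(p_source)   # never returns, by the reasoning above
```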
As the second step of the argument, Penrose imagines that the P.algorithm contains every reliable technique that any mathematician, past or future, might convert into an algorithm and employ to indicate that a Turing machine algorithm will not halt.
The crux of Penrose's argument is this: by concluding that the P.algorithm does not halt when applied to itself, a mathematician is able to indicate something that the P.algorithm was unable to. This is despite the fact that the P.algorithm contains all algorithms available to mathematicians. A contradiction. From here, Penrose draws the “inescapable” conclusion that:
“Human mathematicians are not using a knowably sound
algorithm in order to ascertain mathematical truth.”
Penrose 1994
Penrose goes on from here to conclude that:
“...there is something essential in human understanding that
is not possible to simulate by any computational means.”
Penrose 1994
However, Penrose's argument is difficult to accept. Consider the following alternative argument built along similar lines. Suppose that Professor Penrose himself (P.Penrose, say) is given the task of examining Turing machine programs and stating when they will not halt. P.Penrose is at liberty to employ the full measure of his insight, or that of any other colleague, and is provided with a (very rare) addendum containing all past and future mathematical proof techniques. P.Penrose must work continuously on each submitted Turing program until he can reliably state that a Turing machine running the submitted program will not halt, and must then promptly advise the submitter accordingly. Furthermore, P.Penrose is not allowed to deliver any other result to the submitter.
A Turing machine is then programmed to do four things in sequence (a rough code sketch follows the list):
1. Send the following question to P.Penrose: “Will a Turing machine with the programming (attached) of the Turing machine submitting this question not halt?”
2. Print out the following statement: “If P.Penrose finds that a Turing machine with this programming will not halt then this Turing machine will halt. This is a contradiction, therefore a Turing machine with this programming will not halt.”
3. Print out the natural numbers in an ascending sequence beginning from 1 until a response is received from P.Penrose.
4. Halt.
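The four steps can be sketched in Python as follows. This is my own loose rendering, not an actual Turing machine table; ask_p_penrose and response_received are hypothetical stand-ins for the channel to P.Penrose.

```python
import inspect
import itertools

def ask_p_penrose(question: str) -> None:
    """Hypothetical: submits a question to P.Penrose. He replies only when he
    can reliably state that the submitted machine will not halt."""
    print("SUBMITTED TO P.PENROSE:\n" + question)

def response_received() -> bool:
    """Hypothetical: polls for P.Penrose's reply."""
    return False  # in this sketch the reply never arrives

def machine() -> None:
    my_programming = inspect.getsource(machine)
    # Step 1: submit the question, attaching this machine's own programming.
    ask_p_penrose("Will a Turing machine with the programming (attached) of the "
                  "Turing machine submitting this question not halt?\n" + my_programming)
    # Step 2: print the pre-emptive contradiction.
    print("If P.Penrose finds that a Turing machine with this programming will not "
          "halt then this Turing machine will halt. This is a contradiction, "
          "therefore a Turing machine with this programming will not halt.")
    # Step 3: print the natural numbers from 1 upward until a response arrives.
    for n in itertools.count(1):
        print(n)
        if response_received():
            break
    # Step 4: halt.

# machine()  # not called here: with the stand-in never replying, step 3 would count forever
```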
Should we now follow Penrose's argument and conclude that there is something essential in the operation of a Turing machine that cannot be simulated by P.Penrose? I think not.
In Penrose's argument, the human mathematician is always able to indicate something extra because the mathematician has been placed in a superior position. The mathematician has the privilege of considering the operation of the Turing machine from the “outside”, and this is withheld from the machine.
Computers exist on an exclusive diet of symbols. Humans write computer programs which define a set of parameters and an algorithm by which to manipulate the values supplied for each parameter. The human 'understanding', above and beyond the symbol manipulation, lies in the validity of the algorithmic relationship between the particular parameters of the function. Consider the familiar formula f=ma from Newtonian physics, which relates force, mass and acceleration. This formula represents a simple algorithm with three parameters. The 'understanding' of the algorithm is in the pragmatic reality that manipulating mass and acceleration 'just so' will result in something useful. Performing the identical manipulation with mass and temperature simply is not useful. The possibility of 'understanding' is withheld from a symbol manipulator, be it an intelligent person, a million transistors, or Babbage's difference engine; not because of its internal components, but because of the role it has been assigned.
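To make that concrete, here is a trivial Python illustration of my own: the machine's manipulation is literally identical in both cases, and only we, from the outside, know that one pairing of symbols is physically meaningful and the other is not.

```python
def product(x: float, y: float) -> float:
    """Multiplies two numbers; the machine neither knows nor cares what they denote."""
    return x * y

force = product(2.0, 9.8)        # mass [kg] * acceleration [m/s^2]: a useful force of 19.6 N
nonsense = product(2.0, 298.15)  # mass [kg] * temperature [K]: the same manipulation, physically meaningless
print(force, nonsense)
```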
Consider another perspective on Turing machines: a Turing machine is intended to halt when it arrives at an answer, while life is characterized by its continuance. When life halts, it takes its consciousness and its intelligence with it. Turing machines appear superficially to be dynamic in their operation. However, the Turing machine is completely isolated from its environment for the entire period between the start and the stop, at which point it delivers a static answer. It matters naught whether the time is long or short, or whether the machine is fast or slow in its operation; the end result is, by definition, static. Intelligence is a dynamic phenomenon which is useful until it halts, while a Turing machine is a device which is useful when it halts.
Not impossible.
Penrose, R. (1989). The Emperor's New Mind. Vintage.
Penrose, R. (1994). Shadows of the Mind. Oxford University Press, Oxford.
Turing, A. M. (1936). On Computable Numbers, with an Application to the Entscheidungsproblem. Proceedings of the London Mathematical Society.