Universal Turing machines have come to represent the definition of a computation and underpin the most basic architecture of modern computers.

The Turing machine was developed in response to the Entscheidungsproblem, posed by David Hilbert in 1928, which asked whether there exists a general algorithmic procedure for deciding the truth of mathematical statements.

Universal Turing machines manipulate an ordered list of symbols which represents some 'problem' together with the algorithm required to solve it. If all goes well, the machine will halt, leaving an ordered list of symbols which represents an 'answer' to the 'problem'. The manipulations are wholly deterministic.
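The deterministic, rule-driven symbol manipulation described above can be sketched in a few lines. This is a minimal illustration, not Turing's original formulation; the rule table, blank symbol, and step bound are all choices made for the sketch.

```python
def run(rules, tape, state="start", head=0, max_steps=1000):
    """Apply a finite rule table deterministically; return the tape if we halt."""
    tape = dict(enumerate(tape))            # sparse tape of symbols
    for _ in range(max_steps):
        if state == "halt":
            return [tape[i] for i in sorted(tape) if tape[i] != "_"]
        symbol = tape.get(head, "_")        # "_" stands for the blank symbol
        # Each rule maps (state, symbol) -> (new state, symbol to write, move)
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return None                             # did not halt within the bound

# Example machine: flip every bit, then halt at the first blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt",  "_", "R"),
}
print(run(flip, "1011"))  # prints ['0', '1', '0', '0']
```

Every step is fixed by the current state and the symbol under the head; nothing outside the tape and the rule table ever enters the computation.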

The modern computer serves as a naive popular analogy for what brains do. Artificial Intelligence research employs computers in a far more enlightened manner. However, some aspects of the underlying Turing machine model cause me disquiet.

**Firstly**, a Turing machine is intended to halt when it arrives at an answer, while life is characterised by its continuance. When life halts it takes its consciousness and its intelligence with it. Turing machines appear superficially to be dynamic in their operation. However, the Turing machine is completely isolated from the environment for the entire period between the start and the stop, at which time it delivers a static answer. It matters nought whether the time is long or short, or the machine fast or slow in its operation; the end result is, by definition, static. Intelligence is a dynamic phenomenon which is __useful until it halts__, while a Turing machine is a device which is __useful when it halts__. These two seem poorly matched. This dichotomy is perhaps simply a different perspective on the distinction drawn by Dennett between the tenacious Cartesian Theatre model and the Multiple Drafts model.
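The contrast drawn above can be made concrete in code. The names here (`batch_answer`, `reactive`) are illustrative, a sketch of the distinction rather than a formal model of interaction machines: one computation is useful *when* it halts, the other is useful *until* it halts.

```python
def batch_answer(problem):
    # Turing-style: isolated from the environment between start and stop;
    # all of the value lies in the final, static result.
    return sum(problem)

def reactive():
    # Interactive-style: a generator that couples to its environment at
    # every step, consuming a stimulus and emitting a response, indefinitely.
    total = 0
    while True:
        stimulus = yield total   # the value lies in the ongoing exchange
        total += stimulus

print(batch_answer([1, 2, 3]))   # prints 6 -- meaningful only at the halt

agent = reactive()
next(agent)                      # prime the generator
for stimulus in [1, 2, 3]:
    print(agent.send(stimulus))  # prints 1, then 3, then 6 -- meaningful at every step
```

If the reactive process halts, its usefulness ends with it; if the batch process never halts, it was never useful at all.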

**Secondly**, a Turing machine exists on an exclusive diet of symbols. In a more familiar setting, a function in a computer programming language is defined by the human programmer as a set of parameters and an algorithm. The human understanding of the 'problem' is in the validity of the algorithmic relationship __between__ the particular parameters of the function. Consider the familiar formula F = ma in Newtonian physics, which relates force, mass and acceleration. The 'understanding' is in the fact that if mass and acceleration are manipulated 'just so' then the result is something useful. Performing the identical manipulation with mass and temperature just isn't useful. The possibility for 'understanding' is simply withheld from the symbol manipulator, be it a person or a machine. The human supplying the parameters to a function (and deriving 'meaning' from the result) is able to appreciate the analogy between one force-mass-acceleration tuple and another in order to use the result.
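A minimal sketch of this point: to the machine, the multiplication behind F = ma is just multiplication over symbols. The function below (an illustrative name, not any standard API) cannot distinguish a physically meaningful pairing of arguments from a meaningless one; that distinction lives entirely with the human.

```python
def product(x, y):
    # The machine sees only symbols and a rule; it never sees meaning.
    return x * y

mass = 2.0             # kg
acceleration = 9.8     # m/s^2
temperature = 310.0    # K

force = product(mass, acceleration)    # 19.6 -- newtons, useful to the human
nonsense = product(mass, temperature)  # 620.0 -- "kg*K", the identical
                                       # manipulation, with no physical meaning
```

Both calls execute the same symbol manipulation, and the machine performs each with equal indifference; only the human reader knows that one result is a force and the other is nothing at all.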

**Thirdly**, Gödel's incompleteness result effectively destroyed Hilbert's programme, but this has resulted in neither the demise of the Turing machine nor the emergence of a successor which embraces the troublesome self-reference.

"*The idea that Turing machines do not capture all computation, and that interaction machines are more expressive, seems to fly in the face of accepted dogma, hindering its acceptance within the theory community.*"

**Goldin** and **Wegner** in "The Church-Turing Thesis: Breaking the Myth"

"*The logician's approach to the philosophy of mathematics only covers a small fraction of the territory of modern mathematics – and a rather sterile one at that.*"

"*...Gödel dealt formalism a devastating blow.*" p. 137.

"*...the mental procedures whereby mathematicians arrive at their judgements of truth are not simply rooted in the procedures of some specific formal system.*"