How modern computers came to be, and what's left?
“A computer would deserve to be called intelligent if it could deceive a human into believing that it was human” - Alan Turing
Philosophy of mathematics
The geometry you did in high school is the work of one man: Euclid of Alexandria. Around 300 BC, he introduced five postulates and single-handedly built the field of geometry on them. His work was so elegant that it took us twenty centuries to realize that tweaking one of his postulates can lead to new ideas, what we now call the ‘non-Euclidean’ geometries. Following in the philosophical footsteps of Euclid, David Hilbert introduced a program intended to axiomatize not just the new geometries, but all of mathematics. It was certainly one of the hardest tasks anyone had ever undertaken.
What real maths looks like
The mathematicians working on the program needed to formalize the notion of ‘effectively calculable functions’ in order to describe what a ‘mathematical proof’ means. They had an intuitive sense that the method or procedure to calculate such functions had to be mechanically feasible. All the naturally occurring functions: polynomials, rationals, trigonometric and exponential functions, would fall into this class, because if there is a natural phenomenon related to a function, one could devise a calculating machine based on its physics. Besides these, there could be numerous other functions, some even unknown to mankind. Encapsulating all of them is a daunting task. The problem is essentially this:
If you give me a function f, I can probably tell you whether f should lie in the class S or not.
Give me a concise definition of the class S.
Such problems are what mathematicians encounter in ‘real life’ mathematics, which gives you a lot of freedom to exercise your creativity. It is, in fact, the exact opposite of the ‘university’ way of doing mathematics:
Here is the definition of class S.
Will you tell me whether the function f falls in the class S?
I felt proud when I proved sin(x) to be continuous at x = 0 using the ϵ-δ definition of continuity. But the real genius was the person who came up with that definition.
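For completeness, here is a one-line sketch of that ϵ-δ argument, relying on the standard inequality |sin x| ≤ |x|:

```latex
% Continuity of sin at x = 0: given epsilon, delta = epsilon works.
\[
\forall \varepsilon > 0,\ \text{choose}\ \delta = \varepsilon:\quad
|x - 0| < \delta \;\Longrightarrow\;
|\sin x - \sin 0| = |\sin x| \le |x| < \delta = \varepsilon .
\]
```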
Now there are probably two ways you could approach this problem:
Define a class S and claim the procedures that calculate functions in S to be ‘effective methods’
Construct an ultimate mechanical machine ‘M’ and claim functions calculable by M to be ‘effectively calculable’
Rise of two young men
A young Kurt Gödel preferred the first one. He formalized the definition of the class of general recursive functions: the smallest class of functions that would capture the intuitive notion of effectively calculable functions.
Another young man, Alan Turing, used the second approach to come up with a mathematical (but mechanically realizable) model of an ultimate machine that does nothing but manipulate symbols on a strip of tape according to a table of rules. The set of functions calculable by this machine, now called the Turing machine, is precisely the class of general recursive functions defined by Gödel.
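To make the ‘table of rules’ concrete, here is a minimal sketch of a Turing machine simulator in Python. The rule format and state names are my own invention, not Turing's original notation; a rule maps (state, symbol) to (symbol to write, head move, next state). Note that because the rule table is itself just data fed to the simulator, the simulator is, in miniature, the universal machine discussed below.

```python
# A minimal Turing machine simulator: the machine is nothing but a
# table of rules mapping (state, symbol) -> (write, move, next state).
def run(rules, tape, state="start", blank="_", max_steps=10_000):
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Example machine: unary successor — scan right past the 1s, then
# append one more 1 and halt.
succ = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
print(run(succ, "111"))  # -> "1111"
```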
The equivalence of these definitions suggests a mechanical upper limit to the notion of ‘computability’. Note that I am emphasizing ‘mechanical’ here, for some words are in order when the Turing machine goes quantum (that’s a story for another letter). Let’s get bold and define computability.
effectively calculable functions (informal notion) = general recursive functions (Gödel’s definition) = ‘computable functions’ (formal notion)
effective methods or procedures (informal notion) = Turing machines (Turing’s definition) = ‘algorithms’ (formal notion)
The Godel’s and Turing’s ideas act as the bridge between the informal and formal notion of effective calculability. Although this claim of computability has near-universal acceptance, it cannot be formally proven as the concept of effective calculability is only informally defined.
People have experimented with Turing machines, equipping them with multiple tapes, nondeterminism, and so on. They could only come up with machines that compute faster or use less memory; they could not compute more powerfully (i.e., compute more mathematical functions). Thus, the claim still stands, and will very likely keep standing.
Modern computers
The ‘Universal Turing machine’ is a variant of the original Turing machine that simulates an arbitrary Turing machine on arbitrary input. The universal machine essentially achieves this by reading both the description of the machine to be simulated as well as the input to that machine from its own tape.
The Universal Turing machine seems to compute all that is possible, so solving problems would be easier if people had such machines in their offices. Unfortunately, such machines are hardly feasible, and even if you could build one, programming it would be no easy task. In this context, programming means encoding a Turing machine onto the tape and feeding it to the Universal Turing machine.
Turing himself and many other authors investigated equivalent machines that could be brought into practice. Building upon the work of Turing, John von Neumann devised the concept of a stored-program machine. As far as computational capability goes, Turing machines and von Neumann machines are equivalent: either one can emulate the other. Modern digital computers are just sophisticated von Neumann machines. Charles Babbage is said to have anticipated the idea a century earlier.
So, if it has fathers at all, modern computer science has three of them.
Non-computability
The natural question to ask is: which functions are out of reach of mechanical computation? The canonical example is the following decision problem:
f(M, w) = Given a Turing machine M, will it halt on the input w?
This function, and every function to which it can be reduced, is uncomputable. That is to say, there is no algorithm that will tell you whether a given Turing machine will halt on an arbitrary input. You could posit a hypothetical oracle that answers the question, but again, there would be no recipe to construct it mechanically.
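The classic argument for this is diagonalization. The sketch below (with hypothetical names; the oracle itself, of course, cannot actually be implemented) imagines a ‘contrarian’ program that does the opposite of whatever a supposed halting oracle predicts about it, and checks both answers the oracle could give about the contrarian run on itself:

```python
# Suppose halts(program, input) existed. Build contrarian(p):
#   if halts(p, p) answers "halts", loop forever;
#   otherwise, return immediately.
# What can halts(contrarian, contrarian) answer? Check both options:
def contradicts(oracle_answer: bool) -> bool:
    # contrarian does the opposite of the oracle's prediction about itself
    contrarian_actually_halts = not oracle_answer
    return contrarian_actually_halts != oracle_answer

# Neither answer is consistent, so no such halts() can exist.
assert contradicts(True) and contradicts(False)
print("both possible oracle answers lead to contradiction")
```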
Are you Turing complete?
In general, when people talk about models that are close to Turing machines, they use the following terminology:
A model is Turing-complete if it is at least as powerful as Turing machines.
A model is Turing-equivalent if it is exactly as powerful as Turing machines.
An example of a Turing-complete system that is not Turing-equivalent is a Turing machine with access to an oracle for the halting problem. Turing equivalence or completeness does not necessarily mean that you can build a mechanical version of such a model. The physically implementable Turing-complete systems are Turing-equivalent, which supports our claim about computability.
More powerful than a Turing machine
Modern digital computers are, at one level of abstraction, von Neumann machines. However, the actual physical devices are built from electronic circuits. The circuit model, which consists of a multiset of universal gates such as {AND, NOR, NOT}, is Turing-equivalent and closer to real computers.
Furthermore, if you allow the circuits to be ‘infinite’, they can solve the halting problem. So infinite circuits are in some sense ‘too powerful’, but they are not realistic.
What else is Turing complete?
To show that something is Turing-complete, it is enough to show that it can simulate some Turing-complete system. For instance, an imperative language like C is Turing-complete because it has conditional branching (e.g., ‘if’ and ‘goto’ statements) and the ability to change an arbitrary amount of memory. If the limitation of finite memory is ignored, most modern programming languages are Turing-complete. Some pieces of software that are Turing-complete by accident, i.e. not by design, include Microsoft PowerPoint, Microsoft Excel, and the x86 MOV instruction.
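As an illustration of how little machinery suffices, here is a sketch of an interpreter for a Minsky counter machine, a model known to be Turing-complete that has nothing but increment and test-for-zero-with-decrement, i.e. conditional branching plus unbounded memory. The instruction names and encoding are my own choices for this sketch:

```python
# A tiny Minsky counter machine interpreter.
#   ("inc", r):            r += 1, fall through to the next instruction
#   ("jzdec", r, target):  if r == 0 jump to target, else r -= 1 and
#                          fall through
# The machine halts when the program counter runs off the end.
def run_counter_machine(program, registers, max_steps=100_000):
    regs = dict(registers)
    pc = 0
    for _ in range(max_steps):
        if pc >= len(program):
            return regs  # halted
        op = program[pc]
        if op[0] == "inc":
            regs[op[1]] = regs.get(op[1], 0) + 1
            pc += 1
        else:
            _, r, target = op
            if regs.get(r, 0) == 0:
                pc = target
            else:
                regs[r] -= 1
                pc += 1
    raise RuntimeError("step budget exhausted")

# Example program: drain A into B, i.e. B += A; A = 0.
# Register "Z" stays 0, so ("jzdec", "Z", 0) is an unconditional jump.
add = [("jzdec", "A", 3), ("inc", "B"), ("jzdec", "Z", 0)]
print(run_counter_machine(add, {"A": 2, "B": 3}))  # -> {'A': 0, 'B': 5}
```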
Practical computability
Turing machines are also helpful in establishing the notion of practical computability, i.e. the set of problems that are solvable with limited resources. The resource DTIME, the computation time of a deterministic Turing machine, is used to define complexity classes: sets of decision problems that can be solved using a certain amount of computation time. If a problem of input size n can be solved in O(f(n)) time, it lies in the complexity class DTIME(f(n)).
The well-known complexity class P comprises all of the problems which can be solved in a polynomial amount of DTIME.
A much larger class using deterministic time is EXPTIME, which contains all of the problems solvable using a deterministic machine in exponential time.
The complexity class NTIME(f(n)) is the set of decision problems that can be solved by a non-deterministic Turing machine which runs in time O(f(n)).
NP is the set of decision problems that can be solved in polynomial time by a nondeterministic Turing machine, or, alternatively, that have proofs verifiable in polynomial time by a deterministic Turing machine.
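The ‘verifiable in polynomial time’ part is easy to see concretely. In the sketch below (my own encoding: a formula is a list of clauses, a literal +i means variable i and -i its negation), checking a proposed satisfying assignment for a SAT formula takes one linear pass, even though finding such an assignment may be hard:

```python
# NP in action: a certificate for SAT (a truth assignment) can be
# verified in polynomial time by a deterministic machine.
def verify_sat(clauses, assignment):
    def lit_true(lit):
        value = assignment[abs(lit)]
        return value if lit > 0 else not value
    # every clause must contain at least one true literal
    return all(any(lit_true(l) for l in clause) for clause in clauses)

# (x1 or not x2 or x3) and (not x1 or x2 or x3)
formula = [[1, -2, 3], [-1, 2, 3]]
print(verify_sat(formula, {1: True, 2: True, 3: False}))  # -> True
```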
Million dollar question
Since we have already established that a simple Turing machine can simulate a nondeterministic Turing machine in totality (regardless of the resources used), isn’t it obvious that P = NP? I mean, you just add the resource constraint into the equation. Rephrasing the question:
Can a polynomially bounded Turing machine simulate the steps of a polynomially bounded non-deterministic Turing machine in polynomial time? Or,
Can a problem verifiable in polynomial time also be solved in polynomial time?
It turns out that our civilization is not yet capable of answering such questions. Nevertheless, we have managed to establish a set of problems called NP-hard such that finding a polynomial-time algorithm for any NP-hard problem would give polynomial-time algorithms for all the problems in NP, and hence prove P = NP. An equivalent definition requires that every problem in NP can be solved in polynomial time by an oracle machine equipped with an oracle for that NP-hard problem.
Some NP-hard problems are themselves in NP; these are called NP-complete, like the infamous 3SAT. There are also decision problems that are NP-hard but not NP-complete, such as the halting problem.



