Thursday, December 11, 2008

The language of thoughts: intelligence, artificial intelligence and super natural intelligence

I wanted to write this post a while ago, and then I wanted to write a philosonomicon for my thesis. Thanks to yesterday's deadline I finally finished the prologue, and I'm using some of it for this post.

Scott already blogged about this over two years ago. A basic tenet of modern science is that communication is central to any analytical or intelligent activity, since it is the first causal step toward repeatability, which in turn serves the eventual goal of our persistence. Communication requires a material medium; hence materialism is one of the foundations of modern science. Languages evolve over time, and the more precisely one can communicate, the better one can learn, behave intelligently, top the food chain, and survive against the odds. Since the beginning of the Enlightenment, in the late 17th and early 18th centuries, "reason" has been the basis of authority, so the quest to understand human intelligence can be thought to have begun then.

A very plausible hypothesis is that all actions are seeded in thoughts. Our understanding of human intelligence will then be limited by our understanding of the language of thoughts, so the pursuit of that language is well justified.

Understanding Nature for engineering purposes has to do with what is observable, measurable, and repeatable with a certain level of predictability. Even though logic and mathematics existed for centuries before the advent of computers, a new era of science was ushered in by the efforts of Alan Turing and Alonzo Church, who came up with the famous Church-Turing thesis. Turing's famous paper "On Computable Numbers..." can essentially be thought of as a heuristic explanation of the human thought process, one that has proven very successful. Turing thus proposed a language for human-machine interaction. The book "The Language of Machines...", co-authored by my Master's adviser, provides a nice introduction to and perspective on the theory of computability.

Turing's famous test for artificial intelligence involves modeling the thought process as a computational device and comparing its behavior with that of a human. Even though the computational model of the human thought process is only a heuristic, it nevertheless models a great many human thought processes. With the advent of complexity theory it became apparent that computation may not efficiently model all thought processes, as argued in works like Scott's thesis.

But as Niels Bohr once said, "It is wrong to think that the task of physics is to find out how Nature is. Physics concerns what we say about Nature." Efforts in artificial intelligence likewise concern themselves with what we can say about intelligence, not what it is. This led to the modern approaches in machine learning. To summarize those efforts in one long sentence: they are essentially applications of the principle of Occam's razor, explaining the unknown distributions of observed data using efficient algorithms and models of the data. Different sub-fields (like vision, bioinformatics, medical imaging, robotics, etc.) essentially involve coming up with those efficient algorithms and data models. For example, artificial vision deals with understanding data obtained using electromagnetic waves, from X-rays (CT scans) to visible light (photo cameras) to radio waves (MRI). Goals in artificial vision and intelligence thus don't necessarily restrict themselves to human abilities; humans just form the lower bound of what we want to do with machines (Superman, for example, has X-ray vision).
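The Occam's razor idea can be made concrete with a toy sketch (the setup and numbers below are my own illustration, not from the post): fit polynomials of increasing degree to noisy data drawn from a simple linear process, and judge each model on held-out points rather than on how well it memorizes the training data.

```python
import numpy as np

# Toy model selection: the "unknown distribution" is a noisy linear
# process; we compare polynomial models of increasing complexity.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 40)
y = 2.0 * x + 0.5 + rng.normal(0.0, 0.1, size=x.size)

# Hold out the last 10 points to estimate generalization error.
train, val = slice(0, 30), slice(30, 40)

def val_error(degree):
    """Fit a polynomial of the given degree on the training split and
    return its mean squared error on the validation split."""
    coeffs = np.polyfit(x[train], y[train], degree)
    pred = np.polyval(coeffs, x[val])
    return float(np.mean((pred - y[val]) ** 2))

# Occam's razor: among models that fit the observed data, prefer the
# simplest one that also predicts unseen data well.
errors = {d: val_error(d) for d in (1, 3, 9)}
best = min(errors, key=errors.get)
```

The high-degree polynomial can match the training points almost exactly, yet it typically predicts the held-out points far worse than the simple linear model, which is the razor at work.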

While applied researchers work on developing artificial intelligence that can mimic natural and super-natural intelligence assuming a computational model, complexity theorists work on building computational models and shedding light on their limitations. As long as P != NP, we have to design the learning algorithms by which machines learn and behave. Understanding the way the human brain really works may help us come up with better ("generative") computational models that actually mimic the underlying process generating human intelligence. But there is a long way to go. Right now the Turing model is the most promising, because of the enormous creative ability of humans in designing clever algorithms. Evidently, even quantum computational models of thought do not give us much more power.
