Artificial Intelligence: Impossible?
February 1, 2010
Artificial Intelligence is impossible because computers will never be able to think and behave in the same way as human beings.
Artificial intelligence (AI) is a young interdisciplinary field of research that combines cognitive science and computer sciences. A good general definition of its aims was made by Professor Aaron Sloman in Computers and Thought (1989, MIT Press): “AI is a very general investigation of the nature of intelligence and the principles and mechanisms required for understanding or replicating it.” This essay aims to make a critical analysis of the title, taking into consideration any relevant views held by experts in the AI field. It also aims to illustrate some of the major philosophical stumbling blocks that occur in the arguments.
AI is a field of research that has captured the public eye. If AI were possible to the standard of human intelligence it would have a massive impact on our society and lives in general. Consider that at present automation is limited to repetitive, mundane tasks, and this alone has slashed the number of jobs in industry. Then consider the advent of automated systems which have intelligence: they could be used in virtually any niche of presently human-based employment. Some of the issues AI raises open a “Pandora’s box” of controversial arguments, similar to those raised by genetic engineering. For example, is it right for us to attempt to ‘play God’ and create intelligence? If we are able to create an artificially conscious ‘being’, independent of any ‘divine intervention’, what does this imply about the religious issue of divine intervention in the creation of human consciousness? It is no surprise that the public tends to avoid the issue by denying its validity point blank.
Ray Kurzweil and the Singularity Institute are devoting great effort to creating self-aware, cognitive artificial intelligence, but will such a thing ever be possible? Kurzweil suggests that it is, though arguments that appeal to the uniqueness of the organic mind tend to ignore the possibility of an artificial intelligence whose nature includes artificial emotions. Of course, we can create real AI only if we know how the brain really functions. If we had a working simulation of the brain, or at least understood how robust it is, then we could contemplate building a prosthetic brain of silicon, running on mains electricity or perhaps even solar power. At present, however, brain function remains more mysterious than scientific: we have not yet worked out how the brain really operates. Or have we? Perhaps not.
But wait: you cannot say that we could never build a prosthetic brain at least as intelligent as a humble bug. We should not dismiss the technology we already possess, which can produce rudimentary intelligence comparable to that of an insect. It is worth analysing the level of intelligence a typical bug displays. A bug can:
1. Find and manage its food.
2. Protect itself from predators.
3. Detect its prey and attack it.
4. Distinguish between prey and predator.
5. Remember its path and retrace its way home.
6. Behave appropriately towards others of its kind.
So, is it really so hard even to take this first step towards self-aware, human-like intelligence? We have microchips and super-fast processors, but can they compete with the neurons of a bug?
Computer hardware does have some significant advantages over biological nervous tissue: these advantages indirectly aid the development of AI. The following points are paraphrased from Roger Penrose’s essay “Setting the Scene: the Claim and the Issues” in the volume “The Simulation of Human Intelligence” (1993, Blackwell). Firstly, electronic circuits are already about a million times faster than a nerve cell transmitting an impulse. Secondly, electronic circuits have an immense advantage over brains in terms of precision in timing and accuracy of action. One major pitfall is that no neural network yet constructed has anywhere near the multitude of synapses (ie connections between neurones) that occur in a biological brain, but this may be overcome in time. Moravec H., in his book “Mind Children” (1988), makes a very valid point in support of the capability of computer hardware for use in AI. He reminds us that the rate of development of computer technology has been accelerating for the past half century: what basis have sceptics for saying that this rate will drop suddenly?
On the other hand, it must be said that biological nerve tissue (ie the material that makes up the human brain) has an advantage over computer hardware, namely its capacity for major error tolerance. This applies in terms of both physical and processing capabilities. If a human brain is damaged it will carry on functioning to the best of its ability; this cannot be said for computer hardware at present. If a problem develops in the coding of a computer’s program, it will either ‘crash’ or output ‘gobbledegook’: the human mind is far more error tolerant. Some important advances have been made recently in developing computer hardware and software with capabilities nearer to those of a biological brain and ‘mind’, namely heuristics, fuzzy logic, and neural networks: “models of the logical properties of interconnected nerve cells” (Garnham A., Introduction to Artificial Intelligence, 1988).
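The error tolerance attributed to nervous tissue here can be given a toy illustration. The sketch below is my own, not drawn from any of the cited authors, and its weights and structure are invented: a small “network” of redundant model nerve cells decides by majority vote, so knocking out a third of its units leaves the output unchanged.

```python
# A toy "neural network": nine redundant model neurons voting on an
# AND-like task. Weights and structure are invented for illustration.

def neuron(inputs, weights):
    # A model nerve cell: weighted sum passed through a hard threshold.
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1.0 if total > 0 else 0.0

# Nine identical units; the third weight acts as a bias term.
weights = [[0.5, 0.5, -0.7] for _ in range(9)]

def network(x1, x2, units):
    # Majority vote over whichever units are still working.
    votes = [neuron([x1, x2, 1.0], w) for w in units]
    return 1.0 if sum(votes) > len(units) / 2 else 0.0

print(network(1, 1, weights))      # intact network: AND of 1,1 -> 1.0
print(network(1, 0, weights))      # AND of 1,0 -> 0.0
print(network(1, 1, weights[:6]))  # a third of the units "damaged": still 1.0
```

Real neural networks learn their weights rather than having them fixed by hand, but the point about graceful degradation through redundancy carries over.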
This discussion about computer hardware leads onto the question raised by Sloman A. in “Computers and Thought” of whether the human mind is purely a symbol manipulator. Computers are purely symbol manipulators, so if the human mind is too then this significantly increases the ease of simulating it on computers. However, there may be other operations the human brain is capable of: for example, non-symbolic operations (possibly emotions) or operations that occur below the level of conventional symbol processing (possibly seeing and distinguishing objects).
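What “pure symbol manipulation” means can be made concrete with a toy program (my own illustration, not Sloman’s; the rule and facts are invented): a system that rewrites token tuples by rote pattern matching, with no access to what the tokens mean.

```python
# A toy "pure symbol manipulator": it rewrites tuples of tokens by
# pattern matching alone. "?x" is a variable; everything else must
# match literally. The program has no notion of men or mortality.
rules = {
    ("?x", "is-a", "man"): ("?x", "is", "mortal"),
}

def apply_rules(fact):
    for pattern, conclusion in rules.items():
        if len(pattern) != len(fact):
            continue
        if all(p.startswith("?") or p == f for p, f in zip(pattern, fact)):
            # Bind each variable to the symbol that filled its position.
            bindings = {p: f for p, f in zip(pattern, fact) if p.startswith("?")}
            return tuple(bindings.get(c, c) for c in conclusion)
    return None

print(apply_rules(("Socrates", "is-a", "man")))  # ('Socrates', 'is', 'mortal')
print(apply_rules(("Fido", "is-a", "dog")))      # None - no rule matches
```

Whether everything the mind does can be reduced to operations of this kind is exactly the question Sloman raises.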
The crux of the problem in dealing with issues of artificial intelligence is the definition of the word ‘intelligent’. Obviously, the definition provided by a conventional dictionary is not enough because it would be too vague and non-technical. In “Computers and Thought” Sloman states three key features of intelligence: intentionality, flexibility and productive laziness. On their own these labels are fairly meaningless; definitions are required. (The definitions below are adapted from those given by Sloman in Computers and Thought.)
Sloman states that intentionality is “the ability to have internal states that refer to or are about entities or situations more or less remote in space or time, or even non-existent or wholly abstract things.” This definition includes thoughts or desires about the mind in question’s own state, ie various forms of self-consciousness.
Flexibility is the variety of things intentional states can refer to, for instance the variety of types of goals, objects, problems, plans, actions, environments etc, with which an individual can cope, including the ability to deal with new situations using old resources combined and transformed in new ways.
Productive laziness involves avoiding unnecessary work. In the real world almost every task involves so many choices from so many options that to solve a task by enumerating all the possible actions and outcomes would be extremely wasteful of processing time and power. Lazy shortcuts are required, for example testing partial combinations of options to see whether they can possibly be extended to reach the goal of the task- if not they can be rejected at once. Being lazy in this way is usually intellectually harder, yet faster- and speed of processing (or ‘thought’) in the real world is essential for survival.
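The lazy shortcut described above can be sketched in a few lines (my own illustration, not Sloman’s): a search that extends partial combinations of positive numbers towards a target sum, rejecting at once any partial combination that has already overshot the goal instead of enumerating every subset.

```python
# A sketch of "productive laziness": reject partial combinations as
# soon as they can no longer reach the goal. Assumes all items are
# positive, so that overshooting the goal is an irrecoverable dead end.
def lazy_subset_sum(items, goal, partial=(), total=0):
    if total == goal:
        return partial
    for i, x in enumerate(items):
        if total + x > goal:
            continue  # dead end: reject this partial combination at once
        found = lazy_subset_sum(items[i + 1:], goal, partial + (x,), total + x)
        if found is not None:
            return found
    return None

print(lazy_subset_sum([8, 6, 7, 5], 13))  # (8, 5)
print(lazy_subset_sum([2, 3], 10))        # None - no combination works
```

The pruning test is “intellectually harder” than blind enumeration, in Sloman’s sense, but it can discard whole families of candidate combinations in a single step.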
Sloman’s “three key features” explained here do seem to make a good summary of some of the prerequisites of intelligence, but he mentions only “self-consciousness” rather than consciousness itself. The difference between the two terms is important. Self-consciousness is awareness of one’s internal states, and memory of the internal states which have previously occurred, whereas consciousness is a much broader term. Sloman makes no comment on the issue of whether consciousness is required for intelligence; in doing so he avoids entering the lengthy debate. The following paragraphs attempt an outline of this debate.
To tackle the issue of consciousness, a technical definition is required: something that AI researchers have been arguing over for quite some time. Even now there are in fact many variations of opinion. For the sake of conciseness only two will be considered in this essay. Aleksander (professor of neural systems engineering at Imperial College, London) refers to the Chambers 20th Century English Dictionary in giving his opinion of the definition of consciousness: “The waking state of the mind; the knowledge the mind has of anything”. Aleksander postulates a number of attributes for the “waking state” of the mind: learning, language, planning, attention and inner perception. Searle, on the other hand, argues that consciousness is a natural biological phenomenon that occurs because the brain is not a digital computer but a “specific biological organ”. This is an anti-AI point of view. In other words it states that a simulation of a biological brain is only as ‘real’ as a simulation of a liver or kidney. Penrose (in his previously mentioned essay “Setting the Scene: the Claim and the Issues”) makes some important comments about this ‘only a simulation’ argument:
“If all the external manifestations of a conscious brain can indeed be simulated entirely computationally then there would be a case for accepting that its internal manifestations- consciousness itself- are also present in association with such a simulation.” Note that Penrose states that this ‘operational argument’ is not entirely conclusive, yet it does have considerable force.
Philip Johnson-Laird makes an interesting comment on the terminology of the word ‘consciousness’ in his book “The Computer and the Mind”. When riding a bicycle you do not think “I must turn the handlebars so that the curvature of my trajectory is proportional to the angle of my unbalance divided by the square of my speed”; these computations are carried out unconsciously. According to the argument that ‘intelligence must involve consciousness’, the process of a human riding a bike is not intelligent; instead the intelligent part includes the way in which the method of bike riding was learnt, together with Sloman’s three key features as previously described (intentionality, productive laziness and flexibility).
The discussion on consciousness here illustrates a major philosophical stumbling block, but it leads away from the thrust of the statement made in the title. The point made here is that consciousness is not necessarily required for all types of intelligence: it is a term that comprises many different interrelated components and levels. Now the question arises as to whether we can quantify the importance of the different constituents of intelligence. It seems that its constituents are not static and that their importance varies according to the task in hand.
The view held in the title of this essay is common amongst lay-people. At present there is no conclusive evidence that an AI system will or will not be capable of reaching a level of intelligence parallel to that of human thought and behaviour: so the view held in the title is not entirely invalid. The one important point that the statement misses is that an AI system is any system that exhibits some form or aspect of intelligence. This has already been achieved in systems carrying out tasks such as reasoning, learning, planning and other functions- all of which are accepted as aspects of intelligence. In this respect, the title can be seen as an incorrect and naive statement.