Seti’s Hunt For Artificially Intelligent Alien Machines


The structure of deoxyribonucleic acid (DNA), ...

Image via Wikipedia


Searching for extraterrestrial life is extremely arduous, no matter what tactics we employ to detect its signs. But wait: shouldn’t we first define what we mean by ‘intelligent’, without any kind of surmised indulgence? The search so far has focused on Earth-like life, because that is all we know. Hence most planned missions focus on locations where liquid water is possible, emphasizing searches for structures that resemble the cells of terran organisms, for small molecules that might be the products of carbon metabolism, and for amino acids and nucleotides similar to those found in terrestrial proteins and DNA. I wrote an article a while back, ‘Searching For Other Life Forms in Extraterrestrial Environments’, in which I illustrated that life could be a sort of ‘organized complexity’ that consumes energy, uses it for necessary biological or non-biological operations, and is endowed with the capability to reproduce ‘itself’ from ‘self’. So if we really want to find alien life forms that are conscious and intelligent, we have to change the view that sees life only as it has diversified on Earth.

However, life that may have originated elsewhere, even within our own solar system, could be unrecognizable compared with life here, and thus might not be detectable by telescopes and spacecraft landers designed to detect terrestrial biomolecules or their products. We must recognize that our knowledge of the essential requirements for life, and therefore our concept of it, is based on our understanding of the biosphere during the later stages of Earth’s history. Since we know only one example of biomolecular structures for life, and given the difficulty the human mind has in creating ideas different from what it already knows, it is hard for us to imagine how life might look in environments very different from those we find on Earth. In recent decades, however, laboratory experiments and theoretical work have suggested that life might be based on molecular structures substantially different from those we know.

It is a relatively simple matter to distinguish between living and inorganic matter on Earth by biochemical experiments, even though no formal definition of life in biochemical terms exists. Experience suggests, for example, that a system capable of converting water, atmospheric nitrogen and carbon dioxide into protein, using light as a source of energy, is unlikely to be inorganic. This phenomenological approach to recognizing life is the basis of the life-detection experiments proposed so far. Its weakness lies not in the lack of a formal definition but in the assumption that all life has a common biochemical ancestry.

It is also possible to distinguish living from inorganic matter by physical experiments. For example, an examination of the motion of a salmon swimming upstream suggests a degree of purpose inconsistent with a random inorganic process. The physical approach to recognition of life is no more rigorous, at this stage, than is the biochemical one; it is, however, universal in application and not subject to the local constraints which may have set the biochemical pattern of life on Earth.

Past discussions of the physical basis of life  reach an agreed classification as follows:

“Life is one member of the class of phenomena which are open or continuous reaction systems able to decrease their entropy at the expense of substances or energy taken in from the environment and subsequently rejected in a degraded form”.

This classification is broad and also includes phenomena such as flames, vortex motion and many others. Life differs from the other phenomena so classified in its singularity, its persistence, and in the size of the entropy decrease associated with it. Vortices appear spontaneously but soon vanish; the entropy decrease associated with the formation of a vortex is small compared with the energy flux. Life does not form easily, but persists indefinitely and vastly modifies its environment. The spontaneous generation of life, according to recent calculations from quantum mechanics [4, 5], is extremely improbable. This is relevant to the present discussion through the implication that wherever life exists, its biochemical form will be strongly determined by the initiating event. This in turn could vary with the planetary environment at the time of initiation.

On the basis of the physical phenomenology already mentioned, a planet bearing life is distinguishable from a sterile one as follows:

  • The omnipresence of intense orderliness and of structures and of events utterly improbable on a basis of thermodynamic equilibrium.
  • Extreme departures from an inorganic steady-state equilibrium of chemical potential.

This orderliness and chemical disequilibrium would, to a diminished but still recognizable extent, be expected to penetrate into the planetary surface and its past history, as fossils and as rocks of biological origin. According to the research paper ‘Physical Basis For Detection of Life’ [Nature Vol. 207, No. 4997, pp. 568-570, August 7, 1965], chemical detection of life is indeed possible, based on equilibrium and orderliness. So, how should we search for life (here I’m not assuming that it is necessarily intelligent life)?

The distinguishing features of a life-bearing planet  suggest the following simple experiments in detection of life:

A. Search for order.

  1. Order in chemical structures and sequences of structure. A simple gas chromatograph or a combined gas chromatograph – mass spectrometer instrument would seek ordered molecular sequences as well as chemical identities.
  2. Order in molecular weight distributions. Polymers of biological origin have sharply defined molecular weights, polymers of inorganic origin do not. A simple apparatus to seek ordered molecular weight distributions in soil has not yet been proposed but seems worthy of consideration.
  3. Looking and listening for order. A simple microphone is already proposed for other (meteorological) purposes on future planetary probes; this could also listen for ordered sequences of sound, the presence of which would be strongly indicative of life. At the present stage of technical development a visual search is probably too complex; it is nevertheless the most rapid and effective method of recognizing life in terms of orderliness outside the bounds of random assembly.

B. Search for non-equilibrium.

  1. Chemical disequilibrium sought by a differential thermal analysis (DTA) apparatus. Two equal samples of the planetary surface would be heated in a DTA apparatus: one sample in the atmosphere of the planet, the other in an inert gas, such as argon. An exotherm on the differential signal between the two samples would indicate a reaction between the surface and its atmosphere, a condition most unlikely to be encountered where there is chemical equilibrium, as in the absence of life. It should be noted that this method would recognize reoxidizing life on a planet with a reducing atmosphere. This experiment could, with advantage and economy, be combined with, for example, the gas chromatography – mass spectrometry experiment (A1), where it is necessary to heat the sample for vaporization and pyrolysis.
  2. Atmospheric analysis. Search for the presence of compounds in the planet’s atmosphere which are incompatible on a long-term basis. For example, oxygen and hydrocarbons co-exist in the Earth’s atmosphere.
  3. Physical non-equilibrium. A simplified visual search apparatus programmed to recognize objects in non-random motion. A more complex assembly could recognize objects in metastable equilibrium with the gravitational field of the planet. Much of the plant life on Earth falls into this category.


[Figure: The abundance of n-alkanes from an inorganic source (A), Fischer-Tropsch hydrocarbons, and from a biological source (B), wool wax. The observed abundances (•-•) are compared with normalized Poisson distributions (-) around the preponderant alkane.]

Atmospheric analysis is relevant both to the primary detection experiments and to the planning of subsequent experiments. Even on Earth, where life is abundant, there are many regions, such as those covered by fresh snow, where a surface sample might be unrewarding in the search for life. The atmospheric composition is largely independent of the site of sampling and provides an averaged value representative of the steady state of chemical potential for the whole planetary surface.
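The figure’s test for ‘order in molecular weight distributions’ can be sketched in code. The following Python sketch is illustrative only: the two abundance profiles are made-up numbers (not the actual Fischer-Tropsch or wool-wax data), and the scoring function simply measures how far a normalized profile departs from a Poisson distribution with the same mean. A sharply peaked, biological-style profile departs strongly; a smooth abiotic one does not.

```python
import math

def poisson_pmf(k, lam):
    """Poisson probability of count k for mean lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def poisson_fit_residual(abundances):
    """Total absolute deviation of a normalized abundance profile
    (indexed by carbon-number offset) from the Poisson distribution
    sharing its mean. A large residual means 'ordered', Poisson-unlike."""
    total = sum(abundances)
    probs = [a / total for a in abundances]
    mean = sum(k * p for k, p in enumerate(probs))
    return sum(abs(p - poisson_pmf(k, mean)) for k, p in enumerate(probs))

# Hypothetical profiles, for illustration only:
abiotic = [1, 5, 10, 10, 6, 2, 1]    # smooth spread, Poisson-like
biological = [0, 1, 2, 30, 2, 1, 0]  # sharply peaked at one alkane

assert poisson_fit_residual(biological) > poisson_fit_residual(abiotic)
```

The criterion in the paper is qualitative – sharply defined molecular weights versus a smooth spread – and this residual is just one of many possible ways to quantify it.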


Experiments A1, B1 and B2 are the most promising for the development of practical instruments. Indeed, the gas chromatography – mass spectrometry combination experiment and the DTA experiment already proposed for planetary probes are, with minor modifications, capable of recognizing the ordered sequences and chemical disequilibrium discussed earlier. Experiment B2, atmospheric analysis, is simple and practical as well as important in the general problem of detection of life. A detailed and accurate knowledge of the composition of the planetary atmosphere can directly indicate the presence of life in terms of chemical disequilibrium; such knowledge is also complementary to the understanding of other forms of life.

Galactic Clubs

Paul Davies suggests the approach of observational SETI – which tries to detect narrow-band signals directed at Earth by an extraterrestrial civilization – is probably futile, because the existence of a communicating civilization on Earth will not be known to any alien community beyond 100 light years. Instead, he argues, “we should search for any indicators of extraterrestrial intelligence, using the full panoply of scientific instrumentation, including physical traces of very ancient extraterrestrial projects in or near the solar system. Radio SETI needs to be re-oriented to the search for non-directed beacons, by staring toward the galactic center continuously over months or even years, and seeking distinctive transient events (‘pings’). This ‘new SETI’ should complement, not replace, traditional radio and optical SETI.” But on second thought, maybe these ideas are not all that fresh. I’ve read these suggestions before in the SETI literature. Indeed, I found most of them cited in his footnotes. Nevertheless we should thank Davies for assembling them in his stimulating and lucid new book.
What are the possible reasons for the “Great Silence”? The following list is of course not original:

1) We are indeed alone, or nearly so. There is no ETI, nor a “Galactic Club” — radio astronomer Ronald Bracewell’s name for the communicating network of advanced civilizations in our galaxy (GC for short).

2) The GC, or at least ETI, exists but is ignorant of our existence (as Davies has once again suggested).

3) We are unfit for membership in the GC, so the silence is deliberate, with a very strict protocol evident: “No Messages to Primitive Civilizations!” Only inadvertent, sporadic and non-repeated signals – for example, the “Wow” signal – can be detected by a primitive civilization, with signal content so opaque as to be indistinguishable from natural signals or noise.

The first explanation is contrary to the subtext of astrobiology: the belief in quasi-deterministic astrophysical, planetary and biological evolution. This view of life’s inevitability in the cosmos is a view (or, shall I admit, a prejudice) I heartily endorse. Most scientists active in the astrobiological research program would support an optimistic estimate of all the probabilities leading up to multicellular life on an Earth-like planet around a Sun-like star.

I happen to be an optimist on this issue too. I have argued that encephalization – larger brain mass in comparison to body mass — and the potential for technical civilizations are not very rare results of self-organizing biospheres on Earth-like planets around Sun-like stars. Biotically-mediated climatic cooling creates the opportunity for big-brained multicellular organisms, such as the warm-blooded animals we observe on our planet. Note that several such animals have now been shown to pass the “mirror test” for self-consciousness: the great apes, elephants, dolphins and magpies, and the list is growing. But if the pessimists concede just one of the millions if not billions of Earth-like planets is the platform for just one technical civilization that matures to a planetary stage, advancing beyond our present primitive self-destructive stage, just one advanced civilization with the curiosity to spread through the galaxy, at sub-light speeds with Bracewell probes to explore and document an Encyclopedia Galactica, then what should we expect?
First, the galaxy should be thoroughly populated with surveillance outposts on a time scale much smaller than the time it took on Earth to produce this cosmically pathetic civilization we call the nearly 200 member nation states of the United Nations, with humanity now hanging under two self-constructed Swords of Damocles: the twin threats of catastrophic global warming and nuclear war.

Second, THEY, or at least their outposts, surely know we exist, since to believe THEY are ignorant of our existence is to assume they somehow bypassed us in their expansion into the galaxy, a scenario I simply find unworthy if not unbelievable for an advanced civilization, especially one in existence for millions if not billions of years. It is important to note that this conclusion is informed by present day physics and chemistry, not a post-Einstein theory that transcends the speed of light.

So we are left with option 3: the aliens are deliberately avoiding communicating with our primitive world. I submit this is by far the most plausible given our current knowledge of science and the likely sheer ordinariness of our chemistry and planetary organization.

Why would we be considered primitive? This should be a no-brainer, even for an Earthling. The world spends $1.4 trillion on military expenditures while millions of our species still die of preventable causes every year. Carbon emissions to the atmosphere continue to climb, even though presently available renewable technologies such as wind turbines are sufficient to completely replace our unsustainable energy infrastructure. As J.D. Bernal once put it, “There is a possibility that the oldest and most advanced civilizations on distant stars have in fact reached the level of permanent intercommunication and have formed…a club of communicating intellects of which we have only just qualified for membership and are probably now having our credentials examined. In view of the present chaotic political and economic situation of the world, it is not by any means certain that we would be accepted.”

The technical requirements for a galaxy-wide search are dictated by the size of the radio telescope, with the detection range proportional to the effective diameter of the telescope. A large enough radio telescope situated in space could potentially set meaningful upper limits on the rate of emergence of primitive Earth-like civilizations, without ever actually detecting the leakage radiation of even one ET civilization. But just how big a telescope is required for this project, and at what cost? Our 1988 paper provided such estimates: a dish diameter on the order of 500 kilometers, at a cost of roughly $10 trillion. Perhaps the cost has come down somewhat (but note the estimate was in 1988 dollars). This is surely a project with a vanishingly small chance of implementation in today’s world. I could only conceive of a demilitarized, newly mature planetary civilization, call it Earth-United (Finally!), with any intention of implementing such an ambitious project that has no apparent immediate practical benefits. Then and only then would we successfully detect a message from the GC, presumably faint enough to be detectable only with a huge radio telescope in space.
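The scaling behind that estimate can be illustrated with a few lines of Python. This is a back-of-envelope sketch, not a link-budget calculation: it assumes detection range grows linearly with effective dish diameter (fixed transmitter power, receiver noise and integration time), and the reference values (a roughly 305 m Arecibo-class dish detecting Earth-level leakage out to about 100 light years) are assumptions for illustration, not figures from the 1988 paper.

```python
def detection_range_ly(diameter_km, reference_diameter_km=0.305,
                       reference_range_ly=100.0):
    """Detection range scales linearly with effective dish diameter,
    normalized to a hypothetical reference telescope."""
    return reference_range_ly * diameter_km / reference_diameter_km

# The 500 km dish from the 1988 estimate quoted above:
r = detection_range_ly(500.0)
print(f"~{r:,.0f} light years")

# Since range ~ D, the surveyed volume (and so the number of
# candidate stars) grows roughly as D**3.
```

Under these assumptions the 500 km dish reaches a galactic-scale volume, which is why the upper-limit argument works even with zero detections.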

On the other hand, the GC may be monitoring biotically-inhabited planets by remote Bracewell probes carrying programmed instructions. Such a probe could plausibly be hiding right now in the asteroid belt (as Michael Papagiannis once suggested). If the GC exists, there was ample time to set up this surveillance system long ago. Surveillance probes so situated in planetary systems would send welcoming signals to newly mature civilizations, with the potential for a real conversation with artificial intelligence constructed by the GC, if not reconstructed biological entities. If this proposed surveillance system is absent, we should expect the GC to use highly advanced telescopes to monitor planetary systems that have prospects for the emergence of intelligent life and technical civilizations. These alien telescopes could use gravitational lenses around stars. Planetary systems that are candidates for the GC could expect to receive continuous beacons, but the signals would be very weak or disguised so that they would be decipherable only by newly mature civilizations that have just passed the entrance requirements. The problem with this scenario is that there would be a fairly long communication delay with the GC, because they would be so far away. Nevertheless, reception of a rich message from the GC is possible. The material and/or energy resources needed to recognize these signals must correspond, with great probability, to a newly mature civilization. Hence cleverness in itself cannot be the criterion for successful detection and decipherment; otherwise a brilliant scientist in a primitive civilization might jump the GC protocol.

I submit that if we want to enter the Galactic Club, the challenge lies in reconstructing our global political economy. A few minor side benefits should result, like no more war, no more poverty, a future for all of humanity’s children with a substantial proportion of biodiversity intact. We should not expect the Galactic Club to save us from ourselves.

Machine Intelligence

It took until the 17th century for us to reject Aristotle’s vision of a universe where our Sun and the stars revolved around the Earth. Search for Extraterrestrial Intelligence (SETI) Senior Astronomer Seth Shostak points out that up until a century ago, the scientific community believed a vast engineering society was responsible for building an irrigation system on the surface of Mars. Discovering the Martians could, in principle, be done by simply turning an Earth-based telescope in the direction of the Red Planet. Now it seems that our best chance for finding Martian life is to dig deep into the surface in search of subterranean microbes.

Our idea of extraterrestrial life has changed drastically in 100 years, but our search strategies have not kept up. In his  paper “What ET will look like and why should we care?” for the November-December issue of Acta Astronautica, Shostak argues that SETI might be more successful if it shifts the search away from biology and focuses squarely on artificial intelligence. Shostak sees a clear distinction between life and intelligence: he says we should be searching for extraterrestrial machines.

ET machines would be infinitely more intelligent and durable than the biological intelligence that invented them. Intelligent machines would in a sense be immortal, or at least indefinitely repairable, and would not need to exist in the biologically hospitable “Goldilocks Zone” most SETI searches focus on. An AI could self-direct its own evolution. Every new instance of an AI would be created with the sum total of its predecessor’s knowledge preloaded. The machines would require two primary resources: energy to operate with and materials to maintain or advance their structure. Because of these requirements, Shostak thinks SETI ought to consider expanding its search to the energy- and matter-rich neighborhoods of hot stars, black holes and neutron stars.

Shostak further argues that Bok globules are another search target for sentient machines. These dense regions of dust and gas are notorious for producing multiple-star systems. At around negative 441 degrees Fahrenheit, they are about 160 degrees F colder than most of interstellar space. This climate could be a major draw because thermodynamics implies that machinery will be more efficient in cool regions that can function as a large “heat sink”. A Bok globule’s super-cooled environment might represent the Goldilocks Zone for the machines. But because black holes and Bok globules are not hospitable to life as we know it, they are not on SETI’s radar. Machines have different needs. They have no obvious limits to the length of their existence, and consequently could easily dominate the intelligence of the cosmos. In particular, since they can evolve on timescales far, far shorter than biological evolution, it could very well be that the first machines on the scene thoroughly dominate the intelligence in the galaxy. It’s a “winner take all” scenario.
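The heat-sink argument can be made concrete with Landauer’s principle, a real result from the thermodynamics of computation: erasing one bit of information costs at least kT ln 2 of energy, so the floor on computing cost drops linearly with temperature. The sketch below plugs in the temperatures quoted above (about 10 K inside a Bok globule, roughly 160 °F warmer in ordinary interstellar gas); the exact figures are illustrative.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_joules_per_bit(temp_kelvin):
    """Minimum energy to erase one bit: k_B * T * ln 2 (Landauer limit)."""
    return K_B * temp_kelvin * math.log(2)

bok_globule_k = 10.0                                # ~ -441 F, as quoted above
interstellar_k = bok_globule_k + 160.0 * 5.0 / 9.0  # 160 F warmer, ~99 K

ratio = landauer_joules_per_bit(interstellar_k) / landauer_joules_per_bit(bok_globule_k)
print(f"{ratio:.1f}x")  # erasing a bit costs ~10x more energy in the warmer gas
```

Real machinery operates far above the Landauer floor, but the linear-in-temperature scaling is why a super-cooled neighborhood is attractive as a heat sink.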

I find Shostak’s claim that alien machines should be resting in super-cold zones that can function as large heat sinks equally falsifiable. A machine indeed needs a heat sink, but only in its primordial age. Since aliens would have created super-intelligent machines that can comprehend and interact efficiently with our mysterious universe, it becomes necessary to entertain the premise that such machines would more likely be self-replicating. A replicator would more likely be resting somewhere near an asteroid belt, from which it could obtain the material to survive and reproduce itself. A number of fundamental but far-reaching ethical issues are raised by the possible existence of replicating machines in the Galaxy. For instance, is it morally right, or equitable, for a self-reproducing machine to enter a foreign solar system and convert part of that system’s mass and energy to its own purposes? Does an intelligent race legally “own” its home sun, planets, asteroidal materials, moons, solar wind, and comets? Does it make a difference if the planets are inhabited by intelligent beings, and if so, is there some lower threshold of intellect below which a system may ethically be “invaded” or expropriated? Should it make any difference to our ethical judgment whether the sentient inhabitants possess advanced technology or lack it?

The number of intelligent races that have existed in the past may be significantly greater than the number presently in existence. Specifically, at this time there may exist perhaps only 10% of the alien civilizations that have ever lived in the Galaxy – the remaining 90% having become extinct. If this is true, then 9 of every 10 replicating machines we might find in the Solar System could be emissaries from long-dead cultures. If we do in fact find such machines and are able to interrogate them successfully, we may become privy to the doings of incredibly old alien societies long since perished. These societies may lead to many others, so we may be treated, not just to a marvelous description of the entire biology and history of a single intelligent race, but also to an encyclopedic travelogue describing thousands or millions of other extraterrestrial civilizations known to the creators of the probe we are examining. Probes will likely contain at least an edited version of the sending race’s proverbial “Encyclopedia Galactica,” because this information is essential if the probe is to make the most informed and intelligent autonomous decisions during its explorations.

SRS probes can be sent to other star systems to reproduce their own kind and spread. Each machine thus created may be immortal (limitlessly self-repairing) or mortal. If mortal, then the machines may be further used as follows. As a replicating system degrades below the point where it is capable of reproducing itself, it can sink to a simpler processing mode. In this mode (useful perhaps as a prelude to human colonization) the system merely processes materials, perhaps also parts and sub-assemblies of machines, as best it can, and stockpiles them for the day when human beings or new machines arrive to take charge and make use of the processed matter. As the original machine system falls below even this level of automation competence, its function might be redirected to serve merely as a link in an expanding interstellar repeater network useful for navigation or communications. Thus, at every point in its lifespan, the SRS probe can serve its creators in some profitable capacity. A machine which degrades below the ability to self-reproduce need not simply “die.”

In my earlier article “More Speculations About Intelligent Self-Replicating Exploration Probes“, I pointed out that such probes would more likely be ‘post-modified biological’. It would not be surprising if such probes went through evolutionary changes and became more intelligent. A consensus is that replicating probes should be manufactured from nanomaterials, e.g. catoms, while it seems significantly plausible that such probes could in fact be biological, based on rather different mechanisms: a brain can be programmed, and a powerful microprocessor and other cybernetic hardware could be installed – a kind of cyberbiotic probe, designed in essence to be capable of surviving interstellar radiation. Modifications in accordance with requirements could be made so that the probes work perfectly and replicate themselves even when there is no stop available for gathering metallic material.

[Ref: Astrobiology Magazine, quotations from Astrobiology Magazine; Nature Vol. 207, No. 4997, pp. 568-570, August 7, 1965]

Artificial Intelligence: Impossible?

Artificial Intelligence is impossible because computers will never be able to think and behave in the same way as human beings.

Artificial intelligence (AI) is a young interdisciplinary field of research that combines cognitive science and computer sciences. A good general definition of its aims was made by Professor Aaron Sloman in Computers and Thought (1989, MIT Press): “AI is a very general investigation of the nature of intelligence and the principles and mechanisms required for understanding or replicating it.” This essay aims to make a critical analysis of the title, taking into consideration any relevant views held by experts in the AI field. It also aims to illustrate some of the major philosophical stumbling blocks that occur in the arguments.

AI is a field of research that has captured the public eye. If AI were possible to the standard of human intelligence it would have a massive impact on our society and lives in general. Consider that at present automation is limited to repetitive, mundane tasks, and this alone has slashed the number of jobs in industry. Then consider the advent of automated systems which have intelligence: they could be used in literally any niche of presently human-based employment. Some of the issues AI raises open a “Pandora’s box” of controversial arguments, similar to those raised by genetic engineering. For example, is it right for us to attempt to ‘play God’ and create intelligence? If we are able to create an artificially conscious ‘being’, independent of any ‘divine intervention’, what does this imply about the religious issue of Divine intervention in the creation of human consciousness? It is no surprise that the public tends to avoid the issue by denying its validity point blank.

Ray Kurzweil and the Singularity Institute are putting great effort into creating self-aware, cognitive artificial intelligence, but will it ever be possible to create such a thing? Kurzweil suggests it is, but when the idea is filtered through the organic mind, it ignores the possibility of an artificial intelligence with artificial emotions as its nature. Of course, we can create real AI only if we know how the brain really functions. If we had a real working simulation of the brain, or at least knew how robust the brain is, then we could think of building a prosthetic brain of silicon, running on supplied electricity and perhaps even solar powered. At present, brain function seems more mystical than anything scientific. Have we recognised the real functioning of the brain yet? Perhaps not.
But wait, you can’t say that we couldn’t make a prosthetic brain at least as intelligent as a natural bug. You can’t ignore the technology we now possess: we have technology that could create immature intelligence similar to a bug’s. Can you imagine? Well, it’s time to analyse the level of intelligence a bug typically has. A bug can:
1. Find food and manage its food supply.
2. Protect itself – that is, it is intelligent enough to evade hunters.
3. Detect prey and hunt by attacking it.
4. Distinguish between prey and hunter.
5. Remember its path and retrace its track back home.
6. Know how to behave with others.

Oh, I have left out one more implication for AI: bugs are naturally programmed – somehow they know to avoid eating poisonous plants and prey – and this has already been encoded in their genomes.

So, is it hard even to take the first step toward creating self-aware, at least human-like, intelligence? We have microchips and super-fast processors, but can they hold their own against the neurons of bugs?
Computer hardware does have some significant advantages over biological nervous tissue, and these advantages indirectly aid the development of AI. The following points are paraphrased from Roger Penrose in his essay “Setting the Scene: the Claim and Issues” from the volume “The Simulation of Human Intelligence” (1993, Blackwell). Firstly, electronic circuits are already about a million times faster than a nerve cell transmitting an impulse. Secondly, electronic circuits have an immense advantage over brains in terms of precision in timing and accuracy of action. One major pitfall is that no neural network yet constructed has anywhere near the multitude of synapses (ie connections between neurones) that occur in a biological brain, but this may be overcome in time. Hans Moravec, in his book “Mind Children” (1988), makes a very valid point in support of the capability of computer hardware for use in AI. He reminds us that the rate of development of computer technology has been accelerating for the past half century: what basis have sceptics for saying that this rate will drop suddenly?

On the other hand, it must be said that biological nerve tissue (ie the material that makes up the human brain) has advantages over computer hardware, namely the capacity for major error tolerance. This applies in terms of both physical and processing capabilities. If a human brain is damaged it will carry on functioning to the best of its ability; this cannot be said for computer hardware at present. If a problem develops in the coding of a computer’s program, it will either ‘crash’ or output ‘gobbledegook’; the human mind, by contrast, is very error tolerant. Some important advances have been made recently in developing computer hardware and software with capabilities nearer to those of a biological brain and ‘mind’, namely heuristics, neural networks – “models of the logical properties of interconnected nerve cells” (quote from Garnham A.’s Introduction to Artificial Intelligence, 1988) – and fuzzy logic.

This discussion about computer hardware leads onto the question raised by Sloman A. in “Computers and Thought” of whether the human mind is purely a symbol manipulator. Computers are purely symbol manipulators, so if the human mind is too then this significantly increases the ease of simulating it on computers. However, there may be other operations the human brain is capable of: for example, non-symbolic operations (possibly emotions) or operations that occur below the level of conventional symbol processing (possibly seeing and distinguishing objects).

The crux of the problem in dealing with issues of artificial intelligence is the definition of the word 'intelligent'. Obviously, the definition provided by a conventional dictionary is not enough, because it would be too vague and non-technical. In "Computers and Thought" Sloman states three key features of intelligence: Intentionality, Flexibility and Productive Laziness. On their own these labels are fairly meaningless; definitions are required. (The definitions below are adapted from those given by Sloman in Computers & Thought.)

Sloman states that intentionality is "the ability to have internal states that refer to or are about entities or situations more or less remote in space or time, or even non-existent or wholly abstract things." This definition includes thoughts or desires about the mind in question's own state, i.e. various forms of self-consciousness.

Flexibility is the variety of things intentional states can refer to, for instance the variety of types of goals, objects, problems, plans, actions, environments etc, with which an individual can cope, including the ability to deal with new situations using old resources combined and transformed in new ways.

Productive laziness involves avoiding unnecessary work. In the real world almost every task involves so many choices from so many options that solving it by enumerating all the possible actions and outcomes would be extremely wasteful of processing time and power. Lazy shortcuts are required: for example, testing partial combinations of options to see whether they can possibly be extended to reach the goal of the task, and rejecting them at once if they cannot. Being lazy in this way is usually intellectually harder, yet faster, and speed of processing (or 'thought') in the real world is essential for survival.
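The pruning strategy Sloman describes can be sketched with a toy search problem (the example and names are illustrative, not from Sloman): finding a subset of numbers that adds up to a target. Rather than enumerating every subset, the search discards any partial combination that has already overshot the target, abandoning the whole branch of possibilities it would have led to.

```python
def subset_sum(numbers, target, partial=()):
    """Return a subset of `numbers` summing to `target`, or None.

    Assumes the numbers are positive, so an overshooting partial sum
    can never be rescued by adding more numbers.
    """
    s = sum(partial)
    if s == target:
        return partial      # goal reached
    if s > target:
        return None         # lazy shortcut: prune this entire branch at once
    for i, n in enumerate(numbers):
        found = subset_sum(numbers[i + 1:], target, partial + (n,))
        if found is not None:
            return found
    return None

print(subset_sum([3, 9, 8, 4], 12))  # finds (3, 9) without trying all 16 subsets
```

The prune looks like extra work per step, but it is exactly the "intellectually harder, yet faster" trade-off: a little checking up front saves exploring an exponential number of doomed continuations.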

Sloman’s “three key features” explained here do seem to make a good summary of some of the prerequisites of intelligence, but he makes mention only of “self-consciousness” rather than consciousness itself. The difference between the two terms is important. Self-consciousness is awareness of one’s internal states, and memory of the internal states which have previously occurred, whereas consciousness is a much broader term. Sloman makes no comment on the issue of whether consciousness is required for intelligence; in doing so he avoids entering the lengthy debate. The following paragraphs attempt an outline of this debate.

To tackle the issue of consciousness, a technical definition is required, something that AI researchers have been arguing over for quite some time. Even now there are many variations of opinion; for the sake of conciseness only two will be considered in this essay. Aleksander (professor of neural systems engineering at Imperial College London) refers to the Chambers 20th Century English Dictionary in giving his opinion of the definition of consciousness: “The waking state of the mind; the knowledge the mind has of anything”. Aleksander postulates a number of attributes for the “waking state” of the mind: learning, language, planning, attention and inner perception. Searle, on the other hand, argues that consciousness is a natural biological phenomenon that occurs because the brain is not a digital computer but a “specific biological organ”. This is an anti-AI point of view: it states that a simulation of a biological brain is only as ‘real’ as a simulation of a liver or kidney. Penrose (in his previously mentioned essay “Setting the Scene: the Claim and Issues”) makes some important comments about this ‘only simulation’ argument:

“If all the external manifestations of a conscious brain can indeed be simulated entirely computationally then there would be a case for accepting that its internal manifestations- consciousness itself- are also present in association with such a simulation.” Note that Penrose states that this ‘operational argument’ is not entirely conclusive, yet it does have some considerable force.

Philip Johnson-Laird makes an interesting comment on the terminology of the word ‘consciousness’ in his book “The Computer and the Mind”. When riding a bicycle you do not think “I must turn the handlebars so that the curvature of my trajectory is proportional to the angle of my unbalance divided by the square of my speed”; these computations are carried out unconsciously. According to the argument that ‘intelligence must involve consciousness’, the process of a human riding a bike is not intelligent; instead the intelligent part is the way in which the method of bike riding was learnt, together with Sloman’s three key features as previously described (Intentionality, Flexibility and Productive Laziness).

The discussion on consciousness here illustrates a major philosophical stumbling block, but it leads away from the thrust of the statement made in the title. The point made here is that consciousness is not necessarily required for all types of intelligence; it is a term that comprises many different interrelated components and levels. Now the question arises as to whether we can quantify the importance of the different constituents of intelligence. It seems that its constituents are not static, and their importance varies according to the task at hand.

The view held in the title of this essay is common amongst lay-people. At present there is no conclusive evidence that an AI system will or will not be capable of reaching a level of intelligence parallel to that of human thought and behaviour: so the view held in the title is not entirely invalid. The one important point that the statement misses is that an AI system is any system that exhibits some form or aspect of intelligence. This has already been achieved in systems carrying out tasks such as reasoning, learning, planning and other functions- all of which are accepted as aspects of intelligence. In this respect, the title can be seen as an incorrect and naive statement.

Is the Utopia of the Cyborg Falling Down?

The way to get to utopia is to model your view of human nature and then invent a technology to control or direct that model — whether a political technology like the one Thomas Hobbes portrays in his Leviathan, a biological technology as in Aldous Huxley’s Brave New World, a psychological technology as in B. F. Skinner’s Walden Two, epistemo-technologies, as in Bacon’s The New Atlantis, information technologies as in Orwell’s 1984, or just plain old technology generally, as in H.G. Wells’ A Modern Utopia. I call these utopian visions “technologies” because they are deterministic in all senses of that word: systems that seek and believe in perfect control. When the human is inserted into the utopian system, the result is a feedback loop, in which the system encourages the “best” part and controls the “worst” part of human nature, while the human, in return, maintains the system with material, energy, information, flesh, and spirit.

In other words, the result of the inscription of a utopian vision onto a human is a cyborg: a natural organism linked for its survival and improvement to a cybernetic system. Of all the great utopianists, Sir Thomas More, Francis Bacon, Campanella, Restif de la Bretonne, Locke, Rousseau…, it is Thomas Hobbes in Leviathan (1651) who understands the essentially cyborg quality of utopia.

Seeing [that] life is but a motion of limbs, the beginning whereof is in some principal part within, why may we not say that all automata (engines that move themselves by springs and wheels as doth a watch) have an artificial life? For what is the heart but a spring, and the nerves but so many strings; and the joints but so many wheels giving motion to the whole body such as was intended by the Artificer? Art goes yet further, imitating that rational and most excellent work of nature, man. For by art is created that great Leviathan called a Common wealth or a State [in Latin, civitas] which is but an artificial man, though of greater stature and strength.

Scratch the model for a utopia and you get a blueprint of human nature. As we revise our technologic, different versions of utopia become imaginable, which in turn are fed by and feed into different versions of the human, which in turn are fed by and feed into new technologies, and so on, creating a feedback loop the byproduct of which is an ever more sophisticated version of the cyborg, whose generations can be measured by the turns of this spiralling loop.

The blueprint of human nature has always been subject to revision. But never as radically as now, when our own utopian technologies are physically transcribing themselves onto our bodies and re-creating the human in their own image, or forcing our evolution into what many have come to call the “posthuman” through a combination of mechanistic and genetic-biological manipulations. In short, the posthuman is the inscription of the ultimate controlling technology onto the human, the cybernetic technologies of selfhood, of mental identity, of cognition, of the mind, of intelligence itself, of communication, of language, and of The Code. To that extent, we are all cyborgs already, controlled by the systems we’ve embraced or which have embraced and defined us through our media, our computers, our systems of communication. For this reason, virtual reality, or cyberspace, is the perfect expression of postmodern trends.

Robots: A Threat to Humans?

What could a criminal do with a speech synthesis system that could masquerade as a human being? What happens if artificial intelligence technology is used to mine personal information from smartphones?

AI is becoming the stuff of future scifi greats: A robot that can open doors and find electrical outlets to recharge itself. Computer viruses that no one can stop. Predator drones, which, though still controlled remotely by humans, come close to a machine that can kill autonomously.

Real AI effects are closer than you might think, with entirely automated systems producing new scientific results and even holding patents on minor inventions. The key factor in singularity scenarios is the positive-feedback loop of self-improvement: once something is even slightly smarter than humanity, it can start to improve itself or design new intelligences faster than we can, leading to an intelligence explosion designed by something that isn’t us.
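The shape of that feedback loop can be seen in a toy numerical sketch (every parameter here is invented purely for illustration): if each design generation improves a system's capability in proportion to the capability it already has, then anything starting even slightly above a fixed baseline pulls away at a compounding, ever-accelerating rate.

```python
def generations_to_exceed(start, baseline, feedback_rate, factor=10, limit=1000):
    """Count design generations until capability exceeds `factor` times the
    baseline, where each generation's improvement scales with current
    capability (the positive-feedback loop). Gives up after `limit` steps."""
    capability, steps = start, 0
    while capability <= factor * baseline and steps < limit:
        # The smarter it already is, the bigger its next improvement.
        capability *= 1 + feedback_rate * (capability / baseline)
        steps += 1
    return steps

# Starting a mere 5% above the baseline, with a modest 10% feedback rate,
# the system blows past 10x the baseline within a handful of generations.
print(generations_to_exceed(start=1.05, baseline=1.0, feedback_rate=0.1))
```

The point of the sketch is only the qualitative behaviour: growth whose rate depends on the current level is self-accelerating, which is exactly the dynamic the intelligence-explosion argument turns on.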

Artificial intelligence will surpass human intelligence after 2020, predicted Vernor Vinge, a world-renowned pioneer in AI, who has warned about the risks and opportunities that an electronic super-intelligence would offer to mankind.

Exactly 10 years ago, in May 1997, Deep Blue won its chess match against Garry Kasparov. “Was that the first glimpse of a new kind of intelligence?” Vinge was asked in an interview with Computerworld.

“I think there was clever programming in Deep Blue,” Vinge stated in the interview, “but the predictable success came mainly from the ongoing trends in computer hardware improvement. The result was a better-than-human performance in a single, limited problem area. In the future, I think that improvements in both software and hardware will bring success in other intellectual domains.”

It seems plausible that with technology we can, in the fairly near future, create (or become) creatures who surpass humans in every intellectual and creative dimension. Events beyond such an event — such a singularity — are as unimaginable to us as opera is to a flatworm.

Vinge is a retired San Diego State University professor of mathematics, computer scientist, and science fiction author who is well-known for his 1993 manifesto, “The Coming Technological Singularity”, in which he argues that exponential growth in technology means a point will be reached where the consequences are unknown.

Alarmed by the rapid advances in artificial intelligence, also commonly called “AI”, a group of computer scientists met to debate whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society’s workload, from waging war to chatting with customers on the phone.

Scientists, reported CIO Today, pointed to technologies as diverse as experimental medical systems that interact with patients to simulate empathy, and computer worms and viruses that defy extermination and have reached the “cockroach” stage of machine intelligence.

While the computer scientists agreed that we are a long way from one of film’s great all-time evil villains, the HAL 9000, the computer that took over the Discovery spaceship in “2001: A Space Odyssey,” they said there was legitimate concern that technological progress would transform the work force by destroying a widening range of jobs, as well as force humans to learn to live with machines that increasingly copy human behaviours.

Eric Horvitz of Microsoft said he believed computer scientists must seriously consider the possibility of superintelligent machines and artificial intelligence systems run amok.

“Something new has taken place in the past five to eight years,” Dr. Horvitz said. “Technologists are replacing religion, and their ideas are resonating in some ways with the same idea of the Rapture.”

This sentiment is best illustrated by the creation of Singularity University, a joint Google/NASA venture that has begun offering courses to prepare a “cadre” to help society cope with future ramifications.

It is an advanced academic institution sponsored by leading lights including NASA and Google (so it couldn’t sound smarter if Brainiac 5 travelled back in time to attend the opening ceremony). The “Singularity” is the idea of a future point where super-human intellects are created, turbo-boosting the already exponential rate of technological improvement and triggering a fundamental change in human society: after the Agricultural Revolution and the Industrial Revolution, we would have the Intelligence Revolution.

The Singularity University proposes to train people to deal with the accelerating evolution of technology, both in terms of understanding the directions and harnessing the potential of new interactions between branches of science like artificial intelligence, genetic engineering and nanotechnology.

Inventor and author Raymond Kurzweil is one of the forces behind SU, which we presume will have the most awesomely equipped pranks of all time (“Check it out, we replaced the Professor’s chair with an adaptive holographic robot!”), and it isn’t the only institution he’s helped found. There is also the Singularity Institute for Artificial Intelligence, whose whole purpose rests on the exponential increases in AI capability predicted above. The idea is that the first AI created will have an enormous advantage over all that follow, upgrading itself at a rate they can never match simply because it started first, so the Institute wants to work to create a benevolent AI to guard us against any that might follow.

Make no mistake: the AI race is on, and Raymond wants us to win.


Why Robots Can’t Rule Over Us?

Autonomous robots, equipped with in-built laser weapons and a higher order of artificial intelligence, wandering here and there in a computerised city: it is a favourite theme of sci-fi, often depicted as one of our possible futures in which machines rule over humans. In these machine-ruled cities, humans are merely a replacement for slaves. Perhaps sci-fi depictions are real to some extent, but how? Eric Drexler, a founder of nanotechnology, once predicted in one of his books robots that were able to replicate themselves and that ruled over Earth after wiping out humans: a somewhat Matrix-like continuum. We are even now slaves of machines to some extent. Artificially intelligent machines could really wreak havoc on humanity, and it is possible they could wipe us out to colonize the planet and later the solar system and galaxy. Yet if we somehow created artificially intelligent machines, I don’t think they would be rulers rather than dumb slaves. I don’t see them as a threat to humanity unless we provide them artificial emotions of victory, defeat or hostility against humans; then it would be the worst case for the whole of humanity. It is emotions that motivate us to rule over one another, to attack, and so on. Robots, or so-called killer machines, will only be a threat if we endow them with both artificial intelligence and emotions of hostility against humans. And no crazy scientist is going to do so.
