FBI on The Existence of Aliens and UFOs

By Ste Webb
1. Part of the disks carry crews, others are under remote control.
2. Their mission is peaceful. The visitors contemplate settling on this plane.
3. These visitors are human-like but much larger in size.
4. They are not excarnate Earth people, but come from their own world.
5. They do NOT come from a planet as we use the word, but from an etheric planet which interpenetrates with our own and is not perceptible to us.
6. The bodies of the visitors, and the craft, automatically materialize on entering the vibratory rate of our dense matter.
7. The disks possess a type of radiant energy, or a ray, which will easily disintegrate any attacking ship. They re-enter the etheric at will, and so simply disappear from our vision, without trace.
8. The region from which they come is not the “astral plane”, but corresponds to the Lokas or Talas. Students of esoteric matters will understand these terms.
9. They probably can not be reached by radio, but probably can be by radar, if a signal system can be devised for that apparatus.
Addendum: The disks are oval in shape with a fluted length, made of a heat-resisting metal or alloy not yet known. The front cage contains the controls, the middle portion a laboratory; the rear contains armament, which consists essentially of a powerful energy apparatus, perhaps a ray weapon.

Why Send Humans to Mars? Looking Beyond Science

By Pabulo Henrique Rampelotto

In the last decade, the human exploration of Mars has been a topic of intense debate. Much of this debate focuses on scientific reasons for sending, or not sending, humans to Mars. However, the more profound questions regarding why our natural and financial resources should be spent on such an endeavor have not been addressed in a significant way. To be successful, the human exploration of Mars needs reasons beyond science to convince the public. People are far more interested in the short-term outcomes of exploration than in any nebulous long-term benefits. Finding the right balance of science and other factors is critical to convincing taxpayers to part with $100 billion or more of their money over the next couple of decades to fund such an endeavor. In the following, I briefly explain why the colonization of Mars will bring benefits for humans on Earth, looking beyond scientific reasons.

The engineering challenges of human Mars exploration will stimulate the global industrial machine and the human mind to think innovatively and to continue operating at the edge of technological possibility. Numerous technological spin-offs will be generated during such a project, and it will require the reduction or elimination of boundaries to collaboration within the scientific community. Exploration will also foster the incredible ingenuity necessary to develop technologies for something so vast in scope and complexity. The benefits from this endeavor are by nature unknown at this time, but evidence from the space ventures undertaken thus far points to drastic improvements in daily life and potential benefits to humanity as a whole.

One example could come from the development of water recycling technologies designed to sustain a closed-loop life support system for several people for months or even years at a time (necessary if a human mission to Mars is attempted). This technology could then be applied to drought sufferers across the world, or to remote settlements far from the safety net of mainstream society. Maintaining a permanent human presence in a hostile environment like Mars will require careful use of local resources. This necessity might stimulate the development of novel methods and technologies in energy extraction and usage that could benefit terrestrial exploitation, and thus improve the management, and prolong the existence, of resources on Earth.

The study of human physiology in the Martian environment will provide unique insights into whole-body physiology, in areas such as bone physiology and neurovestibular and cardiovascular function. These areas are important for understanding various terrestrial disease processes (e.g. osteoporosis, muscle atrophy, cardiac impairment, and balance and co-ordination defects). Moreover, medical studies in the Martian environment, together with research in space medicine, will provide a stimulus for the development of innovative medical technology, much of which will be directly applicable to terrestrial medicine. In fact, several medical products already developed are space spin-offs, including the surgically implantable heart pacemaker, the implantable heart defibrillator, kidney dialysis machines, CAT scans, and radiation therapy for the treatment of cancer, among many others. Undoubtedly, all these space spin-offs have significantly improved our quality of life.

At the economic level, both the public and the private sector might benefit from a manned mission to Mars, especially if they work in synergy. Recent studies indicate a large financial return to companies that have successfully commercialized NASA life sciences spin-off products. Thousands of spin-off products have resulted from the application of space-derived technology in fields such as human resource development, environmental monitoring, natural resource management, public health, medicine and public safety, telecommunications, computers and information technology, industrial productivity and manufacturing technology, and transportation. Moreover, the space industry already contributes significantly to the economies of some countries, and with the advent of the human exploration of Mars it will increase its impact on the economies of many nations. This will include a positive impact on the economies of developing countries, since it opens new opportunities for investment.

To conclude, the human exploration of the red planet will significantly benefit all of humanity, since it has the potential to improve our quality of life, provide economic returns to companies, stimulate the economies of many nations (including developing countries), and promote international collaboration.


Trapping the Antimatter!

Creating matter’s strange cousin antimatter is tricky, but holding onto it is even trickier. Now scientists are working on a new device that may be able to trap antimatter long enough to study it.
Antimatter is like a mirror image of matter. For every matter particle (an electron, say), a matching antimatter particle is thought to exist (in this case, a positron) with the same mass but an opposite charge.

The problem is that whenever antimatter comes into contact with regular matter, the two annihilate. So any container or bottle made of matter that attempts to capture antimatter inside would be instantly destroyed, along with the precious antimatter sample one tried to put inside the bottle.

Physicist Clifford Surko of the University of California, San Diego is hard at work to overcome that issue. He and his colleagues are building what they call the world’s largest trap for low-energy positrons – a device they say will be able to store more than a trillion antimatter particles at once.

The key is using magnetic and electric fields, instead of matter, to construct the walls of an antimatter “bottle.”

“We are now working to accumulate trillions of positrons or more in a novel ‘multicell’ trap – an array of magnetic bottles akin to a hotel with many rooms, with each room containing tens of billions of antiparticles.”
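The scale of the confinement can be sketched with a quick order-of-magnitude calculation: a charged particle in a magnetic field circles around the field lines at its cyclotron frequency, which is what pins it sideways inside a magnetic bottle. The 1-tesla field strength below is an illustrative assumption, not a figure from Surko's device.

```python
import math

Q = 1.602176634e-19   # positron charge in coulombs (same magnitude as the electron's)
M = 9.1093837015e-31  # positron mass in kg (same as the electron's)

def cyclotron_frequency(b_tesla):
    """Frequency at which a positron orbits a magnetic field line."""
    return Q * b_tesla / (2 * math.pi * M)

f = cyclotron_frequency(1.0)  # assumed 1 T trapping field
print(f"{f / 1e9:.0f} GHz")   # ~28 GHz: the positron is tightly tied to the field line
```

At tens of gigahertz, the orbit is far faster than any drift toward the walls, which is why fields rather than matter can hold the sample.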

Surko presented his work today (Feb. 18) here at the annual meeting of the American Association for the Advancement of Science.

The researchers are also developing methods to cool antiparticles to super-cold temperatures so that the particles’ movements are slowed and they can be studied. The scientists also want to compress large clouds of antiparticles into high-density clumps that can be tailored for practical applications.

“One can then carefully push them out of the bottle in a thin stream, a beam, much like squeezing a tube of toothpaste. These beams provide new ways to study how antiparticles interact or react with ordinary matter. They are very useful, for example, in understanding the properties of material surfaces.”

Surko said another project is to create a portable antimatter bottle that could be taken out of the lab and into various industrial and medical situations.
“If you could have a portable trap it would greatly amplify the uses and applications of antimatter in our world.”

Antimatter may sound exotic, but it’s already used in everyday technology, such as medical PET (Positron Emission Tomography) scanners. During a PET scan, the patient is injected with radioactive tracer molecules that emit positrons when they decay. These positrons then come into contact with electrons in the body, and the two annihilate, releasing two gamma-ray photons. The gamma-ray photons are then detected by the scanner, giving a 3-D image of what’s going on inside the body.
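The energy of those gamma-ray photons follows directly from E = mc²: each photon carries away the rest-mass energy of one member of the electron-positron pair. A quick check with the standard physical constants:

```python
M_E = 9.1093837015e-31  # electron/positron rest mass, kg
C = 2.99792458e8        # speed of light, m/s
EV = 1.602176634e-19    # joules per electronvolt

# Each annihilation yields two photons; each carries one particle's rest energy.
photon_energy_kev = M_E * C**2 / EV / 1e3
print(f"{photon_energy_kev:.0f} keV")  # 511 keV, the signature PET scanners look for
```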
[Via: LiveScience]

Multifunctional Carbon Nanotubes – Introduction and Applications

[Image via Wikipedia: animation of a rotating carbon nanotube]

Over the past several decades there has been explosive growth in research and development related to nanomaterials. Among these, one material, carbon nanotubes, has led the way in terms of its fascinating structure as well as its ability to provide function-specific applications ranging from electronics to energy and biotechnology. Carbon nanotubes (CNTs) can be viewed as carbon whiskers: tubules of nanometer dimensions with properties close to those of an ideal graphite fiber. Due to their distinctive structure they can be considered matter in one dimension (1D).

In other words, a carbon nanotube is a honeycomb lattice rolled onto itself, with diameters of the order of nanometers and lengths of up to several micrometers. Generally, two distinct types of CNTs exist, depending on whether the tubes are made of more than one graphene sheet (multi-walled carbon nanotube, MWNT) or only one graphene sheet (single-walled carbon nanotube, SWNT). For a detailed description of CNTs please refer to the article by Prof. M. Endo.

A Truly Multifunctional Material

Irrespective of the number of walls, CNTs are envisioned as new engineering materials which possess unique physical properties suitable for a variety of applications. Such properties include large mechanical strength, exotic electrical characteristics, and superb chemical and thermal stability. Specifically, the development of techniques for growing carbon nanotubes in a very controlled fashion (such as aligned CNT architectures on various substrates) as well as on a large scale presents investigators all over the world with enhanced possibilities for applying these controlled CNT architectures to fields such as vacuum microelectronics, cold-cathode flat panel displays, field emission devices, vertical interconnect assemblies, gas breakdown sensors, bio-filtration, and on-chip thermal management.

Apart from their outstanding structural integrity and chemical stability, the property that makes carbon nanotubes truly multifunctional is the fact that they have a lot to offer (literally) in terms of specific surface area. Depending on the type of CNT, specific surface areas may range from 50 m²/g to several hundred m²/g, and with appropriate purification processes they can be increased up to ~1000 m²/g.
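As a plausibility check on those figures (not a calculation from the article), one can estimate the specific surface area of a single graphene sheet from its geometry, since a nanotube wall is just such a sheet; the carbon-carbon bond length used is the standard textbook value.

```python
import math

A_CC = 1.42e-10             # carbon-carbon bond length in graphene, m
M_C = 12.011 * 1.66054e-27  # mass of one carbon atom, kg

# A graphene hexagon of side A_CC has area (3*sqrt(3)/2)*A_CC^2 and holds
# two carbon atoms, so the area per atom is half of that.
area_per_atom = (3 * math.sqrt(3) / 2) * A_CC**2 / 2

# Specific surface area with one face exposed, converted from m^2/kg to m^2/g.
ssa_one_side = area_per_atom / M_C / 1000
print(f"{ssa_one_side:.0f} m^2/g")  # ~1300 m^2/g, in line with the ~1000 m^2/g figure
```

Real nanotube samples fall below this geometric ceiling because of closed ends, bundling, and impurities, which is exactly why the purification step mentioned above raises the measured value.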

Extensive theoretical and experimental studies have shown that the presence of large specific surface areas is accompanied by the availability of different adsorption sites on the nanotubes. For example, in CNTs produced using catalyst-assisted chemical vapor deposition, adsorption occurs only on the outer surface of the curved cylindrical wall. This is because the production process using metal catalysts usually leads to nanotubes with closed ends, restricting access to the hollow interior of the tube.

However, there are simple procedures (mild chemical or thermal treatments) which can remove the end caps of MWNTs, thereby opening up another adsorption site (inside the tube), as schematically shown in Figure 1. Similarly, the large-scale production process of SWNTs leads to bundling of the tubes. Due to this bundling effect, SWNT bundles provide various high-energy binding sites (for example, grooves; Figure 1). This means that large surfaces are available in a small volume, and these surfaces can interact with other species or be tailored and functionalized.

Figure 1: Possible binding sites available for adsorption on (left) MWNTs and (right) SWNTs surfaces.

Our group’s own research interests are directed toward utilizing these materials in different applications related to energy and the environment, where their high specific surface areas play a crucial role. Two such energy-related applications are discussed below:

  • CNT Based Electrochemical Double Layer Capacitors
  • CNT Based catalyst support

CNT Based Electrochemical Double Layer Capacitors

Electrochemical Double Layer Capacitors (EDLCs, also referred to as supercapacitors and ultracapacitors) are envisioned as devices capable of providing high energy density as well as high power density. With extremely long life-spans and charge-discharge cycle capabilities, EDLCs are finding versatile applications in the military, space, transportation, telecommunications and nanoelectronics industries.

An EDLC contains two non-reactive porous plates (electrodes or collectors with extremely high specific surface area), separated by a porous membrane and immersed in an electrolyte. Various studies have shown the suitability of CNTs as EDLC electrodes. However, proper integration of CNTs with collector electrodes is needed to minimize the overall device resistance and thus enhance the performance of CNT-based supercapacitors. One strategy is to grow CNTs directly on metal surfaces and use them as EDLC electrodes (Figure 2). EDLC electrodes with very low equivalent series resistance (ESR) and high power densities can be obtained with such approaches.
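To see why surface area dominates EDLC performance, here is a rough, illustrative estimate; the double-layer capacitance per unit area (~10 µF/cm²) and the 2.5 V cell voltage are typical textbook values, not measurements from this work.

```python
C_DL = 0.10   # double-layer capacitance, F/m^2 (~10 uF/cm^2, assumed)
SSA = 500.0   # specific surface area of the CNT electrode, m^2/g (mid-range from the text)
V = 2.5       # cell voltage, volts (typical for an organic electrolyte)

capacitance = C_DL * SSA            # farads per gram of electrode
energy = 0.5 * capacitance * V**2   # stored energy, joules per gram
print(f"{capacitance:.0f} F/g, {energy / 3.6:.0f} Wh/kg")  # 50 F/g, ~43 Wh/kg
```

Doubling the accessible surface area doubles the capacitance directly, which is why opened end caps and unbundled tubes matter so much for these devices.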

Figure 2: (a) Artist’s rendition of an EDLC formed by aligned MWNTs grown directly on metal; (b) an electrochemical impedance spectroscopy plot showing the low ESR of such EDLC devices; and (c) very symmetric, near-rectangular cyclic voltammograms of such devices, indicating impressive capacitance behavior.

CNT Based Catalyst Support

Catalysts play an important role in our existence today. Catalysts are small particles (~10^-9 meter, or a nanometer) which, due to their unique surface properties, can enhance important chemical reactions leading to useful products. In any kind of catalytic process, the catalysts are dispersed on high-surface-area materials known as the catalyst support. The support provides mechanical strength to the catalysts, in addition to enhancing the specific catalytic surface area and the reaction rates. CNTs, due to their high specific surface areas, outstanding mechanical and thermal properties, and chemical stability, can potentially become the material of choice for catalyst support in a variety of catalyzed chemical reactions.

We are presently exploring the idea of using CNTs as catalyst support in the Fischer-Tropsch (FT) synthesis process. The FT reaction can convert a mixture of carbon monoxide and hydrogen into a wide range of straight-chained and branched olefins, paraffins, and oxygenates (leading to the production of high-quality synthetic fuels). Our preliminary FT synthesis experiments on CNT-supported FT catalysts (generally cobalt and iron) show that the conversion of CO and H2 obtained with FT-catalyst-loaded CNTs is orders of magnitude higher than that obtained with conventional FT catalysts (Figure 3), indicating that CNTs offer a new breed of non-oxide catalyst supports with superior performance for FT synthesis.

Figure 3: CNT paper used as catalyst support for FT synthesis, and a comparison of conversion ratios of CO and H2.

So far, CNT research has provided substantial excitement and novel possibilities in developing applications based on interdisciplinary nanotechnology. The area of large-scale growth of CNTs is quite mature now, and hence it can be expected that several solid large-volume applications will emerge in the near future.

[Source: Azonano]

Futurism: Social and Legal Rights of Robots

By R. A. Freitas

If we give rights to intelligent machines, either robots or computers, we’ll also have to hold them responsible for their own errors. Robots, by analogy to humans, must conform to a “reasonable computer” standard. Sentient computers and their software should be held to the standard of competence of all other data processing systems of the same technological generation. Thus, if all “sixth generation” computers ought to be smart enough to detect bogus input in some circumstances, then given that circumstance, a court will presume that a “sixth generation” computer knew or should have known the input data were bogus.

Exactly who or what would be the tortfeasor in these cases? Unlike a living being whose mind and body are inseparable, a robot’s mind (software) and body are severable and distinct. This is an important distinction. Robot rights most logically should reside in the mechanism’s software (the programs executing in the robot’s computer brain) rather than in its hardware.

This can get mighty complicated. Robots could be instantly reprogrammed, perhaps loading and running a new software applications package every hour. Consider a robot who commits a felony while running the aggressive “Personality A” program, but is running the mild-mannered “Personality M” when collared by the police. Is this a false arrest? Following conviction, are all existing copies of the criminal software package guilty too, and must they suffer the same punishment? (Guilt by association?) If not, is it double jeopardy to take another copy to trial? The robot itself could be released with its aggressive program excised from memory, but this may offend our sense of justice.

The bottom line is that it’s hard to apply human laws to robot persons. Let’s say a human shoots a robot, causing it to malfunction, lose power, and “die.” But the robot, once “murdered,” is rebuilt as good as new. If copies of its personality data are in safe storage, then the repaired machine’s mind can be reloaded and up and running in no time – no harm done, and possibly even without memory of the incident. Does this convert murder into attempted murder? Temporary roboslaughter? Battery? Larceny of time? We’ll probably need a new class of felonies or “cruelty to robots” statutes to deal with this.

If robots are persons, will the Fifth Amendment protect them from self-incrimination? Under present law, a computer may be compelled to testify, even against itself, without benefit of the Fifth Amendment. Can a warrant be issued to search the mind of a legal person? If not, how can we hope to apprehend silicon-collar criminals in a world of electronic funds transfer and Quotron stock trading?

How should deviant robots be punished? Western penal systems assume that punishing the guilty body punishes the guilty mind – invalid for computers whose electromechanical body and software mind are separable. What is cruel and unusual punishment for a sentient robot? Does reprogramming a felonious computer person violate constitutional privacy or other rights?

Robots and software persons are entitled to protection of life and liberty. But does “life” imply the right of a program to execute, or merely to be stored? Denying execution would be like keeping a human in a permanent coma – which seems unconstitutional. Do software persons have a right to the data they need in order to keep executing? Can robot citizens claim social benefits? Are unemployed robo-persons entitled to welfare? Medical care, including free tuneups at the government machine shop? Electricity stamps? Free education? Family and reproductive rights? Don’t laugh. A recent NASA technical study found that self-reproducing robots could be developed today in a 20-year Manhattan-Project-style effort costing less than $10 billion (NASA Conference Publication 2255, 1982).

In the far distant future, there may be a day when vociferous robo-lobbyists pressure Congress to fund more public memory banks, more national network microprocessors, more electronic repair centers, and other silicon-barrel projects. The machines may have enough votes to turn the rascals out or even run for public office themselves. One wonders which political party or social class the “robot bloc” will occupy.

In any case, the next time that Coke machine steals your quarter, better think twice before you kick it. Someday you may need a favor.

Rogue Planets Could Harbour Life!

In recent years, computers have become powerful enough to simulate the formation and evolution of planetary systems over many billions of years. One of the surprises to come out of this work is that planets are regularly kicked out of these systems by slingshot effects. By some calculations, this fate may still await planets in our own Solar System. One interesting question is whether these so-called “rogue planets” could ever support life in the cold dark reaches of interstellar space.

Today, Dorian Abbot and Eric Switzer at the University of Chicago give us an answer. The generally accepted criterion for life is the presence of liquid water. They calculate that an Earth-like rogue planet could support liquid oceans if the water were heated from below by the planet’s core and insulated from above by a thick layer of ice. Their reasoning is straightforward. They define an Earth-like planet as having dimensions within an order of magnitude of Earth’s and a similar composition. They then calculate the heat flux from the core and suggest that the thickness of the ice above would reach a steady state in about a million years. That’s much shorter than the lifetime of a hot core.
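The flavor of that steady-state argument can be reproduced with a one-line heat-conduction balance: the ice thickens until the conductive flux through it, k·ΔT/d, matches the geothermal flux from below. The numbers used here are generic Earth-like assumptions for illustration, not values from Abbot and Switzer's paper.

```python
Q_GEO = 0.09    # geothermal heat flux, W/m^2 (Earth-like assumption)
K_ICE = 2.2     # thermal conductivity of ice, W/(m*K), treated as constant
T_TOP = 35.0    # surface temperature of a sunless planet, K (assumed)
T_BASE = 273.0  # melting temperature at the ice-ocean interface, K

# Steady state: Q_GEO = K_ICE * (T_BASE - T_TOP) / d  =>  solve for the thickness d
d = K_ICE * (T_BASE - T_TOP) / Q_GEO
print(f"{d / 1000:.0f} km of ice")  # a few kilometres of ice, liquid water beneath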

Note that this is somewhat different from the mechanism that keeps the subglacial ocean on Europa liquid. There, tidal forces play an important role, generating heat within the ocean itself. By contrast, all the heat in a rogue planet must come from the core and travel through the ocean. One important unknown is the role that convection and conduction play in the less viscous regions of ice. Since convection carries heat much more quickly than conduction, this is an important factor and could make the difference between a liquid ocean and solid ice.

But with reasonable assumptions, Abbot and Switzer say that a planet just 3.5 times the mass of Earth could maintain a liquid ocean. Even more surprising is their conclusion that a planet with a higher fraction of water need only be 0.3 times the size of Earth to have a liquid ocean. That’s smaller than Venus but bigger than Mars. They call such a body a Steppenwolf planet, “since any life in this strange habitat would exist like a lone wolf wandering the galactic steppe.” It’s not hard to imagine the possibility of life evolving around hydrothermal vents before the planet’s ejection, or even afterwards. These are exciting calculations.

Steppenwolf planets would provide one way for life to spread through the galaxy. And if any come within 1,000 AU of our Sun, their reflected sunlight ought to be visible in the far infrared to the next generation of telescopes. That raises an interesting idea: the possibility of visiting such a place. Any passers-by would certainly be easier to get to than planets orbiting other stars.

Time to get out the binoculars and lens cloths and start looking.

[Ref: arxiv.org/abs/1102.1108: The Steppenwolf: A Proposal For A Habitable Planet in Interstellar Space. Credit: Arxiv Blog]

Fate of Our Civilization and Tactics

[Image via Wikipedia: oxygen content of the atmosphere]

By J. R. Mooneyham

Extinction. Or collapse into a permanent medieval (or worse) state of anarchy and deprivation. These appear to be the normal ends of technological civilizations in our galaxy, based on everything we know circa early 2003. The above statement is not made lightly. Rather, it is a conclusion based on more than a decade of dedicated research into the matter.

The Fermi Paradox which contrasts the 100% probability of life and intelligence developing on Earth against the thunderous silence from the heavens so far (no alien signals) may be resolved by four things: One, gamma ray bursters which may have effectively prohibited the development of sentient races until only the last 200 million years; Two, the lengthy gestation period required for the emergence of intelligence (which almost requires the entire useful lifespan of a given planet, based on our own biography); Three, the need for an unusually high measure of stability in terms of climate over hundreds of millions of years (the ‘Goldilocks’ scenario, enabled by a huge natural satellite like our Moon moderating the tilt of a planet’s axis, as well as gas giants parked in proper orbits to mop up excess comets and asteroids to reduce impact frequencies for a living world); and Four, an extremely dangerous 600 year or so ‘gauntlet’ of challenges and risks most any technological society must survive to become a viable long term resident of the galaxy (i.e. getting a critical mass of population and technology off their home world, among other things). That 600 year period may be equivalent to our own span between 1900 AD and 2500 AD, wherein we’ll have to somehow dodge the bullets of cosmic impacts, nuclear, biological, and nanotechnological war, terrorism, mistakes, and accidents, as well as food or energy starvation, economic collapse, and many other threats, both natural and unnatural. So far it appears (according to SETI results and other scientific discoveries) extremely few races likely survive all these.

There are six major guiding principles by which to defend civilization against the worst possible threats to its future:

  • One, remove or minimize the sources of all reasonable motivations to harm others from the entirety of humanity – as well as the means to carry out such harm
  • Two, put into place and maintain robust structural impediments to, and socio-economic discouragements of, the domination of the many by a wealthy, powerful, or charismatic few
  • Three, insure the utmost education and technological empowerment possible of the average individual world citizen, wherever this does not unreasonably conflict with the other principles listed here.
  • Four, work to preserve existing diversity in life on Earth and its natural environments, as well as in human behavior, culture, media, languages, and technologies, and even nourish expansion in such diversity within human works, wherever this may be accomplished with minimal conflict regarding the other principles listed here.
  • Five, excesses in intellectual property protections, censorship, and secrecy all basically amount to the same thing so far as posing threats to the robustness, prosperity, security (and even survival) of civilization is concerned. Therefore all three must be deliberately and perpetually constrained to the minimum applications possible to protect humanity. In these matters it would typically be far better to err on the side of accessibility, openness, and disclosure than the reverse.
  • Six, seek out and implement ever better ways to document human knowledge and experience in the widest, deepest, and most accurate fashions possible for both the present and future of humanity, and offer up this recorded information freely to the global public for examination. This means the more raw the data, and the more directly sourced, the better. The more raw the data and less colored by opinions of the day, the better present and future citizens will be able to apply ever improving tools of scientific analysis to derive accurate results, and drive important decisions.

Work faithfully and relentlessly to implement and continue the enforcement of these six principles into perpetuity (always seeking the optimal balance between them all), and you should reduce overall risk levels for civilization to that stemming from true mental illness or pure accidents.

Robust and enlightened public health programs (among other things) can reduce the total risk of mental illness to society to negligible levels. That would leave the risk of accidents to deal with. Reducing the risks presented by various accidental events is another subject in itself, one I’ll leave to others to address.

This is especially true in a world where shortages of money, talent, knowledge, and time still define our economics and society more than anything else. Anyone working to achieve one or more of these aims immediately encounters active opposition from various quarters, too. That may sound hard to believe, but look at a few examples. Cuts in military spending, even in the most advanced and highly developed nations like the USA, face stiff opposition from many politicians, because defense cuts are apparently less popular with voters than defense budget increases – almost no matter how peaceful the world happens to be at the time. Any cuts that do somehow get passed can often only be implemented by shutting down unneeded bases or various extravagant weapons programs. But either of those considerations brings up cries of “lost jobs”, even in good times when those jobs might easily be replaced with other, less lethal ones. Weapons proliferation around the world is likewise often defended as generating jobs at home, despite the fact that those weapons often end up being used by naughty allies to kill innocents in conflicts where we ourselves have little or no involvement – except for our brand name and label being prominently emblazoned on the blasted shards in various scenes of mass death and destruction. Later on we often wonder why people on the receiving end of these weapons (in the hands of others) hate us so. And sometimes the weapons we sell end up being used against our own soldiers. But still we sell them, and sometimes even give them away.

Maybe aiding the spread of democracy and free speech through the world would seem an easier goal than stopping the proliferation of weapons and weapon technologies? Sorry, but no. Indeed, here in America our track record for a long time now is behavior that says democracy and free speech are too good for lots of folks other than ourselves. You see, the ill will built up from all that weapons proliferation, plus other actions on our part, has resulted in lots of countries where we’d be tossed out on our ear if real democracies suddenly sprang up in them.

Like what actions am I talking about? Things like manipulating elections and interfering with other attempts at legitimate changeovers of power in foreign countries. CIA involvement to prop up dictatorships with whom we have deals for things like oil or other items. Stuff like that. There’s no telling how many democratic movements we’ve helped crush or caused to be stillborn around the world in the past century. Of course, you could say we were just emulating our parent countries, such as those of western Europe, which did many of the same things for several centuries before we ourselves successfully rebelled against them.

It’s almost like we don’t want any other rebellions to succeed, in order to retain our own ‘special place’ in history. But is that fair? No.

Of course, sometimes a nation manages to overthrow its oppressors despite our opposition and dirty tricks. But when that happens, our previous sins in the conflict result in whatever new government emerges being dead-set against us. Like in Iran, with the fall of the Shah. Our interference with their internal affairs so antagonized and polarized the Iranians that one result was eventual domination of the country by an Islamic extremist movement, which managed to overthrow the US-supported Shah. And naturally, when things didn’t go our way there we froze Iran’s assets and put in place trade sanctions against them. And in response, they may be seeking to obtain their own weapons of mass destruction and supporting various terrorist actions around the world.

Could it be we are gradually arranging our own (maybe even civilization itself’s) spectacular end with all this chicanery? For the longer we continue this type of behavior, the more difficult and scary it becomes to consider stopping it. And the worse the eventual consequences might be. After all, we’re making a lot of enemies out there. A pretty hefty chunk of the human race, in fact. If and when they all finally overthrow their US-supported dictators or oppressive ruling regimes, they might not exactly want to send us flowers.

I vote we try to find a way out of this mess now rather than prolonging and worsening it with politics-and-economics-as-usual. Before it’s too late. Before our world too becomes one of the silent ones in the galaxy.

 

Mass Extinctions Linked to Loss in Biodiversity

The mass extinctions that occurred in Earth’s unrecorded history remain among the great unanswered questions. Two of the greatest mass extinctions in Earth’s history may have been caused by the loss of diversity in the oceans. New research shows that the die-off of species may have ultimately led to the collapse of marine ecosystems. The study could be an ominous warning for the future of life on Earth as modern ocean diversity begins to dwindle. Conservation biologists regularly note the precipitous decline of key species, such as cod, bluefin tuna, swordfish and sharks. Lose enough of these top-line predators (among other species), and the fear is that the oceanic web of life may collapse.

In a new paper in Geology, researchers at Brown University, Rhode Island, and the University of Washington used a group of marine creatures similar to today’s nautilus to examine the collapse of marine ecosystems that coincided with two of the greatest mass extinctions in the Earth’s history. They attribute the ecosystems’ collapse to a loss of enough species occupying the same space in the oceans, called ‘ecological redundancy.’ While the term is not new, the paper marks the first time that a loss of ecological redundancy is directly blamed for a marine ecosystem’s collapse in the fossil record. Just as ominously, the authors write that it took up to 10 million years after the mass extinctions for enough variety of species to repopulate the ocean – restoring ecological redundancy – for the ecosystem to stabilise.

It’s definitely a cautionary tale because we know it’s happened at least twice before. And you have long periods of time before you have reestablishment of ecological redundancy. If the theory is true, the implications could not be clearer today.
“In effect, we are currently responsible for the sixth major extinction event in the history of the Earth, and the greatest since the dinosaurs disappeared 65 million years ago,” the 2006 report states.

According to the United Nations-sponsored report Global Biodiversity Outlook 2, the populations of nearly one-third of the marine species that were tracked declined over the three decades that ended in 2000. The numbers were the same for land-based species. Whiteside and co-author Peter Ward studied the mass extinction that ended the Permian period 250 million years ago and another that brought the Triassic to a close roughly 200 million years ago. Both periods are generally believed to have ended with global spasms of volcanic activity. The abrupt change in climate stemming from the volcanism, notably a spike in greenhouse gases in the atmosphere, decimated species on land and in the oceans, wiping out approximately 90% of existing marine species at the Permian-Triassic boundary and 72% at the Triassic-Jurassic.

The widespread loss of marine life and the abrupt change in global climate caused the carbon cycle, a broad indicator of life and death and outside influences in the oceans, to fluctuate wildly. The authors noted these “chaotic carbon episodes” and their effects on biodiversity by studying carbon isotopes spanning these periods.

The researchers further documented species collapse in the oceans by compiling a 50-million-year fossil record of ammonoids, predatory squid-like creatures that lived inside coiled shells, found embedded in rocks throughout western Canada. The pair found that two general types of ammonoids, those that could swim around and pursue prey and those that simply floated throughout the ocean, suffered major losses. The fossil record after the end-Permian and end-Triassic mass extinctions shows a glaring absence of swimming ammonoids, which, because they compete with other active predators including fish, is interpreted as a loss of ecological redundancy.

It means that during these low-diversity times, there are only one or two [ammonoid] taxa that are performing. It’s a much more simplified food chain. Only when the swimming ammonoids reappear alongside their floating brethren does the carbon isotope record stabilise and the ocean ecosystem fully recover, the authors report. That’s when we say ecological redundancy is reestablished.

An alternative, and perhaps more viable, theory proposed by Victor Babbit is that a release of sulfur-dominated gases caused the extinction of the dinosaurs. Look over his paper and judge for yourself whether it is plausible. My answer is “Yes! Why not?”

[Source: Cosmos Magazine]

Can the Vacuum Be Engineered for Space Flight Applications? Overview of Theory and Experiments


By H. E. Puthoff

Quantum theory predicts, and experiments verify, that empty space (the vacuum) contains an enormous residual background energy known as zero-point energy (ZPE). Originally thought to be of significance only for such esoteric concerns as small perturbations to atomic emission processes, it is now known to play a role in large-scale phenomena of interest to technologists as well, such as the inhibition of spontaneous emission, the generation of short-range attractive forces (e.g., the Casimir force), and the possibility of accounting for sonoluminescence phenomena. ZPE topics of interest for spaceflight applications range from fundamental issues (where does inertia come from, can it be controlled?), through laboratory attempts to extract useful energy from vacuum fluctuations (can the ZPE be “mined” for practical use?), to scientifically grounded extrapolations concerning “engineering the vacuum” (is “warp-drive” space propulsion a scientific possibility?). Recent advances in research into the physics of the underlying ZPE indicate the possibility of potential application in all these areas of interest.

The concept of “engineering the vacuum” was first introduced by Nobel Laureate T. D. Lee in his book Particle Physics and Introduction to Field Theory. As stated in Lee’s book: “The experimental method to alter the properties of the vacuum may be called vacuum engineering…. If indeed we are able to alter the vacuum, then we may encounter some new phenomena, totally unexpected.” Recent experiments have indeed shown this to be the case.

With regard to space propulsion, the question of engineering the vacuum can be put succinctly: “Can empty space itself provide the solution?” Surprisingly enough, there are hints that potential help may in fact emerge quite literally out of the vacuum of so-called “empty space.” Quantum theory tells us that empty space is not truly empty, but rather is the seat of myriad energetic quantum processes that could have profound implications for future space travel. To understand these implications it will serve us to review briefly the historical development of the scientific view of what constitutes empty space.

At the time of the Greek philosophers, Democritus argued that empty space was truly a void, otherwise there would not be room for the motion of atoms. Aristotle, on the other hand, argued equally forcefully that what appeared to be empty space was in fact a plenum (a background filled with substance), for did not heat and light travel from place to place as if carried by some kind of medium? The argument went back and forth through the centuries until finally codified by Maxwell’s theory of the luminiferous ether, a plenum that carried electromagnetic waves, including light, much as water carries waves across its surface. Attempts to measure the properties of this ether, or to measure the Earth’s velocity through the ether (as in the Michelson-Morley experiment), however, met with failure. With the rise of special relativity, which did not require reference to such an underlying substrate, Einstein in 1905 effectively banished the ether in favor of the concept that empty space constitutes a true void. Ten years later, however, Einstein’s own development of the general theory of relativity, with its concept of curved space and distorted geometry, forced him to reverse his stand and opt for a richly-endowed plenum, under the new label spacetime metric.

It was the advent of modern quantum theory, however, that established the quantum vacuum, so-called empty space, as a very active place, with particles arising and disappearing, a virtual plasma, and fields continuously fluctuating about their zero baseline values. The energy associated with such processes is called zero-point energy (ZPE), reflecting the fact that such activity remains even at absolute zero.

The Vacuum As A Potential Energy Source

At its most fundamental level, we now recognize that the quantum vacuum is an enormous reservoir of untapped energy, with energy densities conservatively estimated by Feynman and Hibbs to be on the order of nuclear energy densities or greater. Therefore, the question is, can the ZPE be “mined” for practical use? If so, it would constitute a virtually ubiquitous energy supply, a veritable “Holy Grail” energy source for space propulsion.

As utopian as such a possibility may seem, physicist Robert Forward at Hughes Research Laboratories demonstrated proof-of-principle in a paper, “Extracting Electrical Energy from the Vacuum by Cohesion of Charged Foliated Conductors.” Forward’s approach exploited a phenomenon called the Casimir Effect, an attractive quantum force between closely-spaced metal plates, named for its discoverer, H. B. G. Casimir of Philips Laboratories in the Netherlands. The Casimir force, recently measured with high accuracy by S. K. Lamoreaux at the University of Washington, derives from partial shielding of the interior region of the plates from the background zero-point fluctuations of the vacuum electromagnetic field. As shown by Los Alamos theorists Milonni et al., this shielding results in the plates being pushed together by the unbalanced ZPE radiation pressures. The result is a corollary conversion of vacuum energy to some other form such as heat. Proof that such a process violates neither energy conservation nor thermodynamic constraints can be found in a paper by a colleague and myself (Cole & Puthoff) under the title “Extracting Energy and Heat from the Vacuum.”
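To get a feel for the magnitudes involved, the standard textbook expression for the Casimir pressure between ideal, perfectly conducting parallel plates is P = π²ħc/(240 d⁴), where d is the plate separation. The short Python sketch below (not from the article; it ignores finite-conductivity and roughness corrections) shows why the effect only matters at nanometre scales:

```python
# Casimir pressure between ideal parallel conducting plates:
#   P = pi^2 * hbar * c / (240 * d^4)
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s

def casimir_pressure(d):
    """Attractive pressure (Pa) between ideal plates separated by d metres."""
    return math.pi**2 * HBAR * C / (240 * d**4)

# The 1/d^4 scaling means the force is enormous at nanometre gaps
# (~1 atm at 10 nm) but negligible at everyday separations.
for d in (10e-9, 100e-9, 1e-6):
    print(f"d = {d*1e9:7.1f} nm -> P = {casimir_pressure(d):.3e} Pa")
```

At a 10 nm gap the attraction is on the order of atmospheric pressure, which is why Lamoreaux-style measurements work with sub-micron separations.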

Attempts to harness the Casimir and related effects for vacuum energy conversion are ongoing in our laboratory and elsewhere. The fact that its potential application to space propulsion has not gone unnoticed by the Air Force can be seen in its request for proposals for the FY-1986 Defense SBIR Program. Under entry AF86-77, Air Force Rocket Propulsion Laboratory (AFRPL), Topic: Non-Conventional Propulsion Concepts, we find the statement: “Bold, new non-conventional propulsion concepts are solicited…. The specific areas in which AFRPL is interested include… (6) Esoteric energy sources for propulsion including the zero point quantum dynamic energy of vacuum space.”

Several experimental formats for tapping the ZPE for practical use are under investigation in our laboratory. An early one of interest is based on the idea of a Casimir pinch effect in non-neutral plasmas, basically a plasma equivalent of Forward’s electromechanical charged-plate collapse. The underlying physics is described in a paper submitted for publication by myself and a colleague, and it is illustrative that the first of several patents issued to a consultant to our laboratory, K. R. Shoulders (1991), contains the descriptive phrase “…energy is provided… and the ultimate source of this energy appears to be the zero-point radiation of the vacuum continuum.” Another intriguing possibility is provided by the phenomenon of sonoluminescence, bubble collapse in an ultrasonically-driven fluid which is accompanied by intense, sub-nanosecond light radiation. Although the jury is still out as to the mechanism of light generation, Nobelist Julian Schwinger (1993) has argued for a Casimir interpretation. Possibly related experimental evidence for excess heat generation in ultrasonically-driven cavitation in heavy water is claimed in an EPRI Report by E-Quest Sciences, although attributed to a nuclear micro-fusion process. Work is under way in our laboratory to see if this claim can be replicated.

Yet another proposal for ZPE extraction is described in a recent patent (Mead & Nachamkin, 1996). The approach proposes the use of resonant dielectric spheres, slightly detuned from each other, to provide a beat-frequency downshift of the more energetic high-frequency components of the ZPE to a more easily captured form. We are discussing the possibility of a collaborative effort between us to determine whether such an approach is feasible. Finally, an approach utilizing micro-cavity techniques to perturb the ground state stability of atomic hydrogen is under consideration in our lab. It is based on a paper of mine (Puthoff, 1987) in which I put forth the hypothesis that the nonradiative nature of the ground state is due to a dynamic equilibrium in which radiation emitted due to accelerated electron ground state motion is compensated by absorption from the ZPE. If this hypothesis is true, there exists the potential for energy generation by the application of the techniques of so-called cavity quantum electrodynamics (QED). In cavity QED, excited atoms are passed through Casimir-like cavities whose structure suppresses electromagnetic cavity modes at the transition frequency between the atom’s excited and ground states. The result is that the so-called “spontaneous” emission time is lengthened considerably (for example, by factors of ten), simply because spontaneous emission is not so spontaneous after all, but rather is driven by vacuum fluctuations. Eliminate the modes, and you eliminate the zero-point fluctuations of the modes, hence suppressing decay of the excited state. As stated in a review article on cavity QED in Scientific American, “An excited atom that would ordinarily emit a low-frequency photon can not do so, because there are no vacuum fluctuations to stimulate its emission.” In its application to energy generation, mode suppression would be used to perturb the hypothesized dynamic ground state absorption/emission balance to lead to energy release.

An example in which Nature herself may have taken advantage of energetic vacuum effects is discussed in a model published by ZPE colleagues A. Rueda of California State University at Long Beach, B. Haisch of Lockheed-Martin, and D. Cole of IBM (1995). In a paper published in the Astrophysical Journal, they propose that the vast reaches of outer space constitute an ideal environment for ZPE acceleration of nuclei and thus provide a mechanism for “powering up” cosmic rays. Details of the model would appear to account for other observed phenomena as well, such as the formation of cosmic voids. This raises the possibility of utilizing a “sub-cosmic-ray” approach to accelerate protons in a cryogenically-cooled, collision-free vacuum trap and thus extract energy from the vacuum fluctuations by this mechanism.

The Vacuum as the Source of Gravity and Inertia

What of the fundamental forces of gravity and inertia that we seek to overcome in space travel? We have phenomenological theories that describe their effects (Newton’s Laws and their relativistic generalizations), but what of their origins?

The first hint that these phenomena might themselves be traceable to roots in the underlying fluctuations of the vacuum came in a study published by the well-known Russian physicist Andrei Sakharov. Searching to derive Einstein’s phenomenological equations for general relativity from a more fundamental set of assumptions, Sakharov came to the conclusion that the entire panoply of general relativistic phenomena could be seen as induced effects brought about by changes in the quantum-fluctuation energy of the vacuum due to the presence of matter. In this view the attractive gravitational force is more akin to the induced Casimir force discussed above than to the fundamental inverse-square-law Coulomb force between charged particles with which it is often compared. Although speculative when first introduced by Sakharov, this hypothesis has led to a rich and ongoing literature, including contributions of my own on quantum-fluctuation-induced gravity, a literature that continues to yield deep insight into the role played by vacuum forces.

Given an apparent deep connection between gravity and the zero-point fluctuations of the vacuum, a similar connection must exist between these selfsame vacuum fluctuations and inertia. This is because it is an empirical fact that the gravitational and inertial masses have the same value, even though the underlying phenomena are quite disparate. Why, for example, should a measure of the resistance of a body to being accelerated, even if far from any gravitational field, have the same value that is associated with the gravitational attraction between bodies? Indeed, if one is determined by vacuum fluctuations, so must the other. To get to the heart of inertia, consider a specific example in which you are standing on a train in the station. As the train leaves the platform with a jolt, you could be thrown to the floor. What is this force that knocks you down, seemingly coming out of nowhere? This phenomenon, which we conveniently label inertia and go on about our physics, is a subtle feature of the universe that has perplexed generations of physicists from Newton to Einstein. Since in this example the sudden disquieting imbalance results from acceleration “relative to the fixed stars,” in its most provocative form one could say that it was the “stars” that delivered the punch. This key feature was emphasized by the Austrian philosopher of science Ernst Mach, and is now known as Mach’s Principle. Nonetheless, the mechanism by which the stars might do this deed has eluded convincing explication.

Addressing this issue in a paper entitled “Inertia as a Zero-Point Field Lorentz Force,” my colleagues and I (Haisch, Rueda & Puthoff, 1994) were successful in tracing the problem of inertia and its connection to Mach’s Principle to the ZPE properties of the vacuum. In a sentence, although a uniformly moving body does not experience a drag force from the (Lorentz-invariant) vacuum fluctuations, an accelerated body meets a resistance (force) proportional to the acceleration. By accelerated we mean, of course, accelerated relative to the fixed stars. It turns out that an argument can be made that the quantum fluctuations of distant matter structure the local vacuum-fluctuation frame of reference. Thus, in the example of the train the punch was delivered by the wall of vacuum fluctuations acting as a proxy for the fixed stars through which one attempted to accelerate.

The implication for space travel is this: Given the evidence generated in the field of cavity QED (discussed above), there is experimental evidence that vacuum fluctuations can be altered by technological means. This leads to the corollary that, in principle, gravitational and inertial masses can also be altered. The possibility of altering mass with a view to easing the energy burden of future spaceships has been seriously considered by the Advanced Concepts Office of the Propulsion Directorate of the Phillips Laboratory at Edwards Air Force Base. Gravity researcher Robert Forward accepted an assignment to review this concept. His deliverable product was to recommend a broad, multipronged effort involving laboratories from around the world to investigate the inertia model experimentally. The Abstract reads in part:

Many researchers see the vacuum as a central ingredient of 21st-Century physics…. Some even believe the vacuum may be harnessed to provide a limitless supply of energy. This report summarizes an attempt to find an experiment that would test the Haisch, Rueda and Puthoff (HRP) conjecture that the mass and inertia of a body are induced effects brought about by changes in the quantum-fluctuation energy of the vacuum…. It was possible to find an experiment that might be able to prove or disprove that the inertial mass of a body can be altered by making changes in the vacuum surrounding the body.

With regard to action items, Forward in fact recommends a ranked list of not one but four experiments to be carried out to address the ZPE-inertia concept and its broad implications. The recommendations included investigation of the proposed “sub-cosmic-ray energy device” mentioned earlier, and the investigation of a hypothesized “inertia-wind” effect proposed by our laboratory and possibly detected in early experimental work, though the latter possibility is highly speculative at this point.

Engineering the Vacuum For “Warp Drive”

Perhaps one of the most speculative, but nonetheless scientifically-grounded, proposals of all is the so-called Alcubierre Warp Drive. Taking on the challenge of determining whether Warp Drive à la Star Trek was a scientific possibility, general relativity theorist Miguel Alcubierre of the University of Wales set himself the task of determining whether faster-than-light travel was possible within the constraints of standard theory. Although such clearly could not be the case in the flat space of special relativity, general relativity permits consideration of altered spacetime metrics where such a possibility is not a priori ruled out. Alcubierre’s further self-imposed constraints on an acceptable solution included the requirements that no net time distortion should occur (breakfast on Earth, lunch on Alpha Centauri, and home for dinner with your wife and children, not your great-great-great grandchildren), and that the occupants of the spaceship were not to be flattened against the bulkhead by unconscionable accelerations.

A solution meeting all of the above requirements was found and published by Alcubierre in Classical and Quantum Gravity in 1994. The solution discovered by Alcubierre involved the creation of a local distortion of spacetime such that spacetime is expanded behind the spaceship, contracted ahead of it, and yields a hypersurfer-like motion faster than the speed of light as seen by observers outside the disturbed region. In essence, on the outgoing leg of its journey the spaceship is pushed away from Earth and pulled towards its distant destination by the engineered local expansion of spacetime itself. For followup on the broader aspects of “metric engineering” concepts, one can refer to a paper published by myself in Physics Essays (Puthoff, 1996). Interestingly enough, the engineering requirements rely on the generation of macroscopic, negative-energy-density, Casimir-like states in the quantum vacuum of the type discussed earlier. Unfortunately, meeting such requirements is beyond technological reach without some unforeseen breakthrough.
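For readers who want the explicit form, Alcubierre’s published solution has a compact statement. This is the standard textbook form of the metric (in units with c = 1), quoted here for reference rather than taken from this article:

```latex
ds^2 = -dt^2 + \left[\,dx - v_s(t)\, f(r_s)\, dt\,\right]^2 + dy^2 + dz^2,
```

where \(x_s(t)\) is the trajectory of the bubble’s center, \(v_s(t) = dx_s/dt\) its speed, \(r_s = \sqrt{(x - x_s(t))^2 + y^2 + z^2}\) the distance from that center, and \(f(r_s)\) a smooth “top-hat” function equal to 1 inside the warp bubble and falling to 0 outside. Inside the bubble, where \(f = 1\), spacetime is flat and the ship feels no acceleration; the expansion behind and contraction ahead are concentrated in the bubble wall where \(f\) varies.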

Related, of course, is the knowledge that general relativity permits the possibility of wormholes, topological tunnels which in principle could connect distant parts of the universe, a cosmic subway so to speak. Publishing in the American Journal of Physics, theorists Morris and Thorne initially outlined in some detail the requirements for traversable wormholes and found that, in principle, the possibility exists provided one has access to Casimir-like, negative-energy-density quantum vacuum states. This has led to a rich literature, summarized recently in a book by Matt Visser of Washington University. Again, the technological requirements appear out of reach for the foreseeable future, perhaps awaiting new techniques for cohering the ZPE vacuum fluctuations in order to meet the energy-density requirements.

Where does this leave us? As we peer into the heavens from the depth of our gravity well, hoping for some “magic” solution that will launch our spacefarers first to the planets and then to the stars, we are reminded of Arthur C. Clarke’s phrase that highly-advanced technology is essentially indistinguishable from magic. Fortunately, such magic appears to be waiting in the wings of our deepening understanding of the quantum vacuum in which we live.

[Source: Can the Vacuum Be Engineered for Space Flight Applications? Overview of Theory and Experiments, by H. E. Puthoff (PDF)]

New Propulsion System for Robotic Lander Prototype

NASA’s Robotic Lunar Lander Development Project at Marshall Space Flight Center has completed a series of hot-fire tests and taken delivery of a new propulsion system for integration into a more sophisticated free-flying autonomous robotic lander prototype. The project is partnered with the Johns Hopkins University Applied Physics Laboratory to develop a new generation of small, smart, versatile robotic landers to achieve scientific and exploration goals on the surface of the moon and near-Earth asteroids. The new robotic lander prototype will continue to mature robotic lander capability by bringing online an autonomous flying test lander capable of flying for up to sixty seconds, testing the guidance, navigation and control system by demonstrating a controlled landing in a simulated low-gravity environment.

By the spring of 2011, the new prototype lander will begin flight tests at the U.S. Army’s Redstone Arsenal Test Center. The prototype’s new propulsion system consists of 12 small attitude control thrusters, three primary descent thrusters to control the vehicle’s altitude, and one large “gravity-canceling” thruster which offsets a portion of the prototype’s weight to simulate a lower-gravity environment, like that of the moon and asteroids. The prototype uses a green propellant, hydrogen peroxide, in a stronger concentration than the solution commonly used in homes as a disinfectant. The by-products after use are water and oxygen.
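The "water and oxygen" claim follows directly from the decomposition reaction 2 H₂O₂ → 2 H₂O + O₂. As a quick sanity check (not from the NASA release, and assuming pure peroxide, whereas the prototype actually uses a concentrated aqueous solution), the mass split of the exhaust can be worked out from the molar masses:

```python
# Mass balance for hydrogen peroxide decomposition: 2 H2O2 -> 2 H2O + O2
M_H = 1.008   # molar mass of hydrogen, g/mol
M_O = 15.999  # molar mass of oxygen, g/mol

m_h2o2 = 2 * M_H + 2 * M_O  # ~34.01 g/mol
m_h2o  = 2 * M_H + M_O      # ~18.02 g/mol
m_o2   = 2 * M_O            # ~32.00 g/mol

# Per 2 mol of peroxide in, we get 2 mol of water and 1 mol of oxygen out.
total_in    = 2 * m_h2o2
water_frac  = (2 * m_h2o) / total_in
oxygen_frac = m_o2 / total_in

print(f"1 kg of pure H2O2 yields ~{water_frac*1000:.0f} g of water "
      f"and ~{oxygen_frac*1000:.0f} g of oxygen")
```

Roughly 53% of the propellant mass leaves as water and 47% as oxygen, which is why peroxide monopropellants are considered environmentally benign.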

The propulsion hardware acceptance test consisted of a series of tests that verified the performance of each thruster in the propulsion system. The series culminated in a test that characterized the entire system by running a scripted set of thruster firings based on a flight scenario simulation.
The propulsion system is currently at Teledyne Brown’s manufacturing facility in Huntsville for integration with the structure and avionics to complete the new robotic lander prototype. Dynetics Corp. developed the robotic lander prototype propulsion system under the management of the Von Braun Center for Science and Innovation, both also located in Huntsville.

This is the second phase of a robotic lander prototype development program. Our initial “cold gas” prototype was built, delivered and successfully flight tested at the Marshall Center in a record nine months, providing a physical and tangible demonstration of capabilities related to the critical terminal descent and landing phases for an airless body mission.

The first robotic lander prototype has a record flight time of ten seconds, descending from an altitude of three meters. This first robotic lander prototype began flight tests in September 2009 and has completed 142 flight tests, providing a platform to develop and test algorithms, sensors, avionics, ground and flight software, and ground systems to support autonomous landings on airless bodies, where aero-braking and parachutes are not an option.

[Source: NASA]