Superluminal Speed in Gases..!!

Scientists have apparently broken the universe’s speed limit. For generations, physicists believed there is nothing faster than light moving through a vacuum – a speed of 186,000 miles per second. But in an experiment in Princeton, N.J., physicists sent a pulse of laser light through cesium vapor so quickly that it left the chamber before it had even finished entering. The pulse traveled 310 times the distance it would have covered if the chamber had contained a vacuum.

This seems to contradict not only common sense, but also a bedrock principle of Albert Einstein’s theory of relativity, which sets the speed of light in a vacuum, about 186,000 miles per second, as the fastest that anything can go. But the findings–the long-awaited first clear evidence of faster-than-light motion–are “not at odds with Einstein,” said Lijun Wang, who with colleagues at the NEC Research Institute in Princeton, N.J., report their results in today’s issue of the journal Nature.

“However,” Wang said, “our experiment does show that the generally held misconception that ‘nothing can move faster than the speed of light’ is wrong.” Nothing with mass can exceed the light-speed limit. But physicists now believe that a pulse of light–which is a group of massless individual waves–can.

To demonstrate that, the researchers created a carefully doctored vapor of laser-irradiated atoms that twist, squeeze and ultimately boost the speed of light waves in such abnormal ways that a pulse shoots through the vapor in about 1/300th the time it would take the pulse to go the same distance in a vacuum.

As a general rule, light travels more slowly in any medium more dense than a vacuum (which, by definition, has no density at all). For example, in water, light travels at about three-fourths its vacuum speed; in glass, it’s around two-thirds. The ratio between the speed of light in a vacuum and its speed in a material is called the refractive index. The index can be changed slightly by altering the chemical or physical structure of the medium. Ordinary glass has a refractive index around 1.5. But by adding a bit of lead, it rises to 1.6. The slower speed, and greater bending, of light waves accounts for the more sprightly sparkle of lead crystal glass.
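
As a quick illustration of the numbers above (water at roughly three-fourths of c, glass at roughly two-thirds), the definition n = c/v can be turned into a few lines of Python. This is just a sketch of the relationship, not anything from the experiment itself.

```python
# Illustrative sketch (not from the original article): phase velocity of light in a
# medium of refractive index n, v = c / n.  The index values are the approximate
# figures quoted in the paragraph above.
C_VACUUM = 299_792_458  # speed of light in vacuum, m/s

def light_speed_in(n: float) -> float:
    """Return the speed of light (m/s) in a medium of refractive index n."""
    return C_VACUUM / n

for name, n in [("vacuum", 1.0), ("water", 1.33), ("ordinary glass", 1.5), ("lead glass", 1.6)]:
    v = light_speed_in(n)
    print(f"{name:15s} n = {n:4.2f}   v ≈ {v:.3e} m/s   ({v / C_VACUUM:.0%} of c)")
```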

The NEC researchers achieved the opposite effect, creating a gaseous medium that, when manipulated with lasers, exhibits a sudden and precipitous drop in refractive index, Wang said, speeding up the passage of a pulse of light. The team used a 2.5-inch-long chamber filled with a vapor of cesium, a metallic element with a goldish color. They then trained several laser beams on the atoms, putting them in a stable but highly unnatural state.

In that condition, a pulse of light or “wave packet” (a cluster made up of many separate interconnected waves of different frequencies) is drastically reconfigured as it passes through the vapor. Some of the component waves are stretched out, others compressed. Yet at the end of the chamber, they recombine and reinforce one another to form exactly the same shape as the original pulse, Wang said. “It’s called re-phasing.”

The key finding is that the reconstituted pulse re-forms before the original intact pulse could have gotten there by simply traveling through empty space. That is, the peak of the pulse is, in effect, extended forward in time. As a result, detectors attached to the beginning and end of the vapor chamber show that the peak of the exiting pulse leaves the chamber about 62 billionths of a second before the peak of the initial pulse finishes going in. That is not the way things usually work. Ordinarily, when sunlight–which, like the pulse in the experiment, is a combination of many different frequencies–passes through a glass prism, the prism disperses the white light’s components.

This happens because each frequency moves at a different speed in glass, smearing out the original light beam. Blue is slowed the most, and thus deflected the farthest; red travels fastest and is bent the least. That phenomenon produces the familiar rainbow spectrum.

But the NEC team’s laser-zapped cesium vapor produces the opposite outcome. It bends red more than blue in a process called “anomalous dispersion,” causing an unusual reshuffling of the relationships among the various component light waves. That’s what causes the accelerated re-formation of the pulse, and hence the speed-up. In theory, the work might eventually lead to dramatic improvements in optical transmission rates. “There’s a lot of excitement in the field now,” said Aephraim Steinberg, a physicist at the University of Toronto. “People didn’t get into this area for the applications, but we all certainly hope that some applications can come out of it. It’s a gamble, and we just wait and see.”
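
As a rough check on the figures quoted above, the 62-nanosecond advance follows from treating the pumped cesium cell as a medium with a group index of about −310 over the 2.5-inch cell length. The sketch below is an order-of-magnitude estimate under those assumptions, not a reproduction of the NEC team's analysis.

```python
# Sketch (assumptions): pulse advance for a cell with a negative group index.
# The cell length (2.5 in ≈ 6 cm) and group index n_g ≈ -310 are the figures quoted in
# press accounts of the NEC experiment; treat this as an order-of-magnitude check.
C = 299_792_458.0            # speed of light in vacuum, m/s
L = 2.5 * 0.0254             # cell length, m
n_group = -310.0             # effective group index of the laser-prepared cesium vapor

t_vacuum = L / C             # transit time if the cell were empty
t_cell = n_group * L / C     # group delay in the cell (negative: the peak exits "early")
advance = t_vacuum - t_cell  # how much earlier the peak leaves than a vacuum-crossing pulse

print(f"vacuum transit time: {t_vacuum * 1e9:6.2f} ns")
print(f"group delay in cell: {t_cell * 1e9:6.2f} ns")
print(f"peak advance:        {advance * 1e9:6.2f} ns")  # ≈ 62 ns, matching the report
```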

[Source: Time Travel Research Centre]

Can the Vacuum Be Engineered for Space Flight Applications? Overview of Theory and Experiments

By H. E. Puthoff

Quantum theory predicts, and experiments verify, that empty space (the vacuum) contains an enormous residual background energy known as zero-point energy (ZPE). Originally thought to be of significance only for such esoteric concerns as small perturbations to atomic emission processes, it is now known to play a role in large-scale phenomena of interest to technologists as well, such as the inhibition of spontaneous emission, the generation of short-range attractive forces (e.g., the Casimir force), and the possibility of accounting for sonoluminescence phenomena. ZPE topics of interest for spaceflight applications range from fundamental issues (where does inertia come from, can it be controlled?), through laboratory attempts to extract useful energy from vacuum fluctuations (can the ZPE be “mined” for practical use?), to scientifically grounded extrapolations concerning “engineering the vacuum” (is “warp-drive” space propulsion a scientific possibility?). Recent advances in research into the physics of the underlying ZPE indicate the possibility of potential application in all these areas of interest.

The concept “engineering the vacuum” was first introduced by Nobel Laureate T. D. Lee in his book Particle Physics and Introduction to Field Theory. As stated in Lee’s book: “The experimental method to alter the properties of the vacuum may be called vacuum engineering…. If indeed we are able to alter the vacuum, then we may encounter some new phenomena, totally unexpected.” Recent experiments have indeed shown this to be the case.

With regard to space propulsion, the question of engineering the vacuum can be put succinctly: “Can empty space itself provide the solution?” Surprisingly enough, there are hints that potential help may in fact emerge quite literally out of the vacuum of so-called “empty space.” Quantum theory tells us that empty space is not truly empty, but rather is the seat of myriad energetic quantum processes that could have profound implications for future space travel. To understand these implications it will serve us to review briefly the historical development of the scientific view of what constitutes empty space.

At the time of the Greek philosophers, Democritus argued that empty space was truly a void, otherwise there would not be room for the motion of atoms. Aristotle, on the other hand, argued equally forcefully that what appeared to be empty space was in fact a plenum (a background filled with substance), for did not heat and light travel from place to place as if carried by some kind of medium? The argument went back and forth through the centuries until finally codified by Maxwell’s theory of the luminiferous ether, a plenum that carried electromagnetic waves, including light, much as water carries waves across its surface. Attempts to measure the properties of this ether, or to measure the Earth’s velocity through the ether (as in the Michelson-Morley experiment), however, met with failure. With the rise of special relativity, which did not require reference to such an underlying substrate, Einstein in 1905 effectively banished the ether in favor of the concept that empty space constitutes a true void. Ten years later, however, Einstein’s own development of the general theory of relativity, with its concept of curved space and distorted geometry, forced him to reverse his stand and opt for a richly-endowed plenum, under the new label spacetime metric.

It was the advent of modern quantum theory, however, that established the quantum vacuum, so-called empty space, as a very active place, with particles arising and disappearing, a virtual plasma, and fields continuously fluctuating about their zero baseline values. The energy associated with such processes is called zero-point energy (ZPE), reflecting the fact that such activity remains even at absolute zero.

The Vacuum As A Potential Energy Source

At its most fundamental level, we now recognize that the quantum vacuum is an enormous reservoir of untapped energy, with energy densities conservatively estimated by Feynman and Hibbs to be on the order of nuclear energy densities or greater. Therefore, the question is, can the ZPE be “mined” for practical use? If so, it would constitute a virtually ubiquitous energy supply, a veritable “Holy Grail” energy source for space propulsion.

As utopian as such a possibility may seem, physicist Robert Forward at Hughes Research Laboratories demonstrated proof-of-principle in a paper, “Extracting Electrical Energy from the Vacuum by Cohesion of Charged Foliated Conductors.” Forward’s approach exploited a phenomenon called the Casimir Effect, an attractive quantum force between closely-spaced metal plates, named for its discoverer, H. B. G. Casimir of Philips Laboratories in the Netherlands. The Casimir force, recently measured with high accuracy by S. K. Lamoreaux at the University of Washington, derives from partial shielding of the interior region of the plates from the background zero-point fluctuations of the vacuum electromagnetic field. As shown by Los Alamos theorists Milonni et al., this shielding results in the plates being pushed together by the unbalanced ZPE radiation pressures. The result is a corollary conversion of vacuum energy to some other form such as heat. Proof that such a process violates neither energy nor thermodynamic constraints can be found in a paper by a colleague and myself (Cole & Puthoff) under the title “Extracting Energy and Heat from the Vacuum.”
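
To get a feel for the magnitudes involved, the standard expression for the Casimir pressure between two ideal parallel conducting plates, P = π²ħc / (240 d⁴), can be evaluated for a few separations; the separations chosen below are illustrative assumptions, not values from Forward's or Lamoreaux's work.

```python
import math

# Sketch: Casimir pressure between ideal parallel conducting plates,
#   P = pi^2 * hbar * c / (240 * d^4).
# The plate separations below are illustrative assumptions.
HBAR = 1.054_571_8e-34   # J*s
C = 299_792_458.0        # m/s

def casimir_pressure(d: float) -> float:
    """Attractive pressure in pascals for plate separation d in metres."""
    return math.pi ** 2 * HBAR * C / (240.0 * d ** 4)

for d in (1e-6, 100e-9, 10e-9):
    print(f"separation {d * 1e9:7.1f} nm -> pressure ≈ {casimir_pressure(d):.2e} Pa")
```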

Attempts to harness the Casimir and related effects for vacuum energy conversion are ongoing in our laboratory and elsewhere. The fact that its potential application to space propulsion has not gone unnoticed by the Air Force can be seen in its request for proposals for the FY-1986 Defense SBIR Program. Under entry AF86-77, Air Force Rocket Propulsion Laboratory (AFRPL), Topic: Non-Conventional Propulsion Concepts, we find the statement: “Bold, new non-conventional propulsion concepts are solicited…. The specific areas in which AFRPL is interested include…. (6) Esoteric energy sources for propulsion including the zero point quantum dynamic energy of vacuum space.”

Several experimental formats for tapping the ZPE for practical use are under investigation in our laboratory. An early one of interest is based on the idea of a Casimir pinch effect in non-neutral plasmas, basically a plasma equivalent of Forward’s electromechanical charged-plate collapse. The underlying physics is described in a paper submitted for publication by myself and a colleague, and it is illustrative that the first of several patents issued to a consultant to our laboratory, K. R. Shoulders (1991), contains the descriptive phrase “…energy is provided… and the ultimate source of this energy appears to be the zero-point radiation of the vacuum continuum.” Another intriguing possibility is provided by the phenomenon of sonoluminescence, bubble collapse in an ultrasonically-driven fluid which is accompanied by intense, sub-nanosecond light radiation. Although the jury is still out as to the mechanism of light generation, Nobelist Julian Schwinger (1993) has argued for a Casimir interpretation. Possibly related experimental evidence for excess heat generation in ultrasonically-driven cavitation in heavy water is claimed in an EPRI Report by E-Quest Sciences, although attributed to a nuclear micro-fusion process. Work is under way in our laboratory to see if this claim can be replicated.

Yet another proposal for ZPE extraction is described in a recent patent (Mead & Nachamkin, 1996). The approach proposes the use of resonant dielectric spheres, slightly detuned from each other, to provide a beat-frequency downshift of the more energetic high-frequency components of the ZPE to a more easily captured form. We are discussing the possibility of a collaborative effort between us to determine whether such an approach is feasible. Finally, an approach utilizing micro-cavity techniques to perturb the ground state stability of atomic hydrogen is under consideration in our lab. It is based on a paper of mine (Puthoff, 1987) in which I put forth the hypothesis that the nonradiative nature of the ground state is due to a dynamic equilibrium in which radiation emitted due to accelerated electron ground state motion is compensated by absorption from the ZPE. If this hypothesis is true, there exists the potential for energy generation by the application of the techniques of so-called cavity quantum electrodynamics (QED). In cavity QED, excited atoms are passed through Casimir-like cavities whose structure suppresses electromagnetic cavity modes at the transition frequency between the atom’s excited and ground states. The result is that the so-called “spontaneous” emission time is lengthened considerably (for example, by factors of ten), simply because spontaneous emission is not so spontaneous after all, but rather is driven by vacuum fluctuations. Eliminate the modes, and you eliminate the zero-point fluctuations of the modes, hence suppressing decay of the excited state. As stated in a review article on cavity QED in Scientific American, “An excited atom that would ordinarily emit a low-frequency photon can not do so, because there are no vacuum fluctuations to stimulate its emission….” In its application to energy generation, mode suppression would be used to perturb the hypothesized dynamic ground state absorption/emission balance to lead to energy release.
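
The cavity effect described here is conventionally quantified by the Purcell factor, F = (3/4π²)(λ/n)³(Q/V), which compares an atom's emission rate inside a resonant cavity with its free-space rate; excluding the relevant mode, as in the experiments quoted above, pushes the rate the other way. The numbers in the sketch below are purely illustrative assumptions, not values from the article.

```python
import math

# Sketch: Purcell factor for spontaneous emission in a resonant cavity,
#   F_P = (3 / (4 * pi^2)) * (lambda / n)^3 * (Q / V).
# F_P > 1 means enhancement on resonance; suppressing (excluding) the mode, as described
# in the text, instead lengthens the emission time.  All numbers below are illustrative.
def purcell_factor(wavelength: float, index: float, q_factor: float, mode_volume: float) -> float:
    return (3.0 / (4.0 * math.pi ** 2)) * (wavelength / index) ** 3 * q_factor / mode_volume

# Example: a 600 nm transition in a cavity with Q = 1e4 and a one-cubic-micron mode volume.
F = purcell_factor(wavelength=600e-9, index=1.0, q_factor=1e4, mode_volume=1e-18)
print(f"Purcell factor ≈ {F:.0f}")
```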

An example in which Nature herself may have taken advantage of energetic vacuum effects is discussed in a model published by ZPE colleagues A. Rueda of California State University at Long Beach, B. Haisch of Lockheed-Martin, and D. Cole of IBM (1995). In a paper published in the Astrophysical Journal, they propose that the vast reaches of outer space constitute an ideal environment for ZPE acceleration of nuclei and thus provide a mechanism for “powering up” cosmic rays. Details of the model would appear to account for other observed phenomena as well, such as the formation of cosmic voids. This raises the possibility of utilizing a “sub-cosmic-ray” approach to accelerate protons in a cryogenically-cooled, collision-free vacuum trap and thus extract energy from the vacuum fluctuations by this mechanism.

The Vacuum as the Source of Gravity and Inertia

What of the fundamental forces of gravity and inertia that we seek to overcome in space travel? We have phenomenological theories that describe their effects (Newton’s Laws and their relativistic generalizations), but what of their origins?

The first hint that these phenomena might themselves be traceable to roots in the underlying fluctuations of the vacuum came in a study published by the well-known Russian physicist Andrei Sakharov. Searching to derive Einstein’s phenomenological equations for general relativity from a more fundamental set of assumptions, Sakharov came to the conclusion that the entire panoply of general relativistic phenomena could be seen as induced effects brought about by changes in the quantum-fluctuation energy of the vacuum due to the presence of matter. In this view the attractive gravitational force is more akin to the induced Casimir force discussed above than to the fundamental inverse-square-law Coulomb force between charged particles with which it is often compared. Although speculative when first introduced by Sakharov, this hypothesis has led to a rich and ongoing literature, including contributions of my own on quantum-fluctuation-induced gravity, a literature that continues to yield deep insight into the role played by vacuum forces.

Given an apparent deep connection between gravity and the zero-point fluctuations of the vacuum, a similar connection must exist between these selfsame vacuum fluctuations and inertia. This is because it is an empirical fact that the gravitational and inertial masses have the same value, even though the underlying phenomena are quite disparate. Why, for example, should a measure of the resistance of a body to being accelerated, even if far from any gravitational field, have the same value that is associated with the gravitational attraction between bodies? Indeed, if one is determined by vacuum fluctuations, so must the other. To get to the heart of inertia, consider a specific example in which you are standing on a train in the station. As the train leaves the platform with a jolt, you could be thrown to the floor. What is this force that knocks you down, seemingly coming out of nowhere? This phenomenon, which we conveniently label inertia and go on about our physics, is a subtle feature of the universe that has perplexed generations of physicists from Newton to Einstein. Since in this example the sudden disquieting imbalance results from acceleration “relative to the fixed stars,” in its most provocative form one could say that it was the “stars” that delivered the punch. This key feature was emphasized by the Austrian philosopher of science Ernst Mach, and is now known as Mach’s Principle. Nonetheless, the mechanism by which the stars might do this deed has eluded convincing explication.

Addressing this issue in a paper entitled “Inertia as a Zero-Point Field Lorentz Force,” my colleagues and I (Haisch, Rueda & Puthoff, 1994) were successful in tracing the problem of inertia and its connection to Mach’s Principle to the ZPE properties of the vacuum. In a sentence, although a uniformly moving body does not experience a drag force from the (Lorentz-invariant) vacuum fluctuations, an accelerated body meets a resistance (force) proportional to the acceleration. By accelerated we mean, of course, accelerated relative to the fixed stars. It turns out that an argument can be made that the quantum fluctuations of distant matter structure the local vacuum-fluctuation frame of reference. Thus, in the example of the train the punch was delivered by the wall of vacuum fluctuations acting as a proxy for the fixed stars through which one attempted to accelerate.

The implication for space travel is this: the work in cavity QED discussed above provides experimental evidence that vacuum fluctuations can be altered by technological means. This leads to the corollary that, in principle, gravitational and inertial masses can also be altered. The possibility of altering mass with a view to easing the energy burden of future spaceships has been seriously considered by the Advanced Concepts Office of the Propulsion Directorate of the Phillips Laboratory at Edwards Air Force Base. Gravity researcher Robert Forward accepted an assignment to review this concept. His deliverable product was to recommend a broad, multipronged effort involving laboratories from around the world to investigate the inertia model experimentally. The Abstract reads in part:

Many researchers see the vacuum as a central ingredient of 21st-Century physics…. Some even believe the vacuum may be harnessed to provide a limitless supply of energy. This report summarizes an attempt to find an experiment that would test the Haisch, Rueda and Puthoff (HRP) conjecture that the mass and inertia of a body are induced effects brought about by changes in the quantum-fluctuation energy of the vacuum…. It was possible to find an experiment that might be able to prove or disprove that the inertial mass of a body can be altered by making changes in the vacuum surrounding the body.

With regard to action items, Forward in fact recommends a ranked list of not one but four experiments to be carried out to address the ZPE-inertia concept and its broad implications. The recommendations included investigation of the proposed “sub-cosmic-ray energy device” mentioned earlier, and the investigation of a hypothesized “inertia-wind” effect proposed by our laboratory and possibly detected in early experimental work, though the latter possibility is highly speculative at this point.

Engineering the Vacuum For “Warp Drive”

Perhaps one of the most speculative, but nonetheless scientifically-grounded, proposals of all is the so-called Alcubierre Warp Drive. Taking on the challenge of determining whether warp drive à la Star Trek was a scientific possibility, general relativity theorist Miguel Alcubierre of the University of Wales set himself the task of determining whether faster-than-light travel was possible within the constraints of standard theory. Although such clearly could not be the case in the flat spacetime of special relativity, general relativity permits consideration of altered spacetime metrics where such a possibility is not a priori ruled out. Alcubierre’s further self-imposed constraints on an acceptable solution included the requirements that no net time distortion should occur (breakfast on Earth, lunch on Alpha Centauri, and home for dinner with your wife and children, not your great-great-great grandchildren), and that the occupants of the spaceship were not to be flattened against the bulkhead by unconscionable accelerations.

A solution meeting all of the above requirements was found and published by Alcubierre in Classical and Quantum Gravity in 1994. The solution discovered by Alcubierre involved the creation of a local distortion of spacetime such that spacetime is expanded behind the spaceship, contracted ahead of it, and yields a hypersurfer-like motion faster than the speed of light as seen by observers outside the disturbed region. In essence, on the outgoing leg of its journey the spaceship is pushed away from Earth and pulled towards its distant destination by the engineered local expansion of spacetime itself. For follow-up on the broader aspects of “metric engineering” concepts, one can refer to a paper published by myself in Physics Essays (Puthoff, 1996). Interestingly enough, the engineering requirements rely on the generation of macroscopic, negative-energy-density, Casimir-like states in the quantum vacuum of the type discussed earlier. Unfortunately, meeting such requirements is beyond technological reach without some unforeseen breakthrough.
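
For reference, the line element Alcubierre published takes the form usually quoted from the 1994 paper, where v_s(t) is the bubble's coordinate velocity, r_s the distance from the bubble centre, and f(r_s) a smooth "top-hat" function equal to 1 inside the bubble and falling to 0 far outside:

$$ ds^{2} = -c^{2}\,dt^{2} + \bigl[\,dx - v_{s}(t)\,f(r_{s})\,dt\,\bigr]^{2} + dy^{2} + dz^{2} $$

Computing the stress-energy tensor needed to source this geometry is what yields the negative-energy-density requirement, which is why the Casimir-like states mentioned above enter the discussion.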

Related, of course, is the knowledge that general relativity permits the possibility of wormholes, topological tunnels which in principle could connect distant parts of the universe, a cosmic subway so to speak. Publishing in the American Journal of Physics, theorists Morris and Thorne initially outlined in some detail the requirements for traversable wormholes and found that, in principle, the possibility exists provided one has access to Casimir-like, negative-energy-density quantum vacuum states. This has led to a rich literature, summarized recently in a book by Matt Visser of Washington University. Again, the technological requirements appear out of reach for the foreseeable future, perhaps awaiting new techniques for cohering the ZPE vacuum fluctuations in order to meet the energy-density requirements.

Where does this leave us? As we peer into the heavens from the depth of our gravity well, hoping for some “magic” solution that will launch our spacefarers first to the planets and then to the stars, we are reminded of Arthur C. Clarke’s phrase that highly-advanced technology is essentially indistinguishable from magic. Fortunately, such magic appears to be waiting in the wings of our deepening understanding of the quantum vacuum in which we live.

[Ad: Can the Vacuum Be Engineered for Space Flight Applications? Overview of Theory and Experiments, by H. E. Puthoff (PDF)]

Negative Energy: From Theory to Lab

Spacetime distortion is a commonly proposed method for superluminal travel. Such space-time contortions would enable another staple of science fiction as well: faster-than-light travel. Warp drive might appear to violate Einstein’s special theory of relativity. But special relativity says that you cannot outrun a light signal in a fair race in which you and the signal follow the same route. When space-time is warped, it might be possible to beat a light signal by taking a different route, a shortcut. The contraction of space-time in front of the bubble and the expansion behind it create such a shortcut.
One problem with Alcubierre’s original model is that the interior of the warp bubble is causally disconnected from its forward edge. A starship captain on the inside cannot steer the bubble or turn it on or off; some external agency must set it up ahead of time. To get around this problem, Krasnikov proposed a “superluminal subway,” a tube of modified space-time (not the same as a wormhole) connecting Earth and a distant star. Within the tube, superluminal travel in one direction is possible. During the outbound journey at sublight speed, a spaceship crew would create such a tube. On the return journey, they could travel through it at warp speed. Like warp bubbles, the subway involves negative energy.

Almost every faster-than-light travel scheme requires negative energy at very large densities. Today I came across a paper by E. W. Davis in which he describes various experimental conditions under which negative energy can be generated in the lab.

Examples of Exotic or “Negative” Energy

The exotic (energy condition-violating) mass-energy fields that are known to occur in nature are:

  • Radial electric or magnetic fields. These are borderline exotic; they would be exotic if their tension were infinitesimally larger, for a given energy density.
  • Squeezed quantum states of the electromagnetic field and other squeezed quantum fields;
  • Gravitationally squeezed vacuum electromagnetic zero-point energy.
  • Other quantum fields/states/effects. In general, the local energy density in quantum field theory can be negative due to quantum coherence effects. Other examples that have been studied are Dirac field states: the superposition of two single-particle electron states and the superposition of two multi-electron-positron states. In the former (latter), the energy densities can be negative when two single (multi-) particle states have the same number of electrons (electrons and positrons) or when one state has one more electron (electron-positron pair) than the other.

Because the laws of quantum field theory place no strong restrictions on negative energies and fluxes, it might be possible to produce gross macroscopic effects such as warp drive, traversable wormholes, violation of the second law of thermodynamics, and time machines. The above examples are representative forms of mass-energy that possess negative energy density or are borderline exotic.

Generating Negative Energy in Lab

Davis and Puthoff described various experiments to generate and detect negative energy in the lab. Some of them are as follows:

1. Squeezed Quantum States: Substantial theoretical and experimental work has shown that in many quantum systems the limits to measurement precision imposed by the quantum vacuum zero-point fluctuations (ZPF) can be breached by decreasing the noise in one observable (or measurable quantity) at the expense of increasing the noise in the conjugate observable; at the same time the variations in the first observable, say the energy, are reduced below the ZPF such that the energy becomes “negative.” “Squeezing” is thus the control of quantum fluctuations and corresponding uncertainties, whereby one can squeeze the variance of one (physically important) observable quantity provided the variance in the (physically unimportant) conjugate variable is stretched/increased. The squeezed quantity possesses an unusually low variance, meaning less variance than would be expected on the basis of the equipartition theorem. One can in principle exploit quantum squeezing to extract energy from one place in the ordinary vacuum at the expense of accumulating excess energy elsewhere.
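
The trade-off described in this item can be made concrete with the textbook single-mode squeezing relations: with the vacuum quadrature variances normalized to 1/4, a squeezing parameter r takes one variance to e^(−2r)/4 and the conjugate one to e^(+2r)/4, so the uncertainty product stays at its minimum. The sketch below is an illustration of that principle, not the authors' calculation.

```python
import math

# Sketch: quadrature variances of a single-mode squeezed vacuum state.
# With Var(X1) = Var(X2) = 1/4 for the ordinary vacuum, a squeezing parameter r gives
#   Var(X1) = exp(-2r)/4 and Var(X2) = exp(+2r)/4,
# so one variance drops below the vacuum level while the conjugate one is stretched,
# and the product stays at the minimum-uncertainty value 1/16.
def squeezed_variances(r: float):
    return math.exp(-2.0 * r) / 4.0, math.exp(2.0 * r) / 4.0

for r in (0.0, 0.5, 1.0, 2.0):
    v1, v2 = squeezed_variances(r)
    print(f"r = {r:3.1f}   Var(X1) = {v1:.4f}   Var(X2) = {v2:.4f}   product = {v1 * v2:.4f}")
```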

2. Gravitationally Squeezed Electromagnetic ZPF: A natural source of negative energy comes from the effect that gravitational fields (of astronomical bodies) in space have upon the surrounding vacuum. For example, the gravitational field of the Earth produces a zone of negative energy around it by dragging some of the virtual particle pairs (a.k.a. vacuum ZPF) downward. One can utilize the negative vacuum energy densities, which arise from distortion of the electromagnetic zero point fluctuations due to the interaction with a prescribed gravitational background, for providing a violation of the energy conditions. The squeezed quantum states of quantum optics provide a natural form of matter having negative energy density. The analysis, via quantum optics, shows that gravitation itself provides the mechanism for generating the squeezed vacuum states needed to support stable traversable wormholes. The production of negative energy densities via a squeezed vacuum is a necessary and unavoidable consequence of the interaction or coupling between ordinary matter and gravity, and this defines what is meant by gravitationally squeezed vacuum states.

The general result of the gravitational squeezing effect is that as the gravitational field strength increases, the negative energy zone (surrounding the mass) also increases in strength. A table in the source paper shows when gravitational squeezing becomes important for example masses (a short sketch reproducing these thresholds follows below). In the case of the Earth, Jupiter and the Sun, the squeeze effect is extremely feeble because only ZPF mode wavelengths above 0.2 m – 78 km are affected. For a solar-mass black hole (radius of 2.95 km), the effect is still feeble because only ZPF mode wavelengths above 78 km are affected. But note that Planck-mass objects will have an enormously strong negative energy zone surrounding them because all ZPF mode wavelengths above 8.50 × 10^-34 meters will be squeezed, in other words, all wavelengths of interest for vacuum fluctuations. Protons will have the strongest negative energy zone in comparison because the squeezing effect includes all ZPF mode wavelengths above 6.50 × 10^-53 meters. Furthermore, a body smaller than a nuclear diameter (≈ 10^-16 m) and containing the mass of a mountain (≈ 10^11 kg) has a fairly strong negative energy zone because all ZPF mode wavelengths above 10^-15 meters will be squeezed.

We are presently unaware of any way to artificially generate gravitational squeezing of the vacuum in the laboratory. This will be left for future investigation. Aliens may help us!!
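
The mass-dependent thresholds quoted above are consistent with a critical wavelength of roughly 8π times the Schwarzschild radius of the mass, λ_c ≈ 16πGM/c². The sketch below is an interpretation that reproduces the quoted numbers to within rounding, not a formula taken verbatim from the source paper.

```python
import math

# Sketch (interpretation, not quoted verbatim from the source paper): the thresholds
# listed above match a critical ZPF wavelength of about 8*pi Schwarzschild radii,
#   lambda_c ≈ 16 * pi * G * M / c^2,
# with modes longer than lambda_c being "gravitationally squeezed".
G = 6.674e-11       # m^3 kg^-1 s^-2
C = 299_792_458.0   # m/s

def critical_wavelength(mass_kg: float) -> float:
    return 16.0 * math.pi * G * mass_kg / C ** 2

examples = {
    "Earth": 5.97e24,
    "Sun": 1.99e30,
    "Planck-mass object": 2.18e-8,
    "proton": 1.67e-27,
    "mountain-mass body": 1e11,
}
for name, mass in examples.items():
    print(f"{name:20s} lambda_c ≈ {critical_wavelength(mass):.2e} m")
```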

3. A Moving Mirror: Negative energy can be created by a single moving reflecting surface (a moving mirror). A mirror moving with increasing acceleration generates a flux of negative energy that emanates from its surface and flows out into the space ahead of the mirror. However, this effect is known to be exceedingly small, and it is not the most effective way to generate negative energy for our purposes.

4. Radial Electric/Magnetic Fields: It is beyond the scope of this article to include all the technical configurations by which one can generate radial electric or magnetic fields. Suffice it to say that ultrahigh-intensity tabletop lasers have been used to generate extreme electric and magnetic field strengths in the lab. Ultrahigh-intensity lasers use the chirped-pulse amplification (CPA) technique to boost the total output beam power. All laser systems simply repackage energy as a coherent package of optical power, but CPA lasers repackage the laser pulse itself during the amplification process. In typical high-power short-pulse laser systems, it is the peak intensity, not the energy or the fluence, which causes pulse distortion or laser damage. However, the CPA laser dissects a laser pulse according to its frequency components, and reorders it into a time-stretched lower-peak-intensity pulse of the same energy. This benign pulse can then be amplified safely to high energy, and then only afterwards reconstituted as a very short pulse of enormous peak power – a pulse which could never itself have passed safely through the laser system (see Figure 2 of the source paper). Made more tractable in this way, the pulse can be amplified to substantial energies (with orders of magnitude greater peak power) without encountering intensity-related problems.

The extreme output beam power, fields and physical conditions that have been achieved by ultrahigh-intensity tabletop lasers are:

  • Power intensity: 10^15 – 10^26 W/cm^2 (10^30 W/cm^2 using SLAC as a booster)
  • Peak-power pulse duration: ≤ 10^3 femtoseconds
  • E-fields: ≈ 10^14 – 10^18 V/m [note: the critical quantum-electrodynamical (vacuum breakdown) field strength is E_c = m_e^2 c^3 / (e ħ) ≈ 10^18 V/m; m_e and e are the electron mass and charge]
  • B-fields: ≈ several 10^6 tesla [note: the critical quantum-electrodynamical (vacuum breakdown) field strength is B_c = E_c / c ≈ 10^10 tesla]
  • Ponderomotive acceleration of electrons: ≈ 10^17 – 10^30 g’s (1 g = 9.81 m/s^2)
  • Light pressure: ≈ 10^9 – 10^15 bars
  • Plasma temperatures: > 10^10 K

Ultrahigh-intensity lasers can generate an electric field energy density of ~10^16 – 10^28 J/m^3 and a magnetic field energy density of ~10^19 J/m^3. These energy densities are about the right order of magnitude to explore generating kilometer- to AU-sized wormholes. But that would be difficult to engineer on Earth. However, these energy densities are well above what would be required to explore the generation of micro-wormholes in the lab.
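
The magnitudes in this item can be sanity-checked with the standard field energy densities, u_E = ε₀E²/2 and u_B = B²/(2μ₀), together with the simple peak-power relation P ≈ (pulse energy)/(pulse duration) that motivates chirped-pulse amplification. The pulse energy and durations below are illustrative assumptions.

```python
import math

# Sketch: order-of-magnitude checks for the figures quoted above.
#   field energy densities: u_E = eps0 * E^2 / 2,   u_B = B^2 / (2 * mu0)
#   CPA motivation:          peak power ≈ pulse energy / pulse duration
# The pulse energy (1 J) and the stretched/compressed durations are illustrative assumptions.
EPS0 = 8.854e-12            # F/m
MU0 = 4.0e-7 * math.pi      # H/m

E_field = 1e18              # V/m, near the top of the quoted E-field range
B_field = 5e6               # T, "several 10^6 tesla"
u_E = 0.5 * EPS0 * E_field ** 2
u_B = B_field ** 2 / (2.0 * MU0)
print(f"electric-field energy density ≈ {u_E:.1e} J/m^3")   # ~4e24, inside 10^16-10^28
print(f"magnetic-field energy density ≈ {u_B:.1e} J/m^3")   # ~1e19

pulse_energy = 1.0                       # joules
for label, tau in (("stretched", 1e-9), ("compressed", 30e-15)):
    print(f"peak power ({label:10s}) ≈ {pulse_energy / tau:.1e} W")
```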

5. Negative Energy from Squeezed Light: Negative energy can be generated by an array of ultrahigh-intensity (femtosecond) lasers using an ultrafast rotating-mirror system. In this scheme a laser beam is passed through an optical cavity resonator made of lithium niobate (LiNbO3) crystal that is shaped like a cylinder with rounded silvered ends to reflect light. The resonator will act to produce a secondary lower-frequency light beam in which the pattern of photons is rearranged into pairs. This is the quantum optical squeezing-of-light effect that we described previously. Therefore, the squeezed light beam emerging from the resonator will contain pulses of negative energy interspersed with pulses of positive energy in accordance with the quantum squeezing model.

In this example both the negative and positive energy pulses are of ≈ 10^-15 second duration. We could arrange a set of rapidly rotating mirrors to separate the positive and negative energy pulses from each other. The light beam is to strike each mirror surface at a very shallow angle, while the rotation ensures that the negative energy pulses are reflected at a slightly different angle from the positive energy pulses. A small spatial separation of the two different energy pulses will occur at some distance from the rotating mirror. Another system of mirrors will be needed to redirect the negative energy pulses to an isolated location and concentrate them there. The rotating mirror system can actually be implemented via non-mechanical means.

[Image Details: Negative energy pulses generated from quantum optical squeezing.]

Another way to squeeze light would be to manufacture extremely reliable light pulses containing precisely one, two, three, etc., photons each and combine them together to create squeezed states to order. In a related scheme, a sodium vapor is placed within the squeezing cavity, and a laser beam is directed through the gas. The beam is reflected back on itself by a mirror to form a standing wave within the sodium chamber. This wave causes rapid variations in the optical properties of the sodium, thus causing rapid variations in the squeezed light, so that we can induce rapid reflections of pulses by careful design.

On another note, when a quantum state is close to a squeezed vacuum state, there will almost always be some negative energy densities present, and the fluctuations in the energy density start to become nearly as large as the expectation value itself.

Observing Negative Energy in Lab:

Negative energy should be observable in lab experiments. Negative energy regions in space are predicted to produce a unique signature corresponding to lensing, chromaticity and intensity effects in micro- and macro-lensing events on galactic and extragalactic/cosmological scales. It has been shown that these effects provide a specific signature that allows for discrimination between ordinary (positive mass-energy) and negative mass-energy lenses via the spectral analysis of astronomical lensing events. Theoretical modeling of negative energy lensing effects has led to intense astronomical searches for naturally occurring traversable wormholes in the universe. Computer model simulations and comparison of their results with recent satellite observations of gamma-ray bursts (GRBs) have shown that putative negative energy (i.e., traversable wormhole) lensing events very closely resemble the main features of some GRBs. Current observational data suggest that large amounts of naturally occurring “exotic mass-energy” must have existed sometime between the epoch of galaxy formation and the present in order to (properly) quantitatively account for the “age-of-the-oldest-stars-in-the-galactic-halo” problem and the cosmological evolution parameters.

When background light rays strike a negative energy lensing region, they are swept out of the central region, thus creating an umbra region of zero intensity. At the edges of the umbra the rays accumulate and create a rainbow-like caustic with enhanced light intensity. The lensing of a negative mass-energy region is not analogous to a diverging lens because in certain circumstances it can produce more light enhancement than does the lensing of an equivalent positive mass-energy region. Real background sources in lensing events can have non-uniform brightness distributions on their surfaces and a dependency of their emission on the observing frequency. These complications can result in chromaticity effects, i.e. in spectral changes induced by differential lensing during the event. The modeling of such effects is quite lengthy, somewhat model dependent, and has so far been applied only to astronomical lensing events. Suffice it to say that future work is necessary to scale down the predicted lensing parameters and characterize their effects for lab experiments in which the negative energy will not be of astronomical magnitude. Present ultrahigh-speed optics and optical cavities, lasers, photonic crystal (and switching) technology, sensitive nano-sensor technology, and other techniques are very likely capable of detecting the very small magnitude lensing effects expected in lab experiments.

Thus, it can be concluded that negative energy can be generated in the lab and need no longer be dismissed as mere ‘fiction’. In a recent work it was suggested that naturally occurring wormholes could be detected, since they attract charged particles, which in turn create a magnetic field. I’ll review that work and present a model for a realistic warp drive.

[REF: Experimental Concepts for Generating Negative Energy in the Laboratory by E. W. Davis and H. E. Puthoff]

Growing Crops on Other Planets

Science fiction lovers aren’t the only ones captivated by the possibility of colonizing another planet. Scientists are engaging in numerous research projects that focus on determining how habitable other planets are for life. Mars, for example, is revealing more and more evidence that it probably once had liquid water on its surface, and could one day become a home away from home for humans. 

“The spur of colonizing new lands is intrinsic in man,” said Giacomo Certini, a researcher at the Department of Plant, Soil and Environmental Science (DiPSA) at the University of Florence, Italy. “Hence expanding our horizon to other worlds must not be judged strange at all. Moving people and producing food there could be necessary in the future.” 

Humans traveling to Mars, to visit or to colonize, will likely have to make use of resources on the planet rather than take everything they need with them on a spaceship. This means farming their own food on a planet that has a very different ecosystem than Earth’s. Certini and his colleague Riccardo Scalenghe from the University of Palermo, Italy, recently published a study in Planetary and Space Science that makes some encouraging claims. They say the surfaces of Venus, Mars and the Moon appear suitable for agriculture. 

Defining Soil 

The surface of Venus, generated here using data from NASA’s Magellan mission, undergoes resurfacing through weathering processes such as volcanic activity, meteorite impacts and wind erosion. Credit: NASA

Before deciding how planetary soils could be used, the two scientists had to first explore whether the surfaces of the planetary bodies can be defined as true soil. 

“Apart from any philosophical consideration about this matter, definitely assessing that the surface of other planets is soil implies that it ‘behaves’ as a soil,” said Certini. “The knowledge we accumulated during more than a century of soil science on Earth is available to better investigate the history and the potential of the skin of our planetary neighbors.” 

One of the first obstacles in examining planetary surfaces and their usefulness in space exploration is to develop a definition of soil, which has been a topic of much debate. 

“The lack of a unique definition of ‘soil,’ universally accepted, exhaustive, and (one) that clearly states what is the boundary between soil and non-soil makes it difficult to decide what variables must be taken into account for determining if extraterrestrial surfaces are actually soils,” Certini said. 

At the proceedings of the 19th World Congress of Soil Sciences held in Brisbane, Australia, in August, Donald Johnson and Diana Johnson suggested a “universal definition of soil.” They defined soil as “substrate at or near the surface of Earth and similar bodies altered by biological, chemical, and/or physical agents and processes.” 

The surface of the Moon is covered by regolith over a layer of solid rock. Credit: NASA

On Earth, five factors work together in the formation of soil: the parent rock, climate, topography, time and biota (or the organisms in a region such as its flora and fauna). It is this last factor that is still a subject of debate among scientists. A common, summarized definition for soil is a medium that enables plant growth. However, that definition implies that soil can only exist in the presence of biota. Certini argues that soil is material that holds information about its environmental history, and that the presence of life is not a necessity. 

“Most scientists think that biota is necessary to produce soil,” Certini said. “Other scientists, me included, stress the fact that important parts of our own planet, such as the Dry Valleys of Antarctica or the Atacama Desert of Chile, have virtually life-free soils. They demonstrate that soil formation does not require biota.” 

The researchers of this study contend that classifying a material as soil depends primarily on weathering. According to them, a soil is any weathered veneer of a planetary surface that retains information about its climatic and geochemical history. 

On Venus, Mars and the Moon, weathering occurs in different ways. Venus has a dense atmosphere at a pressure 91 times that found at sea level on Earth, composed mainly of carbon dioxide and sulphuric acid droplets, with small amounts of water and oxygen. The researchers predict that weathering on Venus could be caused by thermal processes or corrosion carried out by the atmosphere, volcanic eruptions, impacts of large meteorites and wind erosion.

Using the method of aeroponics, space travelers will be able to grow their own food without soil and using very little water. Credit: NASA

Mars is currently dominated by physical weathering caused by meteorite impacts and thermal variations rather than chemical processes. According to Certini, there is no active volcanism that affects the martian surface but the temperature difference between the two hemispheres causes strong winds. Certini also said that the reddish hue of the planet’s landscape, which is a result of rusting iron minerals, is indicative of chemical weathering in the past. 

On the Moon, a layer of solid rock is covered by a layer of loose debris. The weathering processes seen on the Moon include changes created by meteorite impacts, deposition and chemical interactions caused by solar wind, which interacts with the surface directly. 

Some scientists, however, feel that weathering alone isn’t enough and that the presence of life is an intrinsic part of any soil. 

“The living component of soil is part of its unalienable nature, as is its ability to sustain plant life due to a combination of two major components: soil organic matter and plant nutrients,” said Ellen Graber, researcher at the Institute of Soil, Water and Environmental Sciences at The Volcani Center of Israel’s Agricultural Research Organization. 

One of the primary uses of soil on another planet would be to use it for agriculture—to grow food and sustain any populations that may one day live on that planet. Some scientists, however, are questioning whether soil is really a necessary condition for space farming. 

Soilless Farming – Not Science Fiction 

With the Earth’s increasing population and limited resources, scientists are searching for habitable environments on places such as Mars, Venus and the Moon as potential sites for future human colonies. Credit: NASA

Growing plants without any soil may conjure up images from a Star Trek movie, but it’s hardly science fiction. Aeroponics, as one soilless cultivation process is called, grows plants in an air or mist environment with no soil and very little water. Scientists have been experimenting with the method since the early 1940s, and aeroponics systems have been in use on a commercial basis since 1983. 

“Who says that soil is a precondition for agriculture?” asked Graber. “There are two major preconditions for agriculture, the first being water and the second being plant nutrients. Modern agriculture makes extensive use of ‘soilless growing media,’ which can include many varied solid substrates.” 

In 1997, NASA teamed up with AgriHouse and BioServe Space Technologies to design an experiment to test a soilless plant-growth system on board the Mir Space Station. NASA was particularly interested in this technology because of its low water requirement. Using this method to grow plants in space would reduce the amount of water that needs to be carried during a flight, which in turn decreases the payload. Aeroponically-grown crops also can be a source of oxygen and drinking water for space crews. 

“I would suspect that if and when humankind reaches the stage of settling another planet or the Moon, the techniques for establishing soilless culture there will be well advanced,” Graber predicted. 

Soil: A Key to the Past and the Future 

The Mars Phoenix mission dug into the soil of Mars to see what might be hidden just beneath the surface. Credit: NASA/JPL-Caltech/University of Arizona/Texas A&M University

The surface and soil of a planetary body hold important clues about its habitability, both in its past and in its future. For example, examining soil features has helped scientists show that early Mars was probably wetter and warmer than it is currently.

“Studying soils on our celestial neighbors means to individuate the sequence of environmental conditions that imposed the present characteristics to soils, thus helping reconstruct the general history of those bodies,” Certini said. 

In 2008, NASA’s Phoenix Mars Lander performed the first wet chemistry experiment using martian soil. Scientists who analyzed the data said the Red Planet appears to have environments more appropriate for sustaining life than was expected, environments that could one day allow human visitors to grow crops. 

“This is more evidence for water because salts are there,” said Phoenix co-investigator Sam Kounaves of Tufts University in a press release issued after the experiment. “We also found a reasonable number of nutrients, or chemicals needed by life as we know it.” 

Researchers found traces of magnesium, sodium, potassium and chloride, and the data also revealed that the soil was alkaline, a finding that challenged a popular belief that the martian surface was acidic. 

This type of information, obtained through soil analyses, becomes important in looking toward the future to determine which planet would be the best candidate for sustaining human colonies.

[Credit: Astrobiology Magazine]

Promising Applications of Carbon Nanotubes (CNTs)

Scientific research on carbon nanotubes has witnessed a large expansion. The fact that CNTs can be produced by relatively simple synthesis processes, together with their record-breaking properties, has led to demonstrations of many different types of applications, ranging from fast field-effect transistors, flat screens, transparent electrodes and electrodes for rechargeable batteries to conducting polymer composites, bulletproof textiles and transparent loudspeakers.

As a result we have seen enormous progress in controlling the growth of CNTs. CNTs with controlled diameter can be grown along a given direction, parallel or perpendicular to the surface. In recent years we have seen narrow-diameter CNTs with two or more walls grown at high yield.

Behind this progress one might forget the remaining tantalizing challenges. Samples of CNTs still contain a large amount of disordered forms of carbon, and catalytic metal particles or fragments of the growth support still make up a large fraction of the carbon nanotube mass. As-produced CNTs continue to show a relatively wide dispersion of diameters and lengths. Dispersing CNTs and controlling their distribution in a matrix or on a given surface is still a challenge. There has been enormous progress in size-selecting CNTs; however, the commonly applied techniques limit applications due to the presence of surfactant molecules or cannot be applied to larger volumes.

Analytical techniques play an important role when developing new synthesis, purification and separation processes. Screening of carbon nanotubes is essential for any real-world application, but it is also essential for their fundamental understanding, such as understanding the effect of tube bundling, doping and the role of defects.

At the ‘Centre for materials elaboration and structural studies’ (CEMES-CNRS), Professor Wolfgang Bacsa and Pascal Puech have focused on screening CNTs with optical methods and on developing physical processes for carbon nanotubes, working closely with materials chemists at different local institutions. We have concentrated our attention on double-wall carbon nanotubes grown by the catalytic chemical vapour deposition technique.

Their small diameter, high electrical conductivity and large length, as well as the fact that the inner wall is protected from the environment by the outer wall, are all good attributes for incorporating them in polymer composites. Depending on the synthesis process used, we find the two walls are at times strongly or weakly coupled.

By studying their Raman spectra at high pressure, in acids, under strong photoexcitation or on individual tubes, we can observe the effect on the internal and external walls. A good knowledge of the Raman spectra of double-wall CNTs gives us the opportunity to map the Raman signal of ultra-thin slices of composites and determine the distribution, agglomeration state and interaction with the matrix.


TEM images (V. Tishkova, CEMES-CNRS) of double-wall (a) and industrial multiwall carbon nanotubes (b), and Raman G band of double-wall CNTs at high pressure in different pressure media, revealing molecular nanoscale pressure effects.

Working on individual CNTs in collaboration with Anna Swan of Boston University, gave us the opportunity to work on precisely positioned individual suspended CNTs. An individual CNT is an ideal point source and this can be used to map out the focal spot and to learn about the fundamental limitations of high resolution grating spectrometers.

The field of carbon nanotube research has grown enormously during the last decade, making it difficult to follow all the new results in this field. It is quite clear that for applications where macroscopic amounts of CNTs are needed, standardisation of measurement protocols and classification of CNT samples, combined with new processing techniques to deal with large CNT volumes, will be needed. Applications where only minute quantities on a surface are used suffer from the fact that parallel processing is still limited. This shows that further progress in growing CNTs on surfaces is still needed, although there has been a recent breakthrough in growing CNTs in a parallel fashion and with preferentially semiconducting or metallic tubes.

[SOURCE: azonano]

Searching For Alien Life [Part I]: Designing an Organic Explorer

We probably already have the technology to find evidence of extraterrestrial life and even to send out evidence of our own. Yet after a hard think-tank exercise, contacting aliens can start to look like mere wishful thinking. But is that really the case? SETI detractors are currently inclined to view alien hunting as dogs trying to make contact by barking. Imagine a species of dog trying to contact another species of dog. How would they do it? By barking or howling, right? Would we notice that as a signal of that type? Would we care? How much smarter, given some theoretical maximal potential, are we than dogs? The “Wow!” signal, which was odd and the only one of its type, detected in 1977, was ignored and regarded as not credible since it was never repeated. Was that a signal from aliens at a level of development comparable to ours?

In this series of articles (Searching For Alien Life), I’ll delve into the chasm of possibilities for intelligent extraterrestrial beings out there and propose some groundbreaking, mind-boggling technologies for searching for alien life which are, nevertheless, not so far-fetched. I may return to some old propositions of mine with new, exotic supporting arguments for those tactics.

Current research seeks to understand how complexity arises from simplicity. Much progress has been made in the past few decades, but a good appreciation for some of the most important chemical steps that led to life still eludes us. That’s because life itself is extraordinarily complex, much more so than galaxies, stars, or planets. Consider for a moment the simplest known protein on the Earth. This is insulin, which has 51 amino acids linked in a specific order along a chain. Probability theory can be used to estimate the chances of assembling the correct number and order of amino acids for such a protein molecule. Since there are 20 different types of amino acids, the answer is 1/20^51, which equals ~1/10^66. This means that the 20 amino acids must be randomly assembled 10^66, or a million trillion trillion trillion trillion trillion, times before getting insulin. This is obviously a great many combinations, so many in fact that we could randomly assemble the 20 amino acids trillions of times per second for the entire history of the Universe and still not achieve the correct ordering of this protein. Larger proteins and nucleic acids would be even less probable if chemical evolution operates at random. And to assemble a human being would be vastly less probable, if it happened by chance starting only with atoms or simple molecules.
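
The arithmetic behind that estimate is easy to reproduce: 20^51 possible orderings of 51 residues drawn from 20 amino acid types, compared with the number of random trials available even at an absurdly generous rate. The trial rate and the age of the Universe used below are round-number assumptions chosen only to show the scale.

```python
# Sketch: reproducing the insulin probability estimate above.
# 51 positions, 20 amino acid types -> 20**51 possible orderings (~2e66, i.e. of order 10^66).
# The trial rate and the age of the Universe are round-number assumptions for illustration.
orderings = 20 ** 51
print(f"possible orderings: {orderings:.2e}")

trials_per_second = 1e12                 # "trillions of times per second"
age_of_universe_s = 13.8e9 * 3.156e7     # ~4.4e17 seconds
trials = trials_per_second * age_of_universe_s
print(f"random trials available: {trials:.2e}")
print(f"fraction of orderings explored: {trials / orderings:.1e}")
```
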
This is the type of reasoning used by some researchers to argue that we must be alone, or nearly so, in the Universe. They suggest that biology of any kind is a highly unlikely phenomenon. They argue that meaningful molecular complexity can be expected at only a very, very few locations in the Universe, and that Earth is one of these special places. In their view, the fraction of habitable planets on which life arises is extremely small, and intelligent beings are almost improbable. If all their arguments are correct, then logically we should be alone. Of all the myriad galaxies, stars, planets, and other wonderful aspects of the Universe, this viewpoint maintains that we are among very few creatures to appreciate the grandeur of it all.

Simulations that resemble conditions on primordial Earth are now routinely performed with a variety of energies and initial reactants (provided there’s no free oxygen). These experiments demonstrate that unique (or even rare) conditions are unnecessary to produce the precursors of life. Complex acids, bases, and proteinoid compounds are formed under a rather wide variety of physical conditions. And it doesn’t take long for these reasonably complex molecules to form, not nearly as long as probability theory predicts by randomly assembling atoms. Furthermore, every time this type of experiment is done, the results are much the same. The oily organic matter trapped in the test tube always yields the same proportion of acids, bases and proteinoids. If chemical evolution were entirely random, we might expect a different result each time the experiment is run. Apparently, electromagnetic forces do govern the complex interactions of the many atoms and molecules in the soupy sea, substituting organization for randomness. Of course, precursors of proteins and nucleic acids are a long way from life itself. But the beginnings of life as we know it seem to be the product of less-than-random interactions between atoms and molecules. This point of view matters when weighing the possibility of radically different organisms in a typical alien environment.

Alien Hunt
The methodologies SETI currently employs to search for extraterrestrial life are abysmally narrow. I’ve already described why contact through radio signals is probably doomed to fail. It would be better to send out a probe loaded with organic explorers.

Superconductor of the Future

Futuristic ideas for the use of superconductors, materials that allow electric current to flow without resistance, are myriad: long-distance, low-voltage electric grids with no transmission loss; fast, magnetically levitated trains; ultra-high-speed supercomputers; superefficient motors and generators; inexhaustible fusion energy – and many others, some in the experimental or demonstration stages.

But superconductors, especially superconducting electromagnets, have been around for a long time. Indeed the first large-scale application of superconductivity was in particle-physics accelerators, where strong magnetic fields steer beams of charged particles toward high-energy collision points.

Accelerators created the superconductor industry, and superconducting magnets have become the natural choice for any application where strong magnetic fields are needed – for magnetic resonance imaging (MRI) in hospitals, for example, or for magnetic separation of minerals in industry. Other scientific uses are numerous, from nuclear magnetic resonance to ion sources for cyclotrons.

[Image Details: A close-up view of a superconducting magnet coil designed at Berkeley Lab, which may be used in a new kind of high-field dipole magnet for a future energy upgrade of CERN’s Large Hadron Collider. (Photo Roy Kaltschmidt) ]

Some of the strongest and most complex superconducting magnets are still built for particle accelerators like CERN’s Large Hadron Collider (LHC). The LHC uses over 1,200 dipole magnets, whose two adjacent coils of superconducting cable create magnetic fields that bend proton beams traveling in opposite directions around a tunnel 27 kilometers in circumference; the LHC also has almost 400 quadrupole magnets, whose coils create a field with four magnetic poles to focus the proton beams within the vacuum chamber and guide them into the experiments.

These LHC magnets use cables made of superconducting niobium titanium (NbTi), and for five years during its construction the LHC contracted for more than 28 percent of the world’s niobium titanium wire production, with significant quantities of NbTi also used in the magnets for the LHC’s giant experiments.

What’s more, although the LHC is still working to reach the energy for which it was designed, the program to improve its future performance is already well underway.

Designing the future

Enabling the accelerators of the future depends on developing magnets with much greater field strengths than are now possible. To do that, we’ll have to use different materials.

Field strength is limited by the amount of current a magnet coil can carry, which in turn depends on physical properties of the superconducting material such as its critical temperature and critical field. Most superconducting magnets built to date are based on NbTi, which is a ductile alloy; the LHC dipoles are designed to operate at magnetic fields of about eight tesla, or 8 T. (Earth’s puny magnetic field is measured in mere millionths of a tesla.)
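
As a rough cross-check of the ~8 T figure (an illustration of mine, not from the article), the standard bending relation B[T] ≈ p[GeV/c] / (0.2998 · r[m]) reproduces the LHC dipole field if we assume the design proton momentum of 7 TeV/c and an effective bending radius of roughly 2.8 km, since the dipoles fill only part of the 27-kilometer ring.

```python
# Rough cross-check of the ~8 T LHC dipole field (assumed inputs, see text).
beam_momentum_GeV = 7000.0    # design proton momentum, GeV/c
bending_radius_m = 2800.0     # assumed effective bending radius, m

# For a relativistic charged particle: B[T] = p[GeV/c] / (0.2998 * r[m])
field_T = beam_momentum_GeV / (0.2998 * bending_radius_m)
print(f"required dipole field ≈ {field_T:.1f} T")   # ≈ 8.3 T
```
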

The LHC Accelerator Research Program (LARP) is a collaboration among DOE laboratories that’s an important part of U.S. participation in the LHC. Sabbi heads both the Magnet Systems component of LARP and Berkeley Lab’s Superconducting Magnet Program. These programs are currently developing accelerator magnets built with niobium tin (Nb3Sn), a brittle material requiring special fabrication processes but able to generate about twice the field of niobium titanium. Yet the goal for magnets of the future is already set much higher.

Among the most promising new materials for future magnets are some of the high-temperature superconductors, but unfortunately they’re very difficult to work with. One of the most promising of all is the high-temperature superconductor Bi-2212 (bismuth strontium calcium copper oxide).

[Image Details: In the process called “wind and react,” Bi-2212 wire – shown in cross section, upper right, with the powdered superconductor in a matrix of silver – is woven into flat cables, the cables are wrapped into coils, and the coils are gradually heated in a special oven (bottom).]

“High temperature” is a relative term. It commonly refers to materials that become superconducting above the boiling point of liquid nitrogen, a toasty 77 kelvin (-321 degrees Fahrenheit). But in high-field magnets even high-temperature superconductors will be used at low temperatures. Bi-2212 shows why: although it becomes superconducting at 95 K, its ability to carry high currents and thus generate a high magnetic field increases as the temperature is lowered, typically down to 4.2 K, the boiling point of liquid helium at atmospheric pressure.
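
For readers who want the conversions behind those numbers, here is a small helper (my own sketch, not part of the source article) relating the temperatures quoted above.

```python
def kelvin_to_fahrenheit(k: float) -> float:
    """Convert a temperature from kelvin to degrees Fahrenheit."""
    return (k - 273.15) * 9.0 / 5.0 + 32.0

for label, kelvin in [("liquid nitrogen boiling point", 77.0),
                      ("Bi-2212 critical temperature", 95.0),
                      ("liquid helium boiling point", 4.2)]:
    print(f"{label}: {kelvin} K = {kelvin_to_fahrenheit(kelvin):.0f} °F")
# 77 K -> -321 °F, 95 K -> -289 °F, 4.2 K -> -452 °F
```
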

In experimental situations Bi-2212 has generated fields of 25 T and could go much higher. But like many high-temperature superconductors Bi-2212 is not a metal alloy but a ceramic, virtually as brittle as a china plate.

As part of the Very High Field Superconducting Magnet Collaboration, which brings together several national laboratories, universities, and industry partners, Berkeley Lab’s program to develop new superconducting materials for high-field magnets recently gained support from the American Recovery and Reinvestment Act (ARRA).

Under the direction of Daniel Dietderich and Arno Godeke, AFRD’s Superconducting Magnet Program is investigating Bi-2212 and other candidate materials. One of the things that makes Bi-2212 promising is that it is now available in the form of round wires.

“The wires are essentially tubes filled with tiny particles of ground-up Bi-2212 in a silver matrix,” Godeke explains. “While the individual particles are superconducting, the wires aren’t – and can’t be, until they’ve been heat treated so the individual particles melt and grow new textured crystals upon cooling – thus welding all of the material together in the right orientation.”

Orientation is important because Bi-2212 has a layered crystalline structure in which current flows only through two-dimensional planes of copper and oxygen atoms. Out of the plane, current can’t penetrate the intervening layers of other atoms, so the copper-oxygen planes must line up if current is to move without resistance from one Bi-2212 particle to the next.

In a coil fabrication process called “wind and react,” the wires are first assembled into flat cables and the cables are wound into coils. The entire coil is then heated to 888 degrees Celsius (about 1,630 degrees Fahrenheit) in a pure oxygen environment. During the “partial melt” stage of the reaction, the temperature of the coil has to be controlled to within a single degree. It’s held at 888 C for one hour and then slowly cooled.
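
The hold-and-cool schedule can be sketched as a simple temperature profile. The 888 C set point and the one-hour hold come from the description above; the ramp and cooling rates below are purely illustrative assumptions.

```python
def reaction_profile(t_hours: float,
                     ramp_up=150.0,     # assumed heating rate, °C per hour
                     hold_temp=888.0,   # partial-melt temperature from the text
                     hold_hours=1.0,    # hold duration from the text
                     cool_rate=50.0):   # assumed slow-cool rate, °C per hour
    """Return the target coil temperature (°C) at time t_hours."""
    t_ramp = hold_temp / ramp_up
    if t_hours < t_ramp:
        return ramp_up * t_hours
    if t_hours < t_ramp + hold_hours:
        return hold_temp                # must stay within 1 °C of 888 °C here
    return max(hold_temp - cool_rate * (t_hours - t_ramp - hold_hours), 20.0)

for t in (3.0, 6.0, 6.5, 8.0, 12.0):
    print(f"t = {t:4.1f} h  ->  {reaction_profile(t):6.1f} °C")
```
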

Silver is the only practical matrix material that allows the wires to “breathe” oxygen during the reaction and align their Bi-2212 grains. Unfortunately 888 C is near the melting point of silver, and during the process the silver may become too soft to resist high stress, which will come from the high magnetic fields themselves: the tremendous forces they generate will do their best to blow the coils apart. So far, attempts to process coils have often resulted in damage to the wires, with resultant Bi-2212 current leakage, local hot spots, and other problems. Dietderich says:

The goal of the program to develop Bi-2212 for high-field magnets is to improve the entire suite of wire, cable, coil-making, and magnet construction technologies. Magnet technologies are getting close, but the wires are still a challenge. For example, we need to improve current density by a factor of three or four.

Once the processing steps have been optimized, the results will have to be tested under the most extreme conditions. Instead of trying to predict coil performance from testing a few strands of wire and extrapolating the results, we need to test the whole cable at operating field strengths. To do this we employ subscale technology: what we can learn from testing a one-third scale structure is reliable at full scale as well.

Testing the results

[Image Details: The LD1 test magnet design in cross section. The 100 by 150 millimeter rectangular aperture, center, is enclosed by the coils, then by iron pressure pads, and then by the iron yoke segments. The outer diameter of the magnet is 1.36 meters.]

Enter the second part of ARRA’s support for future magnets, directed at the Large Dipole Testing Facility. 

“The key element is a test magnet with a large bore, 100 millimeters high by 150 millimeters wide – enough to insert pieces of cable and even miniature coils, so that we can test wires and components without having to build an entire magnet every time,” says AFRD’s Paolo Ferracin, who heads the design of the Large Dipole test magnet.

Called LD1, the test magnet will be based on niobium-tin technology and will exert a field of up to 15 T across the height of the aperture. Inside the aperture, two cable samples will be arranged back to back, running current in opposite directions to minimize the forces generated by interaction between the sample and the external field applied by LD1.

The magnet itself will be about two meters long, mounted vertically in a cryostat underground. LD1’s coils will be cooled to 4.5 K, but a separate cryostat in the bore will allow samples to be tested at temperatures of 10 to 20 K.

“There are two aspects to the design of LD1,” says Ferracin. “The magnetic design deals with how to put the conductors around the aperture to get the field you want. Then you need a support structure to deal with the tremendous forces you create, which is a matter of mechanical design.” LD1 will generate horizontal forces equivalent to the weight of 10 fully loaded 747s; imagine hanging them all from a two-meter beam and requiring that the beam not move more than a tenth of a millimeter.
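
The “ten 747s” comparison is easy to sanity-check. The figures below (one 747 at roughly 400 tonnes maximum takeoff mass) are my own assumed inputs, not numbers from the article.

```python
g = 9.81                 # gravitational acceleration, m/s^2
mass_747_kg = 4.0e5      # assumed max takeoff mass of one 747, ~400 tonnes
n_planes = 10

force_N = n_planes * mass_747_kg * g
print(f"equivalent horizontal force ≈ {force_N:.1e} N")   # ≈ 3.9e7 N

# Stated tolerance: the two-meter structure may deflect no more than ~0.1 mm.
deflection_limit_mm = 0.1
print(f"allowed deflection ≈ {deflection_limit_mm} mm over 2 m")
```
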

[Image Details: At top, two superconducting coils enclose a beam pipe. Field strength is indicated by color, with greatest strength in deep red. To test components of such an arrangement, subscale coils (bottom) will be assessed, starting with only half a dozen cable winds generating a modest two or three tesla, increasing to hybrid assemblies capable of generating up to 10 T.]

“Since one of the most important aspects of cables and model coils is their behavior under stress, we need to add mechanical pressure up to 200 megapascals” – 30,000 pounds per square inch. “We have developed clamping structures that can provide the required force, but devising a mechanism that can apply the pressure during a test will be another major challenge.”

The cable samples and miniature coils will incorporate built-in voltage taps, strain gauges, and thermocouples so their behavior can be checked under a range of conditions, including quenches – sudden losses of superconductivity and the resultant rapid heating, as dense electric currents are dumped into conventional conductors like aluminum or copper. The design of the LD1 is based on Berkeley Lab’s prior success building high-field dipole magnets, which hold the world’s record for high-energy physics uses. The new test facility will allow testing the advanced designs for conductors and magnets needed for future accelerators like the High-Energy LHC and the proposed Muon Collider.

These magnets are being developed to make the highest-energy colliders possible. But as we have seen in the past, the new technology will benefit many other fields as well, from undulators for next-generation light sources to more compact medical devices. ARRA’s support for LD1 is an investment in the nation’s science and energy future.

[Source: Berkeley Lab]

Hyperluminal Travel Without Exotic Matter

Listen, intelligent terrestrial species: WeirdSciences is going to delve into a new idea for making interstellar travel feasible, and this time no negative energy is needed to propel the spacecraft. Using negative energy to build a warp drive is not such a bad idea in itself, but you need to refresh your view of the problem.

By Eric Baird

Alcubierre’s 1994 paper on hyperfast travel has generated fresh interest in the subject of warp drives but work on the subject of hyper-fast travel is often hampered by confusion over definitions — how do we define times and speeds over extended regions of spacetime where the geometry is not Euclidean? Faced with this problem it may seem natural to define a spaceship’s travel times according to round-trip observations made from the traveller’s point of origin, but this “round-trip” approach normally requires signals to travel in two opposing spatial directions through the same metric, and only gives us an unambiguous reading of apparent shortened journey-times if the signal speeds are enhanced in both directions along the signal path, a condition that seems to require a negative energy density in the region. Since hyper-fast travel only requires that the speed of light-signals be enhanced in the actual direction of travel, we argue that the precondition of bidirectionality (inherited from special relativity, and the apparent source of the negative energy requirement), is unnecessary, and perhaps misleading.

When considering warp-drive problems, it is useful to remind ourselves of what it is that we are trying to accomplish. To achieve hyper-fast package delivery between two physical markers, A (the point of origin)  and B (the destination), we require that a package moved from A to B:

a) . . . leaves A at an agreed time according to clocks at A,
b) . . . arrives at B as early as possible according to clocks at B, and, ideally,
c) . . . measures its own journey time to be as short as possible.

From a purely practical standpoint as “Superluminal Couriers Inc.”, we do not care how long the arrival event takes to be seen back at A, nor do we care whether the clocks at A and B appear to be properly synchronised during the delivery process. Our only task is to take a payload from A at a specified “A time” and deliver it to B at the earliest possible “B time”, preferably without the package ageing excessively en route. If we can collect the necessary local time-stamps on our delivery docket at the various stages of the journey, we have achieved our objective and can expect payment from our customer.

Existing approaches tend to add a fourth condition:

d) . . . that the arrival-event at B is seen to occur as soon as possible by an observer back at A.

This last condition is much more difficult to meet, but is arguably more important to our ability to define distant time-intervals than to the actual physical delivery process itself. It does not dictate which events may be intersected by the worldline of the travelling object, but can affect the coordinate labels that we choose to assign to those events using special relativity.

  • Who Asked Your Opinion?

If we introduce an appropriate anisotropy in the speed of light to the region occupied by our delivery path, a package can travel to its destination along the path faster than “nominal background lightspeed” without exceeding the local speed of light along the path. This allows us to meet conditions (a) and (b), but the same anisotropy causes an increased transit time for signals returning from B to A, so the “fast” outward journey can appear to take longer when viewed from A.

This behaviour can be illustrated by the extreme example of a package being delivered to the interior of a black hole from a “normal” region of spacetime. When an object falls through a gravitational event horizon, current theory allows its supposed inward velocity to exceed the nominal speed of light in the external environment, and to actually tend towards v_inwards = ∞ as the object approaches a black hole’s central singularity. But the exterior observer, A, could argue that the delivery is not only proceeding more slowly than usual, but that the apparent delivery time is actually infinite, since the package is never actually seen (by A) to pass through the horizon.

Should A’s low perception of the speed of the infalling object indicate that hyperfast travel has not been achieved? In the author’s opinion, it should not — if the package has successfully been collected from A and delivered to B with the appropriate local timestamps indicating hyperfast travel, then A’s subsequent opinion on how long the delivery is seen to take (an observation affected by the properties of light in a direction other than that of the travelling package) would seem to be of secondary importance. In our “black hole” example, exotic matter or negative energy densities are not required unless we demand that an external observer should be able to see the package proceeding superluminally, in which case a negative gravitational effect would allow signals to pass back outwards through the r=2M surface to the observer at the despatch-point (without this return path, special relativity will tend to define the time of the unseen delivery event as being more-than-infinitely far into A’s future).

Negative energy density is required here only for appearances’ sake (and to make it easier for us to define the range of possible arrival-events that would imply that hyperfast travel has occurred), not for physical package delivery.

  • Hyperfast Return Journeys and Verification

It is all very well to be able to reach our destination in a small local time period, and to claim that we have travelled there at hyperfast speeds, but how do we convince others that our own short transit-time measurements are not simply due to time-dilation effects or to an “incorrect” concept of simultaneity? To convince observers at our destination, we only have to ask that they study their records for the observed behaviour of our point of origin — if the warpfield is applied to the entire journey-path (“Krasnikov tube configuration”), then the introduction and removal of the field will be associated with an increase and decrease in the rate at which signals from A arrive at B along the path (and will force special relativity to redefine the supposed simultaneity of events at B and A). If the warpfield only applies in the vicinity of the travelling package, other odd effects will be seen when the leading edge of the warpfield reaches the observer at B (the logical problems associated with the conflicting “lightspeeds” at the leading edge of a travelling warpfield wavefront have been highlighted by Low, and will be discussed in a further paper). Our initial definitions of the distances involved should of course be based on measurements taken outside the region of spacetime occupied by the warpfield.

A more convincing way of demonstrating hyper-fast travel would be to send a package from A to B and back again in a shorter period of “A-time” than would normally be required for a round-trip light-beam. We must be careful here not to let our initial mathematical definitions get in the way of our task — although we have supposed that the speed of light towards B was slower while our warpdrive was operating on the outward journey, this artificially-reduced return speed does not have to also apply during our subsequent return trip, since we have the option of simply switching the warpdrive off, or better still, reversing its polarity for the journey home.

Although a single path allowing enhanced signal speeds in both directions at the same time would seem to require a negative energy-density, this feature is not necessary for a hyper-fast round trip — the outward and return paths can be separated in time (with the region having different gravitational properties during the outward and return trips) or in space (with different routes being taken for the outward and return journeys).

  • Caveats and Qualifications

Special relativity is designed around the assumption of Euclidean space and the stipulation that lightspeed is isotropic, and neither of these assumptions is reliable for regions of spacetime that contain gravitational fields.

If we have a genuine lightspeed anisotropy that allows an object to move hyper-quickly between A and B, special relativity can respond by using the round-trip characteristics of light along the transit path to redefine the simultaneity of events at both locations, so that the “early” arrival event at B is redefined far enough into A’s future to guarantee a description in which the object is moving at less than cBACKGROUND.
This retrospective redefinition of times easily leads to definitional inconsistencies in warpdrive problems — if a package is sent from A to B and back to A again, and each journey is “ultrafast” thanks to a convenient gravitational gradient for each trip, one could invoke special relativity to declare that each individual trip has a speed less than cBACKGROUND, and then take the ultrafast arrival time of the package back at A as evidence that some form of reverse time travel has occurred, when in fact the apparent negative time component is an artifact of our repeated redefinition of the simultaneity of worldlines at A and B. Since it has been known for some time that similar definitional breakdowns in distant simultaneity can occur when an observer simply alters speed (the “one gee times one lightyear” limit quoted in MTW), these breakdowns should not be taken too seriously when they reappear in more complex “warpdrive” problems.
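
As an aside, the “one gee times one lightyear” figure is straightforward to check numerically: for a constant proper acceleration g, the associated horizon distance c²/g comes out to roughly one light-year when g is one Earth gravity. The snippet below is my own check, not part of the original argument.

```python
c = 2.998e8              # speed of light, m/s
g = 9.81                 # one gee, m/s^2
light_year_m = 9.461e15  # metres in one light-year

horizon_m = c ** 2 / g
print(f"c^2/g ≈ {horizon_m:.2e} m ≈ {horizon_m / light_year_m:.2f} light-years")
# ≈ 9.16e15 m ≈ 0.97 light-years
```
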

Olum’s suggested method for defining simultaneity and hyperfast travel (calibration via signals sent through neighbouring regions of effectively-flat spacetime) is not easily applied to our earlier black hole example, because of the lack of a reference-path that bypasses the gravitational gradient (unless we take a reference-path previous to the formation of the black hole), but warpdrive scenarios tend instead to involve higher-order gravitational effects (e.g. gradients caused by so-called “non-Newtonian” forces), and in these situations the concept of “relative height” in a gravitational field is often route-dependent (the concept “downhill” becomes a “local” rather than a “global” property, and gravitational rankings become intransitive). For this class of problem, Olum’s approach would seem to be the preferred method.

  • What’s the conclusion?

In order to be able to cross interstellar distances at enhanced speeds, we only require that the speed of light is greater in the direction in which we want to travel, in the region that we are travelling through, at the particular time that we are travelling through it. Although negative energy-densities would seem to be needed to increase the speed of light in both directions along the same path at the same time, this additional condition is only required for any hyperfast travel to be “obvious” to an observer at the origin point, which is a stronger condition than merely requiring that packages be delivered arbitrarily quickly. Hyperfast return journeys would also seem to be legal (along a pair of spatially separated or time-separated paths), as long as the associated energy-requirement is “paid for” somehow. Breakdowns in transitive logic and in the definitions used by special relativity already occur with some existing “legal” gravitational situations, and their reappearance in warpdrive problems is not in itself proof that these problems are paradoxical.

Arguments against negative energy-densities do not rule out paths that allow gravity-assisted travel at speeds greater than cBACKGROUND, provided that we are careful not to apply the conventions of special relativity inappropriately. Such paths do occur readily under general relativity, although it has to be admitted that some of the more extreme examples have a tendency to lead to unpleasant regions (such as the interiors of black holes) that one would not normally want to visit.

[Ref: Miguel Alcubierre, “The warp drive: hyper-fast travel within general relativity,” Class. Quantum Grav. 11, L73–L77 (1994); Michael Szpir, “Spacetime hypersurfing,” American Scientist 82, 422–423 (Sept/Oct 1994); Robert L. Forward, “Guidelines to Antigravity,” American Journal of Physics 31 (3), 166–170 (1963).]

Negative Energy And Interstellar Travel

Can a region of space contain less than nothing? Common sense would say no; the most one could do is remove all matter and radiation and be left with vacuum. But quantum physics has a proven ability to confound intuition, and this case is no exception. A region of space, it turns out, can contain less than nothing. Its energy per unit volume–the energy density–can be less than zero.

Needless to say, the implications are bizarre. According to Einstein’s theory of gravity, general relativity, the presence of matter and energy warps the geometric fabric of space and time. What we perceive as gravity is the space-time distortion produced by normal, positive energy or mass. But when negative energy or mass–so-called exotic matter–bends space-time, all sorts of amazing phenomena might become possible: traversable wormholes, which could act as tunnels to otherwise distant parts of the universe; warp drive, which would allow for faster-than-light travel; and time machines, which might permit journeys into the past. Negative energy could even be used to make perpetual-motion machines or to destroy black holes. A Star Trek episode could not ask for more.

For physicists, these ramifications set off alarm bells. The potential paradoxes of backward time travel–such as killing your grandfather before your father is conceived–have long been explored in science fiction, and the other consequences of exotic matter are also problematic. They raise a question of fundamental importance: Do the laws of physics that permit negative energy place any limits on its behavior?

We and others have discovered that nature imposes stringent constraints on the magnitude and duration of negative energy, which (unfortunately, some would say) appear to render the construction of wormholes and warp drives very unlikely.

Double Negative

Before proceeding further, we should draw the reader’s attention to what negative energy is not.

It should not be confused with antimatter, which has positive energy. When an electron and its antiparticle, a positron, collide, they annihilate. The end products are gamma rays, which carry positive energy. If antiparticles were composed of negative energy, such an interaction would result in a final energy of zero.

One should also not confuse negative energy with the energy associated with the cosmological constant, postulated in inflationary models of the universe. Such a constant represents negative pressure but positive energy.

The concept of negative energy is not pure fantasy; some of its effects have even been produced in the laboratory. They arise from Heisenberg’s uncertainty principle, which requires that the energy density of any electric, magnetic or other field fluctuate randomly. Even when the energy density is zero on average, as in a vacuum, it fluctuates. Thus, the quantum vacuum can never remain empty in the classical sense of the term; it is a roiling sea of “virtual” particles spontaneously popping in and out of existence [see “Exploiting Zero-Point Energy,” by Philip Yam; SCIENTIFIC AMERICAN, December 1997]. In quantum theory, the usual notion of zero energy corresponds to the vacuum with all these fluctuations.

So if one can somehow contrive to dampen the undulations, the vacuum will have less energy than it normally does–that is, less than zero energy.[See, Casimir Starcraft: Zero Point Energy]

  • Negative Energy

Space-time distortion is a common method proposed for hyperluminal travel. Such space-time contortions would enable another staple of science fiction as well: faster-than-light travel. Warp drive might appear to violate Einstein’s special theory of relativity. But special relativity says that you cannot outrun a light signal in a fair race in which you and the signal follow the same route. When space-time is warped, it might be possible to beat a light signal by taking a different route, a shortcut. The contraction of space-time in front of the warp bubble and the expansion behind it create such a shortcut.

One problem with Alcubierre’s original model is that the interior of the warp bubble is causally disconnected from its forward edge. A starship captain on the inside cannot steer the bubble or turn it on or off; some external agency must set it up ahead of time. To get around this problem, Krasnikov proposed a “superluminal subway,” a tube of modified space-time (not the same as a wormhole) connecting Earth and a distant star. Within the tube, superluminal travel in one direction is possible. During the outbound journey at sublight speed, a spaceship crew would create such a tube. On the return journey, they could travel through it at warp speed. Like warp bubbles, the subway involves negative energy.

Negative energy is so strange that one might think it must violate some law of physics.

Before and after the creation of equal amounts of negative and positive energy in previously empty space, the total energy is zero, so the law of conservation of energy is obeyed. But there are many phenomena that conserve energy yet never occur in the real world. A broken glass does not reassemble itself, and heat does not spontaneously flow from a colder to a hotter body. Such effects are forbidden by the second law of thermodynamics.

This general principle states that the degree of disorder of a system–its entropy–cannot decrease on its own without an input of energy. Thus, a refrigerator, which pumps heat from its cold interior to the warmer outside room, requires an external power source. Similarly, the second law also forbids the complete conversion of heat into work.

Negative energy potentially conflicts with the second law. Imagine an exotic laser, which creates a steady outgoing beam of negative energy. Conservation of energy requires that a byproduct be a steady stream of positive energy. One could direct the negative energy beam off to some distant corner of the universe, while employing the positive energy to perform useful work. This seemingly inexhaustible energy supply could be used to make a perpetual-motion machine and thereby violate the second law. If the beam were directed at a glass of water, it could cool the water while using the extracted positive energy to power a small motor–providing a refrigerator with no need for external power. These problems arise not from the existence of negative energy per se but from the unrestricted separation of negative and positive energy.

Unfettered negative energy would also have profound consequences for black holes. When a black hole forms by the collapse of a dying star, general relativity predicts the formation of a singularity, a region where the gravitational field becomes infinitely strong. At this point, general relativity–and indeed all known laws of physics–are unable to say what happens next. This inability is a profound failure of the current mathematical description of nature. So long as the singularity is hidden within an event horizon, however, the damage is limited. The description of nature everywhere outside of the horizon is unaffected. For this reason, Roger Penrose of Oxford proposed the cosmic censorship hypothesis: there can be no naked singularities, which are unshielded by event horizons.

For special types of charged or rotating black holes– known as extreme black holes–even a small increase in charge or spin, or a decrease in mass, could in principle destroy the horizon and convert the hole into a naked singularity. Attempts to charge up or spin up these black holes using ordinary matter seem to fail for a variety of reasons. One might instead envision producing a decrease in mass by shining a beam of negative energy down the hole, without altering its charge or spin, thus subverting cosmic censorship. One might create such a beam, for example, using a moving mirror. In principle, it would require only a tiny amount of negative energy to produce a dramatic change in the state of an extreme black hole.

[Image Details: Pulses of negative energy are permitted by quantum theory but only under three conditions. First, the longer the pulse lasts, the weaker it must be (a, b). Second, a pulse of positive energy must follow. The magnitude of the positive pulse must exceed that of the initial negative one. Third, the longer the time interval between the two pulses, the larger the positive one must be – an effect known as quantum interest (c).]

Therefore, this might be the scenario in which negative energy is the most likely to produce macroscopic effects.

Fortunately (or not, depending on your point of view), although quantum theory allows the existence of negative energy, it also appears to place strong restrictions – known as quantum inequalities – on its magnitude and duration. The inequalities bear some resemblance to the uncertainty principle. They say that a beam of negative energy cannot be arbitrarily intense for an arbitrarily long time. The permissible magnitude of the negative energy is inversely related to its temporal or spatial extent. An intense pulse of negative energy can last for a short time; a weak pulse can last longer. Furthermore, an initial negative energy pulse must be followed by a larger pulse of positive energy. The larger the magnitude of the negative energy, the nearer must be its positive energy counterpart. These restrictions are independent of the details of how the negative energy is produced. One can think of negative energy as an energy loan. Just as a debt is negative money that has to be repaid, negative energy is an energy deficit.
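
To give a feel for the scaling, one commonly quoted Ford-Roman-type bound for a massless scalar field in flat spacetime is rho_min ≈ -3ħc / (32π²(cτ)⁴), where τ is the sampling (pulse) timescale. The snippet below illustrates that published bound; it is not a formula taken from this article. Note that doubling the duration weakens the allowed negative energy density by a factor of 16.

```python
import math

hbar = 1.055e-34   # reduced Planck constant, J·s
c = 2.998e8        # speed of light, m/s

def rho_min(tau_seconds: float) -> float:
    """Most negative sampled energy density (J/m^3) allowed over timescale tau."""
    return -3.0 * hbar * c / (32.0 * math.pi**2 * (c * tau_seconds) ** 4)

for tau in (1e-15, 2e-15, 1e-9):
    print(f"tau = {tau:.0e} s  ->  rho_min ≈ {rho_min(tau):.2e} J/m^3")
```
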

In the Casimir effect, the negative energy density between the plates can persist indefinitely, but large negative energy densities require a very small plate separation. The magnitude of the negative energy density is inversely proportional to the fourth power of the plate separation. Just as a pulse with a very negative energy density is limited in time, very negative Casimir energy density must be confined between closely spaced plates. According to the quantum inequalities, the energy density in the gap can be made more negative than the Casimir value, but only temporarily. In effect, the more one tries to depress the energy density below the Casimir value, the shorter the time over which this situation can be maintained.
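
The inverse-fourth-power behaviour mentioned above follows from the textbook result for ideal parallel conducting plates, rho = -π²ħc / (720 d⁴); this standard formula is added here for illustration and is not quoted in the article itself.

```python
import math

hbar = 1.055e-34   # reduced Planck constant, J·s
c = 2.998e8        # speed of light, m/s

def casimir_energy_density(d_meters: float) -> float:
    """Negative energy density (J/m^3) between ideal plates separated by d."""
    return -math.pi**2 * hbar * c / (720.0 * d_meters ** 4)

for d in (1e-6, 1e-7, 1e-8):   # 1 micron down to 10 nanometers
    print(f"d = {d:.0e} m  ->  rho ≈ {casimir_energy_density(d):.2e} J/m^3")
# Halving the separation makes the energy density 16 times more negative.
```
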

When applied to wormholes and warp drives, the quantum inequalities typically imply that such structures must either be limited to submicroscopic sizes, or if they are macroscopic the negative energy must be confined to incredibly thin bands. In 1996 we showed that a submicroscopic wormhole would have a throat radius of no more than about 10^-32 meter. This is only slightly larger than the Planck length, 10^-35 meter, the smallest distance that has definite meaning. We found that it is possible to have models of wormholes of macroscopic size but only at the price of confining the negative energy to an extremely thin band around the throat. For example, in one model a throat radius of 1 meter requires the negative energy to be a band no thicker than 10^-21 meter, a millionth the size of a proton.

It is estimated that the negative energy required for this size of wormhole has a magnitude equivalent to the total energy generated by 10 billion stars in one year. The situation does not improve much for larger wormholes. For the same model, the maximum allowed thickness of the negative energy band is proportional to the cube root of the throat radius. Even if the throat radius is increased to a size of one light-year, the negative energy must still be confined to a region smaller than a proton radius, and the total amount required increases linearly with the throat size.
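
The cube-root scaling quoted above can be checked directly. The sketch below normalises the relation so that a 1-meter throat needs a band about 10^-21 m thick, as stated, and then asks what a one-light-year throat would require; the proton radius used for comparison is an assumed round figure.

```python
light_year_m = 9.461e15     # metres in one light-year
proton_radius_m = 8.4e-16   # assumed proton charge radius, m

def band_thickness(throat_radius_m: float) -> float:
    """Maximum negative-energy band thickness (m), normalised to 1e-21 m at 1 m."""
    return 1e-21 * throat_radius_m ** (1.0 / 3.0)

for r in (1.0, light_year_m):
    t = band_thickness(r)
    print(f"throat = {r:.2e} m -> band ≈ {t:.1e} m ({t / proton_radius_m:.2f} proton radii)")
# Even a one-light-year throat confines the band to about a quarter of a proton radius.
```
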

It seems that wormhole engineers face daunting problems. They must find a mechanism for confining large amounts of negative energy to extremely thin volumes. So-called cosmic strings, hypothesized in some cosmological theories, involve very large energy densities in long, narrow lines. But all known physically reasonable cosmic-string models have positive energy densities.

Warp drives are even more tightly constrained, as work carried out with our colleagues has shown. In Alcubierre’s model, a warp bubble traveling at 10 times lightspeed (warp factor 2, in the parlance of Star Trek: The Next Generation) must have a wall thickness of no more than 10^-32 meter. A bubble large enough to enclose a starship 200 meters across would require a total amount of negative energy equal to 10 billion times the mass of the observable universe. Similar constraints apply to Krasnikov’s superluminal subway.

A modification of Alcubierre’s model was recently constructed by Chris Van Den Broeck of the Catholic University of Louvain in Belgium. It requires much less negative energy but places the starship in a curved space-time bottle whose neck is about 10^-32 meter across, a difficult feat. These results would seem to make it rather unlikely that one could construct wormholes and warp drives using negative energy generated by quantum effects.

The quantum inequalities prevent violations of the second law. If one tries to use a pulse of negative energy to cool a hot object, it will be quickly followed by a larger pulse of positive energy, which reheats the object. A weak pulse of negative energy could remain separated from its positive counterpart for a longer time, but its effects would be indistinguishable from normal thermal fluctuations. Attempts to capture or split off negative energy from positive energy also appear to fail. One might intercept an energy beam, say, by using a box with a shutter. By closing the shutter, one might hope to trap a pulse of negative energy before the offsetting positive energy arrives. But the very act of closing the shutter creates an energy flux that cancels out the negative energy it was designed to trap.

A pulse of negative energy injected into a charged black hole might momentarily destroy the horizon, exposing the singularity within. But the pulse must be followed by a pulse of positive energy, which would convert the naked singularity back into a black hole – a scenario we have dubbed cosmic flashing. The best chance to observe cosmic flashing would be to maximize the time separation between the negative and positive energy, allowing the naked singularity to last as long as possible. But then the magnitude of the negative energy pulse would have to be very small, according to the quantum inequalities. The change in the mass of the black hole caused by the negative energy pulse will get washed out by the normal quantum fluctuations in the hole’s mass, which are a natural consequence of the uncertainty principle. The view of the naked singularity would thus be blurred, so a distant observer could not unambiguously verify that cosmic censorship had been violated.

Recently it was shown that the quantum inequalities lead to even stronger bounds on negative energy. The positive pulse that necessarily follows an initial negative pulse must do more than compensate for the negative pulse; it must overcompensate. The amount of overcompensation increases with the time interval between the pulses. Therefore, the negative and positive pulses can never be made to exactly cancel each other. The positive energy must always dominate–an effect known as quantum interest. If negative energy is thought of as an energy loan, the loan must be repaid with interest. The longer the loan period or the larger the loan amount, the greater is the interest. Furthermore, the larger the loan, the smaller is the maximum allowed loan period. Nature is a shrewd banker and always calls in its debts.

The concept of negative energy touches on many areas of physics: gravitation, quantum theory, thermodynamics. The interweaving of so many different parts of physics illustrates the tight logical structure of the laws of nature. On the one hand, negative energy seems to be required to reconcile black holes with thermodynamics. On the other, quantum physics prevents unrestricted production of negative energy, which would violate the second law of thermodynamics. Whether these restrictions are also features of some deeper underlying theory, such as quantum gravity, remains to be seen. Nature no doubt has more surprises in store.

Key To Space Time Engineering: Huge Magnetic Field Created

Sustained magnetic fields above 150 T have never been produced in the laboratory, but scientists have now made electrons behave as if they were in a field of about 300 T. Fields far larger than this would be needed for any kind of space-time engineering, so this is not yet at the level we need, but results like this may provide a key for the future. Graphene, the extraordinary form of carbon that consists of a single layer of carbon atoms, has produced another in a long list of experimental surprises. In the current issue of the journal Science, a multi-institutional team of researchers headed by Michael Crommie, a faculty senior scientist in the Materials Sciences Division at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory and a professor of physics at the University of California at Berkeley, reports the creation of pseudo-magnetic fields far stronger than the strongest magnetic fields ever sustained in a laboratory – just by putting the right kind of strain onto a patch of graphene.

“We have shown experimentally that when graphene is stretched to form nanobubbles on a platinum substrate, electrons behave as if they were subject to magnetic fields in excess of 300 tesla, even though no magnetic field has actually been applied,” says Crommie. “This is a completely new physical effect that has no counterpart in any other condensed matter system.”

Crommie notes that “for over 100 years people have been sticking materials into magnetic fields to see how the electrons behave, but it’s impossible to sustain tremendously strong magnetic fields in a laboratory setting.” The current record is 85 tesla for a field that lasts only thousandths of a second. When stronger fields are created, the magnets blow themselves apart.

The ability to make electrons behave as if they were in magnetic fields of 300 tesla or more – just by stretching graphene – offers a new window on a source of important applications and fundamental scientific discoveries going back over a century. This is made possible by graphene’s electronic behavior, which is unlike any other material’s.

[Image Details: In this scanning tunneling microscopy image of a graphene nanobubble, the hexagonal two-dimensional graphene crystal is seen distorted and stretched along three main axes. The strain creates pseudo-magnetic fields far stronger than any magnetic field ever produced in the laboratory. ]

A carbon atom has four valence electrons; in graphene (and in graphite, a stack of graphene layers), three electrons bond in a plane with their neighbors to form a strong hexagonal pattern, like chicken-wire. The fourth electron sticks up out of the plane and is free to hop from one atom to the next. The latter pi-bond electrons act as if they have no mass at all, like photons. They can move at almost one percent of the speed of light.

The idea that a deformation of graphene might lead to the appearance of a pseudo-magnetic field first arose even before graphene sheets had been isolated, in the context of carbon nanotubes (which are simply rolled-up graphene). In early 2010, theorist Francisco Guinea of the Institute of Materials Science of Madrid and his colleagues developed these ideas and predicted that if graphene could be stretched along its three main crystallographic directions, it would effectively act as though it were placed in a uniform magnetic field. This is because strain changes the bond lengths between atoms and affects the way electrons move between them. The pseudo-magnetic field would reveal itself through its effects on electron orbits.

In classical physics, electrons in a magnetic field travel in circles called cyclotron orbits. These were named following Ernest Lawrence’s invention of the cyclotron, because cyclotrons continuously accelerate charged particles (protons, in Lawrence’s case) in a curving path induced by a strong field.

Viewed quantum mechanically, however, cyclotron orbits become quantized and exhibit discrete energy levels. Called Landau levels, these correspond to energies where constructive interference occurs in an orbiting electron’s quantum wave function. The number of electrons occupying each Landau level depends on the strength of the field – the stronger the field, the more energy spacing between Landau levels, and the denser the electron states become at each level – which is a key feature of the predicted pseudo-magnetic fields in graphene.
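
For Dirac-like carriers such as graphene’s, the Landau levels follow E_n = sgn(n)·v_F·sqrt(2eħB|n|), so the spacing grows with the square root of the field and of the level index. The calculation below is my own illustration, using an assumed Fermi velocity of about 10^6 m/s and the ~300 T pseudo-field reported in the experiment.

```python
import math

e = 1.602e-19     # elementary charge, C
hbar = 1.055e-34  # reduced Planck constant, J·s
v_F = 1.0e6       # assumed graphene Fermi velocity, m/s
B = 300.0         # pseudo-magnetic field, T

def landau_level_eV(n: int) -> float:
    """Energy of the nth graphene Landau level, in electron-volts."""
    sign = 1 if n >= 0 else -1
    return sign * v_F * math.sqrt(2.0 * e * hbar * B * abs(n)) / e

for n in range(4):
    print(f"n = {n}: E ≈ {landau_level_eV(n):.2f} eV")
# Unequally spaced levels (E proportional to sqrt(n)) are a fingerprint of Dirac carriers.
```
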

A serendipitous discovery


[Image Details: A patch of graphene at the surface of a platinum substrate exhibits four triangular nanobubbles at its edges and one in the interior. Scanning tunneling spectroscopy taken at intervals across one nanobubble (inset) shows local electron densities clustering in peaks at discrete Landau-level energies. Pseudo-magnetic fields are strongest at regions of greatest curvature.]

Describing their experimental discovery, Crommie says, “We had the benefit of a remarkable stroke of serendipity.”

Crommie’s research group had been using a scanning tunneling microscope to study graphene monolayers grown on a platinum substrate. A scanning tunneling microscope works by using a sharp needle probe that skims along the surface of a material to measure minute changes in electrical current, revealing the density of electron states at each point in the scan while building an image of the surface.

Crommie was meeting with a visiting theorist from Boston University, Antonio Castro Neto, about a completely different topic when a group member came into his office with the latest data. It showed nanobubbles, little pyramid-like protrusions, in a patch of graphene on the platinum surface and associated with the graphene nanobubbles there were distinct peaks in the density of electron states. Crommie says his visitor, Castro Neto, took one look and said, “That looks like the Landau levels predicted for strained graphene.”

Sure enough, close examination of the triangular bubbles revealed that their chicken-wire lattice had been stretched precisely along the three axes needed to induce the strain orientation that Guinea and his coworkers had predicted would give rise to pseudo-magnetic fields. The greater the curvature of the bubbles, the greater the strain, and the greater the strength of the pseudo-magnetic field. The increased density of electron states revealed by scanning tunneling spectroscopy corresponded to Landau levels, in some cases indicating giant pseudo-magnetic fields of 300 tesla or more.

“Getting the right strain resulted from a combination of factors,” Crommie says. “To grow graphene on the platinum we had exposed the platinum to ethylene” – a simple compound of carbon and hydrogen – “and at high temperature the carbon atoms formed a sheet of graphene whose orientation was determined by the platinum’s lattice structure.”

To get the highest resolution from the scanning tunneling microscope, the system was then cooled to a few degrees above absolute zero. Both the graphene and the platinum contracted – but the platinum shrank more, with the result that excess graphene pushed up into bubbles, measuring four to 10 nanometers (billionths of a meter) across and from a third to more than two nanometers high. To confirm that the experimental observations were consistent with theoretical predictions, Castro Neto worked with Guinea to model a nanobubble typical of those found by the Crommie group. The resulting theoretical picture was a near-match to what the experimenters had observed: a strain-induced pseudo-magnetic field some 200 to 400 tesla strong in the regions of greatest strain, for nanobubbles of the correct size.

[Image Details: The colors of a theoretical model of a nanobubble (left) show that the pseudo-magnetic field is greatest where curvature, and thus strain, is greatest. In a graph of experimental observations (right), colors indicate height, not field strength, but measured field effects likewise correspond to regions of greatest strain and closely match the theoretical model.]

“Controlling where electrons live and how they move is an essential feature of all electronic devices,” says Crommie. “New types of control allow us to create new devices, and so our demonstration of strain engineering in graphene provides an entirely new way for mechanically controlling electronic structure in graphene. The effect is so strong that we could do it at room temperature.”

The opportunities for basic science with strain engineering are also huge. For example, in strong pseudo-magnetic fields electrons orbit in tight circles that bump up against one another, potentially leading to novel electron-electron interactions. Says Crommie, “this is the kind of physics that physicists love to explore.”

“Strain-induced pseudo-magnetic fields greater than 300 tesla in graphene nanobubbles,” by Niv Levy, Sarah Burke, Kacey Meaker, Melissa Panlasigui, Alex Zettl, Francisco Guinea, Antonio Castro Neto, and Michael Crommie, appears in the July 30 issue of Science. The work was supported by the Department of Energy’s Office of Science and by the Office of Naval Research. I’ve contacted Crommie to ask for more details of the research, and I hope to get a response from him soon.

[Source: News Center]