Superluminal Speed in Gases

Scientists have apparently broken the universe’s speed limit. For generations, physicists believed there is nothing faster than light moving through a vacuum – a speed of 186,000 miles per second. But in an experiment in Princeton, N.J., physicists sent a pulse of laser light through cesium vapor so quickly that it left the chamber before it had even finished entering. The pulse traveled 310 times the distance it would have covered if the chamber had contained a vacuum.

This seems to contradict not only common sense, but also a bedrock principle of Albert Einstein’s theory of relativity, which sets the speed of light in a vacuum, about 186,000 miles per second, as the fastest that anything can go. But the findings–the long-awaited first clear evidence of faster-than-light motion–are “not at odds with Einstein,” said Lijun Wang, who with colleagues at the NEC Research Institute in Princeton, N.J., report their results in today’s issue of the journal Nature.

“However,” Wang said, “our experiment does show that the generally held misconception that ‘nothing can move faster than the speed of light’ is wrong.” Nothing with mass can exceed the light-speed limit. But physicists now believe that a pulse of light–which is a group of massless individual waves–can.

To demonstrate that, the researchers created a carefully doctored vapor of laser-irradiated atoms that twist, squeeze and ultimately boost the speed of light waves in such abnormal ways that a pulse shoots through the vapor in about 1/300th the time it would take the pulse to go the same distance in a vacuum.

As a general rule, light travels more slowly in any medium more dense than a vacuum (which, by definition, has no density at all). For example, in water, light travels at about three-fourths its vacuum speed; in glass, it’s around two-thirds. The ratio between the speed of light in a vacuum and its speed in a material is called the refractive index. The index can be changed slightly by altering the chemical or physical structure of the medium. Ordinary glass has a refractive index around 1.5. But by adding a bit of lead, it rises to 1.6. The slower speed, and greater bending, of light waves accounts for the more sprightly sparkle of lead crystal glass.
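The ratios quoted above follow directly from the definition of the refractive index, v = c/n. A quick sketch (the index values are the approximate ones quoted in the text):

```python
# Speed of light in a medium: v = c / n, where n is the refractive index.
C_VACUUM = 299_792_458  # m/s, exact by definition

def speed_in_medium(n: float) -> float:
    """Phase velocity of light in a medium with refractive index n."""
    return C_VACUUM / n

for name, n in [("vacuum", 1.0), ("water", 1.33), ("window glass", 1.5), ("lead crystal", 1.6)]:
    v = speed_in_medium(n)
    print(f"{name:13s} n={n:.2f}  v = {v / C_VACUUM:.2f} c")
```

Water (n ≈ 1.33) gives v ≈ 0.75c, the "three-fourths" figure in the text, and ordinary glass (n ≈ 1.5) gives roughly two-thirds.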

The NEC researchers achieved the opposite effect, creating a gaseous medium that, when manipulated with lasers, exhibits a sudden and precipitous drop in refractive index, Wang said, speeding up the passage of a pulse of light. The team used a 2.5-inch-long chamber filled with a vapor of cesium, a metallic element with a goldish color. They then trained several laser beams on the atoms, putting them in a stable but highly unnatural state.

In that condition, a pulse of light or “wave packet” (a cluster made up of many separate interconnected waves of different frequencies) is drastically reconfigured as it passes through the vapor. Some of the component waves are stretched out, others compressed. Yet at the end of the chamber, they recombine and reinforce one another to form exactly the same shape as the original pulse, Wang said. “It’s called re-phasing.”

The key finding is that the reconstituted pulse re-forms before the original intact pulse could have gotten there by simply traveling through empty space. That is, the peak of the pulse is, in effect, extended forward in time. As a result, detectors attached to the beginning and end of the vapor chamber show that the peak of the exiting pulse leaves the chamber about 62 billionths of a second before the peak of the initial pulse finishes going in. That is not the way things usually work. Ordinarily, when sunlight–which, like the pulse in the experiment, is a combination of many different frequencies–passes through a glass prism, the prism disperses the white light’s components.
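The 62-nanosecond figure is roughly consistent with the other numbers in the article. Assuming the 2.5-inch cell length quoted later and the factor of 310, a pulse that effectively covers 310 vacuum cell-lengths in one vacuum transit time exits early by about (310 − 1) × L/c (a back-of-envelope check, not the paper's analysis):

```python
# Rough consistency check of the reported numbers.
C = 299_792_458            # speed of light, m/s
L = 2.5 * 0.0254           # cell length in metres (2.5 inches, quoted later in the text)

vacuum_transit = L / C                  # ~0.21 ns to cross the cell at c
advance = (310 - 1) * vacuum_transit    # time by which the peak exits early
print(f"vacuum transit: {vacuum_transit * 1e9:.2f} ns")
print(f"pulse advance : {advance * 1e9:.1f} ns (reported: ~62 ns)")
```

The sketch gives ~65 ns, within rounding of the reported 62 ns.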


This happens because each frequency moves at a different speed in glass, smearing out the original light beam. Blue is slowed the most, and thus deflected the farthest; red travels fastest and is bent the least. That phenomenon produces the familiar rainbow spectrum.

But the NEC team’s laser-zapped cesium vapor produces the opposite outcome. It bends red more than blue in a process called “anomalous dispersion,” causing an unusual reshuffling of the relationships among the various component light waves. That’s what causes the accelerated re-formation of the pulse, and hence the speed-up. In theory, the work might eventually lead to dramatic improvements in optical transmission rates. “There’s a lot of excitement in the field now,” said Aephraim Steinberg, a quantum optics researcher at the University of Toronto. “People didn’t get into this area for the applications, but we all certainly hope that some applications can come out of it. It’s a gamble, and we just wait and see.”

[Source: Time Travel Research Centre]

Can the Vacuum Be Engineered for Space Flight Applications? Overview of Theory and Experiments


By H. E. Puthoff

Quantum theory predicts, and experiments verify, that empty space (the vacuum) contains an enormous residual background energy known as zero-point energy (ZPE). Originally thought to be of significance only for such esoteric concerns as small perturbations to atomic emission processes, it is now known to play a role in large-scale phenomena of interest to technologists as well, such as the inhibition of spontaneous emission, the generation of short-range attractive forces (e.g., the Casimir force), and the possibility of accounting for sonoluminescence phenomena. ZPE topics of interest for spaceflight applications range from fundamental issues (where does inertia come from, can it be controlled?), through laboratory attempts to extract useful energy from vacuum fluctuations (can the ZPE be “mined” for practical use?), to scientifically grounded extrapolations concerning “engineering the vacuum” (is “warp-drive” space propulsion a scientific possibility?). Recent advances in research into the physics of the underlying ZPE indicate the possibility of potential application in all these areas of interest.

The concept of “engineering the vacuum” was first introduced by Nobel Laureate T. D. Lee in his book Particle Physics and Introduction to Field Theory. As stated in Lee’s book: “The experimental method to alter the properties of the vacuum may be called vacuum engineering…. If indeed we are able to alter the vacuum, then we may encounter some new phenomena, totally unexpected.” Recent experiments have indeed shown this to be the case.

With regard to space propulsion, the question of engineering the vacuum can be put succinctly: “Can empty space itself provide the solution?” Surprisingly enough, there are hints that potential help may in fact emerge quite literally out of the vacuum of so-called “empty space.” Quantum theory tells us that empty space is not truly empty, but rather is the seat of myriad energetic quantum processes that could have profound implications for future space travel. To understand these implications it will serve us to review briefly the historical development of the scientific view of what constitutes empty space.

At the time of the Greek philosophers, Democritus argued that empty space was truly a void, otherwise there would not be room for the motion of atoms. Aristotle, on the other hand, argued equally forcefully that what appeared to be empty space was in fact a plenum (a background filled with substance), for did not heat and light travel from place to place as if carried by some kind of medium? The argument went back and forth through the centuries until finally codified by Maxwell’s theory of the luminiferous ether, a plenum that carried electromagnetic waves, including light, much as water carries waves across its surface. Attempts to measure the properties of this ether, or to measure the Earth’s velocity through the ether (as in the Michelson-Morley experiment), however, met with failure. With the rise of special relativity, which did not require reference to such an underlying substrate, Einstein in 1905 effectively banished the ether in favor of the concept that empty space constitutes a true void. Ten years later, however, Einstein’s own development of the general theory of relativity, with its concept of curved space and distorted geometry, forced him to reverse his stand and opt for a richly-endowed plenum, under the new label spacetime metric.

It was the advent of modern quantum theory, however, that established the quantum vacuum, so-called empty space, as a very active place, with particles arising and disappearing, a virtual plasma, and fields continuously fluctuating about their zero baseline values. The energy associated with such processes is called zero-point energy (ZPE), reflecting the fact that such activity remains even at absolute zero.

The Vacuum As A Potential Energy Source

At its most fundamental level, we now recognize that the quantum vacuum is an enormous reservoir of untapped energy, with energy densities conservatively estimated by Feynman and Hibbs to be on the order of nuclear energy densities or greater. Therefore, the question is, can the ZPE be “mined” for practical use? If so, it would constitute a virtually ubiquitous energy supply, a veritable “Holy Grail” energy source for space propulsion.
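The scale of such estimates comes from summing the zero-point energy ħω/2 over all electromagnetic field modes up to some cutoff frequency. Integrating the standard mode density gives ρ = ħω_c⁴/(8π²c³); taking the cutoff at the Planck frequency (an assumption often made in such estimates, not stated in the text) yields an absurdly large density:

```python
import math

hbar = 1.054_571_817e-34   # reduced Planck constant, J*s
c    = 299_792_458         # speed of light, m/s
G    = 6.674_30e-11        # gravitational constant, m^3 kg^-1 s^-2

# Zero-point energy density of the EM field integrated up to angular
# frequency w_c: rho = hbar * w_c^4 / (8 * pi^2 * c^3).
w_planck = math.sqrt(c**5 / (hbar * G))   # Planck angular frequency, ~1.9e43 rad/s
rho = hbar * w_planck**4 / (8 * math.pi**2 * c**3)
print(f"cutoff {w_planck:.2e} rad/s  ->  rho ~ {rho:.1e} J/m^3")
```

Even a far lower cutoff still dwarfs nuclear energy densities (~10^34 J/m³), which is the sense of the Feynman–Hibbs remark.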

As utopian as such a possibility may seem, physicist Robert Forward at Hughes Research Laboratories demonstrated proof-of-principle in a paper, “Extracting Electrical Energy from the Vacuum by Cohesion of Charged Foliated Conductors.” Forward’s approach exploited a phenomenon called the Casimir Effect, an attractive quantum force between closely-spaced metal plates, named for its discoverer, H. B. G. Casimir of Philips Laboratories in the Netherlands. The Casimir force, recently measured with high accuracy by S. K. Lamoreaux at the University of Washington, derives from partial shielding of the interior region of the plates from the background zero-point fluctuations of the vacuum electromagnetic field. As shown by Los Alamos theorists Milonni et al., this shielding results in the plates being pushed together by the unbalanced ZPE radiation pressures. The result is a corollary conversion of vacuum energy to some other form such as heat. Proof that such a process violates neither energy nor thermodynamic constraints can be found in a paper by a colleague and myself (Cole & Puthoff) under the title “Extracting Energy and Heat from the Vacuum.”
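For a sense of scale, the idealized parallel-plate Casimir pressure is P = π²ħc/(240 d⁴), so the force grows steeply as the gap d shrinks (a standard-formula sketch; this is the textbook ideal-plate result, not taken from the papers cited above):

```python
import math

hbar = 1.054_571_817e-34  # reduced Planck constant, J*s
c    = 299_792_458        # speed of light, m/s

def casimir_pressure(d: float) -> float:
    """Attractive pressure (Pa) between ideal parallel plates a distance d (m) apart."""
    return math.pi**2 * hbar * c / (240 * d**4)

for d in (1e-6, 100e-9, 10e-9):
    print(f"gap {d * 1e9:7.1f} nm -> pressure {casimir_pressure(d):.3g} Pa")
```

At a 100 nm gap the pressure is already ~13 Pa, which is why Lamoreaux-style measurements work at sub-micron separations.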

Attempts to harness the Casimir and related effects for vacuum energy conversion are ongoing in our laboratory and elsewhere. The fact that its potential application to space propulsion has not gone unnoticed by the Air Force can be seen in its request for proposals for the FY-1986 Defense SBIR Program. Under entry AF86-77, Air Force Rocket Propulsion Laboratory (AFRPL), Topic: Non-Conventional Propulsion Concepts, we find the statement: “Bold, new non-conventional propulsion concepts are solicited…. The specific areas in which AFRPL is interested include…. (6) Esoteric energy sources for propulsion including the zero point quantum dynamic energy of vacuum space.”

Several experimental formats for tapping the ZPE for practical use are under investigation in our laboratory. An early one of interest is based on the idea of a Casimir pinch effect in non-neutral plasmas, basically a plasma equivalent of Forward’s electromechanical charged-plate collapse. The underlying physics is described in a paper submitted for publication by myself and a colleague, and it is illustrative that the first of several patents issued to a consultant to our laboratory, K. R. Shoulders (1991), contains the descriptive phrase “…energy is provided… and the ultimate source of this energy appears to be the zero-point radiation of the vacuum continuum.” Another intriguing possibility is provided by the phenomenon of sonoluminescence, bubble collapse in an ultrasonically-driven fluid which is accompanied by intense, sub-nanosecond light radiation. Although the jury is still out as to the mechanism of light generation, Nobelist Julian Schwinger (1993) has argued for a Casimir interpretation. Possibly related experimental evidence for excess heat generation in ultrasonically-driven cavitation in heavy water is claimed in an EPRI report by E-Quest Sciences, although attributed to a nuclear micro-fusion process. Work is under way in our laboratory to see if this claim can be replicated.

Yet another proposal for ZPE extraction is described in a recent patent (Mead & Nachamkin, 1996). The approach proposes the use of resonant dielectric spheres, slightly detuned from each other, to provide a beat-frequency downshift of the more energetic high-frequency components of the ZPE to a more easily captured form. We are discussing the possibility of a collaborative effort between us to determine whether such an approach is feasible. Finally, an approach utilizing micro-cavity techniques to perturb the ground state stability of atomic hydrogen is under consideration in our lab. It is based on a paper of mine (Puthoff, 1987) in which I put forth the hypothesis that the nonradiative nature of the ground state is due to a dynamic equilibrium in which radiation emitted due to accelerated electron ground state motion is compensated by absorption from the ZPE. If this hypothesis is true, there exists the potential for energy generation by the application of the techniques of so-called cavity quantum electrodynamics (QED). In cavity QED, excited atoms are passed through Casimir-like cavities whose structure suppresses electromagnetic cavity modes at the transition frequency between the atom’s excited and ground states. The result is that the so-called “spontaneous” emission time is lengthened considerably (for example, by factors of ten), simply because spontaneous emission is not so spontaneous after all, but rather is driven by vacuum fluctuations. Eliminate the modes, and you eliminate the zero-point fluctuations of the modes, hence suppressing decay of the excited state. As stated in a review article on cavity QED in Scientific American, “An excited atom that would ordinarily emit a low-frequency photon can not do so, because there are no vacuum fluctuations to stimulate its emission.” In its application to energy generation, mode suppression would be used to perturb the hypothesized dynamic ground state absorption/emission balance to lead to energy release.

An example in which Nature herself may have taken advantage of energetic vacuum effects is discussed in a model published by ZPE colleagues A. Rueda of California State University at Long Beach, B. Haisch of Lockheed-Martin, and D. Cole of IBM (1995). In a paper published in the Astrophysical Journal, they propose that the vast reaches of outer space constitute an ideal environment for ZPE acceleration of nuclei and thus provide a mechanism for “powering up” cosmic rays. Details of the model would appear to account for other observed phenomena as well, such as the formation of cosmic voids. This raises the possibility of utilizing a “sub-cosmic-ray” approach to accelerate protons in a cryogenically-cooled, collision-free vacuum trap and thus extract energy from the vacuum fluctuations by this mechanism.

The Vacuum as the Source of Gravity and Inertia

What of the fundamental forces of gravity and inertia that we seek to overcome in space travel? We have phenomenological theories that describe their effects (Newton’s Laws and their relativistic generalizations), but what of their origins?

The first hint that these phenomena might themselves be traceable to roots in the underlying fluctuations of the vacuum came in a study published by the well-known Russian physicist Andrei Sakharov. Searching to derive Einstein’s phenomenological equations for general relativity from a more fundamental set of assumptions, Sakharov came to the conclusion that the entire panoply of general relativistic phenomena could be seen as induced effects brought about by changes in the quantum-fluctuation energy of the vacuum due to the presence of matter. In this view the attractive gravitational force is more akin to the induced Casimir force discussed above than to the fundamental inverse-square-law Coulomb force between charged particles with which it is often compared. Although speculative when first introduced by Sakharov, this hypothesis has led to a rich and ongoing literature, including contributions of my own on quantum-fluctuation-induced gravity, a literature that continues to yield deep insight into the role played by vacuum forces.

Given an apparent deep connection between gravity and the zero-point fluctuations of the vacuum, a similar connection must exist between these selfsame vacuum fluctuations and inertia. This is because it is an empirical fact that the gravitational and inertial masses have the same value, even though the underlying phenomena are quite disparate. Why, for example, should a measure of the resistance of a body to being accelerated, even if far from any gravitational field, have the same value that is associated with the gravitational attraction between bodies? Indeed, if one is determined by vacuum fluctuations, so must the other. To get to the heart of inertia, consider a specific example in which you are standing on a train in the station. As the train leaves the platform with a jolt, you could be thrown to the floor. What is this force that knocks you down, seemingly coming out of nowhere? This phenomenon, which we conveniently label inertia and go on about our physics, is a subtle feature of the universe that has perplexed generations of physicists from Newton to Einstein. Since in this example the sudden disquieting imbalance results from acceleration “relative to the fixed stars,” in its most provocative form one could say that it was the “stars” that delivered the punch. This key feature was emphasized by the Austrian philosopher of science Ernst Mach, and is now known as Mach’s Principle. Nonetheless, the mechanism by which the stars might do this deed has eluded convincing explication.

Addressing this issue in a paper entitled “Inertia as a Zero-Point Field Lorentz Force,” my colleagues and I (Haisch, Rueda & Puthoff, 1994) were successful in tracing the problem of inertia and its connection to Mach’s Principle to the ZPE properties of the vacuum. In a sentence, although a uniformly moving body does not experience a drag force from the (Lorentz-invariant) vacuum fluctuations, an accelerated body meets a resistance (force) proportional to the acceleration. By accelerated we mean, of course, accelerated relative to the fixed stars. It turns out that an argument can be made that the quantum fluctuations of distant matter structure the local vacuum-fluctuation frame of reference. Thus, in the example of the train, the punch was delivered by the wall of vacuum fluctuations acting as a proxy for the fixed stars through which one attempted to accelerate.

The implication for space travel is this: Given the evidence generated in the field of cavity QED (discussed above), there is experimental evidence that vacuum fluctuations can be altered by technological means. This leads to the corollary that, in principle, gravitational and inertial masses can also be altered. The possibility of altering mass with a view to easing the energy burden of future spaceships has been seriously considered by the Advanced Concepts Office of the Propulsion Directorate of the Phillips Laboratory at Edwards Air Force Base. Gravity researcher Robert Forward accepted an assignment to review this concept. His deliverable product was to recommend a broad, multipronged effort involving laboratories from around the world to investigate the inertia model experimentally. The Abstract reads in part:

Many researchers see the vacuum as a central ingredient of 21st-century physics…. Some even believe the vacuum may be harnessed to provide a limitless supply of energy. This report summarizes an attempt to find an experiment that would test the Haisch, Rueda and Puthoff (HRP) conjecture that the mass and inertia of a body are induced effects brought about by changes in the quantum-fluctuation energy of the vacuum…. It was possible to find an experiment that might be able to prove or disprove that the inertial mass of a body can be altered by making changes in the vacuum surrounding the body.

With regard to action items, Forward in fact recommends a ranked list of not one but four experiments to be carried out to address the ZPE-inertia concept and its broad implications. The recommendations included investigation of the proposed “sub-cosmic-ray energy device” mentioned earlier, and the investigation of a hypothesized “inertia-wind” effect proposed by our laboratory and possibly detected in early experimental work, though the latter possibility is highly speculative at this point.

Engineering the Vacuum For “Warp Drive”

Perhaps one of the most speculative, but nonetheless scientifically-grounded, proposals of all is the so-called Alcubierre Warp Drive. Taking on the challenge of determining whether warp drive à la Star Trek was a scientific possibility, general relativity theorist Miguel Alcubierre of the University of Wales set himself the task of determining whether faster-than-light travel was possible within the constraints of standard theory. Although such clearly could not be the case in the flat spacetime of special relativity, general relativity permits consideration of altered spacetime metrics where such a possibility is not a priori ruled out. Alcubierre’s further self-imposed constraints on an acceptable solution included the requirements that no net time distortion should occur (breakfast on Earth, lunch on Alpha Centauri, and home for dinner with your wife and children, not your great-great-great grandchildren), and that the occupants of the spaceship were not to be flattened against the bulkhead by unconscionable accelerations.

A solution meeting all of the above requirements was found and published by Alcubierre in Classical and Quantum Gravity in 1994. The solution discovered by Alcubierre involved the creation of a local distortion of spacetime such that spacetime is expanded behind the spaceship, contracted ahead of it, and yields a hypersurfer-like motion faster than the speed of light as seen by observers outside the disturbed region. In essence, on the outgoing leg of its journey the spaceship is pushed away from Earth and pulled towards its distant destination by the engineered local expansion of spacetime itself. For follow-up on the broader aspects of “metric engineering” concepts, one can refer to a paper published by myself in Physics Essays (Puthoff, 1996). Interestingly enough, the engineering requirements rely on the generation of macroscopic, negative-energy-density, Casimir-like states in the quantum vacuum of the type discussed earlier. Unfortunately, meeting such requirements is beyond technological reach without some unforeseen breakthrough.
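For reference, the line element Alcubierre published takes the compact form (standard notation from the 1994 paper, stated here for orientation):

$$ds^2 = -c^2\,dt^2 + \left[dx - v_s(t)\,f(r_s)\,dt\right]^2 + dy^2 + dz^2,$$

where $v_s = dx_s/dt$ is the speed of the bubble center along its trajectory $x_s(t)$, and $f(r_s)$ is a smooth top-hat function equal to 1 inside the bubble and falling to 0 far outside it. Inside the bubble ($f = 1$) the ship sits at rest in locally flat space; the superluminal motion is carried entirely by the metric, which is why no local speed-of-light limit is violated.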

Related, of course, is the knowledge that general relativity permits the possibility of wormholes, topological tunnels which in principle could connect distant parts of the universe, a cosmic subway so to speak. Publishing in the American Journal of Physics, theorists Morris and Thorne initially outlined in some detail the requirements for traversable wormholes and found that, in principle, the possibility exists provided one has access to Casimir-like, negative-energy-density quantum vacuum states. This has led to a rich literature, summarized recently in a book by Matt Visser of Washington University. Again, the technological requirements appear out of reach for the foreseeable future, perhaps awaiting new techniques for cohering the ZPE vacuum fluctuations in order to meet the energy-density requirements.

Where does this leave us? As we peer into the heavens from the depth of our gravity well, hoping for some “magic” solution that will launch our spacefarers first to the planets and then to the stars, we are reminded of Arthur C. Clarke’s phrase that highly-advanced technology is essentially indistinguishable from magic. Fortunately, such magic appears to be waiting in the wings of our deepening understanding of the quantum vacuum in which we live.

[Source: Can the Vacuum Be Engineered for Space Flight Applications? Overview of Theory and Experiments, by H. E. Puthoff (PDF)]

Negative Energy: From Theory to Lab


Spacetime distortion is a commonly proposed method for superluminal travel. Such spacetime contortions would enable another staple of science fiction as well: faster-than-light travel. Warp drive might appear to violate Einstein’s special theory of relativity. But special relativity says that you cannot outrun a light signal in a fair race in which you and the signal follow the same route. When spacetime is warped, it might be possible to beat a light signal by taking a different route, a shortcut. The contraction of spacetime in front of the bubble and the expansion behind it create such a shortcut.
One problem with Alcubierre’s original model is that the interior of the warp bubble is causally disconnected from its forward edge. A starship captain on the inside cannot steer the bubble or turn it on or off; some external agency must set it up ahead of time. To get around this problem, Krasnikov proposed a “superluminal subway,” a tube of modified spacetime (not the same as a wormhole) connecting Earth and a distant star. Within the tube, superluminal travel in one direction is possible. During the outbound journey at sublight speed, a spaceship crew would create such a tube. On the return journey, they could travel through it at warp speed. Like warp bubbles, the subway involves negative energy.

Almost every faster-than-light travel scheme requires negative energy at very large densities. Today I came across a paper by E. W. Davis in which he describes various experimental conditions under which negative energy can be generated in the lab.

Examples of Exotic or “Negative” Energy

The exotic (energy-condition-violating) mass-energy fields that are known to occur in nature are:

  • Radial electric or magnetic fields. These are borderline exotic; they would violate the energy conditions if their tension were infinitesimally larger for a given energy density.
  • Squeezed quantum states of the electromagnetic field and other squeezed quantum fields;
  • Gravitationally squeezed vacuum electromagnetic zero-point energy.
  • Other quantum fields/states/effects. In general, the local energy density in quantum field theory can be negative due to quantum coherence effects. Other examples that have been studied are Dirac field states: the superposition of two single-particle electron states and the superposition of two multi-electron-positron states. In the former (latter), the energy densities can be negative when two single (multi-) particle states have the same number of electrons (electrons and positrons) or when one state has one more electron (electron-positron pair) than the other.

Since the laws of quantum field theory place no strong restrictions on negative energies and fluxes, it might be possible to produce gross macroscopic effects such as warp drive, traversable wormholes, violation of the second law of thermodynamics, and time machines. The above examples are representative forms of mass-energy that possess negative energy density or are borderline exotic.

Generating Negative Energy in Lab

Davis and Puthoff describe various experiments to generate and detect negative energy in the lab. Some of them are as follows:

1. Squeezed Quantum States: Substantial theoretical and experimental work has shown that in many quantum systems the limits to measurement precision imposed by the quantum vacuum zero-point fluctuations (ZPF) can be breached by decreasing the noise in one observable (or measurable quantity) at the expense of increasing the noise in the conjugate observable; at the same time the variations in the first observable, say the energy, are reduced below the ZPF such that the energy becomes “negative.” “Squeezing” is thus the control of quantum fluctuations and corresponding uncertainties, whereby one can squeeze the variance of one (physically important) observable quantity provided the variance in the (physically unimportant) conjugate variable is stretched/increased. The squeezed quantity possesses an unusually low variance, meaning less variance than would be expected on the basis of the equipartition theorem. One can in principle exploit quantum squeezing to extract energy from one place in the ordinary vacuum at the expense of accumulating excess energy elsewhere.

2. Gravitationally Squeezed Electromagnetic ZPF: A natural source of negative energy comes from the effect that gravitational fields (of astronomical bodies) in space have upon the surrounding vacuum. For example, the gravitational field of the Earth produces a zone of negative energy around it by dragging some of the virtual particle pairs (a.k.a. vacuum ZPF) downward. One can utilize the negative vacuum energy densities, which arise from distortion of the electromagnetic zero point fluctuations due to the interaction with a prescribed gravitational background, for providing a violation of the energy conditions. The squeezed quantum states of quantum optics provide a natural form of matter having negative energy density. The analysis, via quantum optics, shows that gravitation itself provides the mechanism for generating the squeezed vacuum states needed to support stable traversable wormholes. The production of negative energy densities via a squeezed vacuum is a necessary and unavoidable consequence of the interaction or coupling between ordinary matter and gravity, and this defines what is meant by gravitationally squeezed vacuum states.

The general result of the gravitational squeezing effect is that as the gravitational field strength increases, the negative energy zone (surrounding the mass) also increases in strength. The paper’s table shows when gravitational squeezing becomes important for example masses. It shows that in the case of the Earth, Jupiter and the Sun, this squeeze effect is extremely feeble because only ZPF mode wavelengths above 0.2 m – 78 km are affected. For a solar-mass black hole (radius of 2.95 km), the effect is still feeble because only ZPF mode wavelengths above 78 km are affected. But also note that Planck-mass objects will have an enormously strong negative energy zone surrounding them because all ZPF mode wavelengths above 8.50 × 10^-34 meters will be squeezed, in other words, all wavelengths of interest for vacuum fluctuations. Protons will have the strongest negative energy zone in comparison because the squeezing effect includes all ZPF mode wavelengths above 6.50 × 10^-53 meters. Furthermore, a body smaller than a nuclear diameter (≈ 10^-16 m) and containing the mass of a mountain (≈ 10^11 kg) has a fairly strong negative energy zone because all ZPF mode wavelengths above 10^-15 meters will be squeezed.
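The thresholds quoted above shrink with the mass of the body. They are numerically consistent with a cutoff wavelength of roughly 8π times the Schwarzschild radius, λ ≈ 8π · (2GM/c²); this scaling is an inference from the quoted numbers, not a formula stated in this excerpt:

```python
import math

G = 6.674_30e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 299_792_458       # speed of light, m/s

def squeeze_cutoff(mass_kg: float) -> float:
    """ZPF mode wavelength (m) above which gravitational squeezing matters,
    under the ASSUMED form: 8*pi times the Schwarzschild radius 2GM/c^2."""
    return 8 * math.pi * (2 * G * mass_kg / c**2)

masses = {
    "Earth":       5.97e24,   # kg
    "Sun":         1.99e30,
    "Planck mass": 2.18e-8,
    "proton":      1.67e-27,
}
for name, m in masses.items():
    print(f"{name:12s} lambda > {squeeze_cutoff(m):.2e} m")
```

The sketch reproduces the quoted figures: ~0.22 m for Earth, ~74 km for a solar mass, ~8 × 10^-34 m for a Planck mass, and ~6 × 10^-53 m for a proton.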

We are presently unaware of any way to artificially generate gravitational squeezing of the vacuum in the laboratory. This will be left for future investigation. Aliens may help us!!

3. A Moving Mirror: Negative energy can be created by a single moving reflecting surface (a moving mirror). A mirror moving with increasing acceleration generates a flux of negative energy that emanates from its surface and flows out into the space ahead of the mirror. However, this effect is known to be exceedingly small, and it is not the most effective way to generate negative energy for our purposes.

4. Radial Electric/Magnetic Fields: It is beyond the scope of this discussion to include all the technical configurations by which one can generate radial electric or magnetic fields. Suffice it to say that ultrahigh-intensity tabletop lasers have been used to generate extreme electric and magnetic field strengths in the lab. Ultrahigh-intensity lasers use the chirped-pulse amplification (CPA) technique to boost the total output beam power. All laser systems simply repackage energy as a coherent package of optical power, but CPA lasers repackage the laser pulse itself during the amplification process. In typical high-power short-pulse laser systems, it is the peak intensity, not the energy or the fluence, which causes pulse distortion or laser damage. However, the CPA laser dissects a laser pulse according to its frequency components, and reorders it into a time-stretched lower-peak-intensity pulse of the same energy. This benign pulse can then be amplified safely to high energy, and then only afterwards reconstituted as a very short pulse of enormous peak power – a pulse which could never itself have passed safely through the laser system (see Figure 2). Made more tractable in this way, the pulse can be amplified to substantial energies (with orders of magnitude greater peak power) without encountering intensity-related problems.

The extreme output beam power, fields and physical conditions that have been achieved by ultrahigh-intensity tabletop lasers are:

  • Power Intensity: 10^15 – 10^26 W/cm^2 (10^30 W/cm^2 using SLAC as a booster)
  • Peak Power Pulse Duration: ≤ 10^3 femtoseconds
  • E-fields: ≈ 10^14 – 10^18 V/m [note: the critical quantum-electrodynamical (vacuum breakdown) field strength is Ec = me^2c^3/(eħ) ≈ 10^18 V/m; me and e are the electron mass and charge]
  • B-fields: several × 10^6 Tesla [note: the critical quantum-electrodynamical (vacuum breakdown) field strength is Bc = Ec/c ≈ 10^10 Tesla]
  • Ponderomotive Acceleration of Electrons: ≈ 10^17 – 10^30 g’s (1 g = 9.81 m/s^2)
  • Light Pressure: ≈ 10^9 – 10^15 bars
  • Plasma Temperatures: > 10^10 K
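The critical field quoted in the notes above is the standard Schwinger (vacuum breakdown) field of quantum electrodynamics, and its numeric value can be checked directly from the constants:

```python
# Schwinger critical fields from fundamental constants (standard QED
# result; this checks the order-of-magnitude figures quoted above).
m_e  = 9.109e-31   # electron mass, kg
e    = 1.602e-19   # elementary charge, C
c    = 2.998e8     # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J s

E_c = m_e**2 * c**3 / (e * hbar)   # critical electric field, V/m
B_c = E_c / c                      # critical magnetic field, T

print(f"E_c = {E_c:.2e} V/m")   # ~1.3e18 V/m
print(f"B_c = {B_c:.2e} T")     # ~4.4e9 T
```

So "≈ 10^18 V/m" and "≈ 10^10 Tesla" are the right orders of magnitude, with the precise values being about 1.3 × 10^18 V/m and 4.4 × 10^9 T.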

Ultrahigh-intensity lasers can generate an electric field energy density of ~ 10^16 – 10^28 J/m^3 and a magnetic field energy density of ~ 10^19 J/m^3. These energy densities are about the right order of magnitude to explore generating kilometer- to AU-sized wormholes, but that would be difficult to engineer on Earth. However, these energy densities are well above what would be required to explore the generation of micro wormholes in the lab.
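These figures follow from the standard electromagnetic energy densities u_E = ε0E²/2 and u_B = B²/(2μ0). A quick sketch, plugging in sample points from the field ranges quoted above:

```python
eps0 = 8.854e-12   # vacuum permittivity, F/m
mu0  = 1.2566e-6   # vacuum permeability, H/m

def u_electric(E):
    """Electric field energy density, J/m^3."""
    return 0.5 * eps0 * E**2

def u_magnetic(B):
    """Magnetic field energy density, J/m^3."""
    return B**2 / (2 * mu0)

print(f"u_E at E = 1e14 V/m: {u_electric(1e14):.1e} J/m^3")  # ~4.4e16
print(f"u_E at E = 1e18 V/m: {u_electric(1e18):.1e} J/m^3")  # ~4.4e24
print(f"u_B at B = 5e6 T:    {u_magnetic(5e6):.1e} J/m^3")   # ~1.0e19
```

The low end of the quoted electric range (~10^16 J/m^3) and the magnetic figure (~10^19 J/m^3) both check out against the listed field strengths; the 10^28 J/m^3 upper end presumably corresponds to the boosted 10^30 W/cm^2 intensities.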

5. Negative Energy from Squeezed Light: Negative energy can be generated by an array of ultrahigh-intensity (femtosecond) lasers using an ultrafast rotating mirror system. In this scheme a laser beam is passed through an optical cavity resonator made of lithium niobate (LiNbO3) crystal that is shaped like a cylinder with rounded silvered ends to reflect light. The resonator will act to produce a secondary lower-frequency light beam in which the pattern of photons is rearranged into pairs. This is the quantum optical squeezing of light effect that we described previously. Therefore, the squeezed light beam emerging from the resonator will contain pulses of negative energy interspersed with pulses of positive energy, in accordance with the quantum squeezing model.

In this example both the negative and positive energy pulses are of ≈ 10^-15 second duration. We could arrange a set of rapidly rotating mirrors to separate the positive and negative energy pulses from each other. The light beam is to strike each mirror surface at a very shallow angle, while the rotation ensures that the negative energy pulses are reflected at a slightly different angle from the positive energy pulses. A small spatial separation of the two different energy pulses will occur at some distance from the rotating mirror. Another system of mirrors will be needed to redirect the negative energy pulses to an isolated location and concentrate them there. The rotating mirror system can actually be implemented via non-mechanical means.
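A quick estimate shows why that closing caveat matters, i.e. why a mechanically spinning mirror cannot separate femtosecond pulses. A mirror rotating at angular velocity ω deflects the reflected beam at 2ω, so two pulses Δt apart leave at angles differing by only 2ωΔt. The rotation rate, pulse spacing and distance below are illustrative assumptions:

```python
omega = 1e4    # rad/s, an optimistic mechanical rotation rate (assumed)
dt    = 1e-15  # s, spacing between negative- and positive-energy pulses
L     = 10.0   # m, distance to the plane where pulses would separate

dtheta     = 2 * omega * dt   # angular difference between the two pulses
separation = dtheta * L       # lateral separation downstream

print(f"angular difference: {dtheta:.1e} rad")       # 2.0e-11 rad
print(f"separation at {L:.0f} m: {separation:.1e} m")  # 2.0e-10 m
```

A separation of ~0.2 nm, smaller than an atom, is why the "rotating mirror" would in practice have to be realized by much faster non-mechanical means.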

Negative Energy Pulses Generated from Quantum Optical Squeezing. Another way to squeeze light would be to manufacture extremely reliable light pulses containing precisely one, two, or some other fixed number of photons apiece. In one such scheme, a gas of sodium atoms is placed within the squeezing cavity and a laser beam is directed through the gas. The beam is reflected back on itself by a mirror to form a standing wave within the sodium chamber. This wave causes rapid variations in the optical properties of the sodium, thus causing rapid variations in the squeezed light, so that we can induce rapid reflections of pulses by careful design.

On another note, when a quantum state is close to a squeezed vacuum state, there will almost always be some negative energy densities present, and the fluctuations in the energy density start to become nearly as large as the expectation value itself.

Observing Negative Energy in the Lab:

Negative energy should be observable in lab experiments. Negative energy regions in space are predicted to produce a unique signature corresponding to lensing, chromaticity and intensity effects in micro- and macro-lensing events on galactic and extragalactic/cosmological scales. It has been shown that these effects provide a specific signature that allows for discrimination between ordinary (positive mass-energy) and negative mass-energy lenses via the spectral analysis of astronomical lensing events. Theoretical modeling of negative energy lensing effects has led to intense astronomical searches for naturally occurring traversable wormholes in the universe. Computer model simulations, and comparison of their results with recent satellite observations of gamma ray bursts (GRBs), have shown that putative negative energy (i.e., traversable wormhole) lensing events very closely resemble the main features of some GRBs. Current observational data suggest that large amounts of naturally occurring “exotic mass-energy” must have existed sometime between the epoch of galaxy formation and the present in order to (properly) quantitatively account for the “age-of-the-oldest-stars-in-the-galactic-halo” problem and the cosmological evolution parameters.

When background light rays strike a negative energy lensing region, they are swept out of the central region, thus creating an umbra region of zero intensity. At the edges of the umbra the rays accumulate and create a rainbow-like caustic with enhanced light intensity. The lensing of a negative mass-energy region is not analogous to a diverging lens because in certain circumstances it can produce more light enhancement than does the lensing of an equivalent positive mass-energy region. Real background sources in lensing events can have non-uniform brightness distributions on their surfaces and a dependency of their emission with the observing frequency. These complications can result in chromaticity effects, i.e. in spectral changes induced by differential lensing during the event. The modeling of such effects is quite lengthy, somewhat model dependent, and with recent application only to astronomical lensing events. Suffice it to say that future work is necessary to scale down the predicted lensing parameters and characterize their effects for lab experiments in which the negative energy will not be of astronomical magnitude. Present ultrahigh-speed optics and optical cavities, lasers, photonic crystal (and switching) technology, sensitive nano-sensor technology, and other techniques are very likely capable of detecting the very small magnitude lensing effects expected in lab experiments.

Thus it can be concluded that negative energy can be generated in the lab, and it should no longer be dismissed as mere ‘fiction’. In a recent work it was suggested that naturally occurring wormholes can be detected since they attract charged particles, which in turn creates a magnetic field. Well, I’ll review that work and present a model for a realistic ‘Warp Drive’.

[REF: Experimental Concepts for Generating Negative Energy in the Laboratory by E. W. Davis and H. E. Puthoff]

Growing Crops on Other Planets

Science fiction lovers aren’t the only ones captivated by the possibility of colonizing another planet. Scientists are engaging in numerous research projects that focus on determining how habitable other planets are for life. Mars, for example, is revealing more and more evidence that it probably once had liquid water on its surface, and could one day become a home away from home for humans. 

“The spur of colonizing new lands is intrinsic in man,” said Giacomo Certini, a researcher at the Department of Plant, Soil and Environmental Science (DiPSA) at the University of Florence, Italy. “Hence expanding our horizon to other worlds must not be judged strange at all. Moving people and producing food there could be necessary in the future.” 

Humans traveling to Mars, to visit or to colonize, will likely have to make use of resources on the planet rather than take everything they need with them on a spaceship. This means farming their own food on a planet that has a very different ecosystem than Earth’s. Certini and his colleague Riccardo Scalenghe from the University of Palermo, Italy, recently published a study in Planetary and Space Science that makes some encouraging claims. They say the surfaces of Venus, Mars and the Moon appear suitable for agriculture. 

Defining Soil 

The surface of Venus, generated here using data from NASA’s Magellan mission, undergoes resurfacing through weathering processes such as volcanic activity, meteorite impacts and wind erosion. Credit: NASA

Before deciding how planetary soils could be used, the two scientists had to first explore whether the surfaces of the planetary bodies can be defined as true soil. 

“Apart from any philosophical consideration about this matter, definitely assessing that the surface of other planets is soil implies that it ‘behaves’ as a soil,” said Certini. “The knowledge we accumulated during more than a century of soil science on Earth is available to better investigate the history and the potential of the skin of our planetary neighbors.” 

One of the first obstacles in examining planetary surfaces and their usefulness in space exploration is to develop a definition of soil, which has been a topic of much debate. 

“The lack of a unique definition of ‘soil,’ universally accepted, exhaustive, and (one) that clearly states what is the boundary between soil and non-soil makes it difficult to decide what variables must be taken into account for determining if extraterrestrial surfaces are actually soils,” Certini said. 

At the proceedings of the 19th World Congress of Soil Sciences held in Brisbane, Australia, in August, Donald Johnson and Diana Johnson suggested a “universal definition of soil.” They defined soil as “substrate at or near the surface of Earth and similar bodies altered by biological, chemical, and/or physical agents and processes.” 

The surface of the Moon is covered by regolith over a layer of solid rock. Credit: NASA

On Earth, five factors work together in the formation of soil: the parent rock, climate, topography, time and biota (or the organisms in a region such as its flora and fauna). It is this last factor that is still a subject of debate among scientists. A common, summarized definition for soil is a medium that enables plant growth. However, that definition implies that soil can only exist in the presence of biota. Certini argues that soil is material that holds information about its environmental history, and that the presence of life is not a necessity. 

“Most scientists think that biota is necessary to produce soil,” Certini said. “Other scientists, me included, stress the fact that important parts of our own planet, such as the Dry Valleys of Antarctica or the Atacama Desert of Chile, have virtually life-free soils. They demonstrate that soil formation does not require biota.” 

The researchers of this study contend that classifying a material as soil depends primarily on weathering. According to them, a soil is any weathered veneer of a planetary surface that retains information about its climatic and geochemical history. 

On Venus, Mars and the Moon, weathering occurs in different ways. Venus has a dense atmosphere at a pressure that is 91 times the pressure found at sea level on Earth, composed mainly of carbon dioxide and sulphuric acid droplets with some small amounts of water and oxygen. The researchers predict that weathering on Venus could be caused by thermal processes or corrosion carried out by the atmosphere, volcanic eruptions, impacts of large meteorites and wind erosion. 

Using the method of aeroponics, space travelers will be able to grow their own food without soil and using very little water. Credit: NASA

Mars is currently dominated by physical weathering caused by meteorite impacts and thermal variations rather than chemical processes. According to Certini, there is no active volcanism that affects the martian surface but the temperature difference between the two hemispheres causes strong winds. Certini also said that the reddish hue of the planet’s landscape, which is a result of rusting iron minerals, is indicative of chemical weathering in the past. 

On the Moon, a layer of solid rock is covered by a layer of loose debris. The weathering processes seen on the Moon include changes created by meteorite impacts, deposition and chemical interactions caused by solar wind, which interacts with the surface directly. 

Some scientists, however, feel that weathering alone isn’t enough and that the presence of life is an intrinsic part of any soil. 

“The living component of soil is part of its unalienable nature, as is its ability to sustain plant life due to a combination of two major components: soil organic matter and plant nutrients,” said Ellen Graber, researcher at the Institute of Soil, Water and Environmental Sciences at The Volcani Center of Israel’s Agricultural Research Organization. 

One of the primary uses of soil on another planet would be to use it for agriculture—to grow food and sustain any populations that may one day live on that planet. Some scientists, however, are questioning whether soil is really a necessary condition for space farming. 

Soilless Farming – Not Science Fiction 

With the Earth’s increasing population and limited resources, scientists are searching for habitable environments on places such as Mars, Venus and the Moon as potential sites for future human colonies. Credit: NASA

Growing plants without any soil may conjure up images from a Star Trek movie, but it’s hardly science fiction. Aeroponics, as one soilless cultivation process is called, grows plants in an air or mist environment with no soil and very little water. Scientists have been experimenting with the method since the early 1940s, and aeroponics systems have been in use on a commercial basis since 1983. 

“Who says that soil is a precondition for agriculture?” asked Graber. “There are two major preconditions for agriculture, the first being water and the second being plant nutrients. Modern agriculture makes extensive use of ‘soilless growing media,’ which can include many varied solid substrates.” 

In 1997, NASA teamed up with AgriHouse and BioServe Space Technologies to design an experiment to test a soilless plant-growth system on board the Mir Space Station. NASA was particularly interested in this technology because of its low water requirement. Using this method to grow plants in space would reduce the amount of water that needs to be carried during a flight, which in turn decreases the payload. Aeroponically-grown crops also can be a source of oxygen and drinking water for space crews. 

“I would suspect that if and when humankind reaches the stage of settling another planet or the Moon, the techniques for establishing soilless culture there will be well advanced,” Graber predicted. 

Soil: A Key to the Past and the Future 

The Mars Phoenix mission dug into the soil of Mars to see what might be hidden just beneath the surface. Credit:NASA/JPL-Caltech/University of Arizona/Texas A&M University

The surface and soil of a planetary body holds important clues about its habitability, both in its past and in its future. For example, examining soil features has helped scientists show that early Mars was probably wetter and warmer than it is currently. 

“Studying soils on our celestial neighbors means to individuate the sequence of environmental conditions that imposed the present characteristics to soils, thus helping reconstruct the general history of those bodies,” Certini said. 

In 2008, NASA’s Phoenix Mars Lander performed the first wet chemistry experiment using martian soil. Scientists who analyzed the data said the Red Planet appears to have environments more appropriate for sustaining life than was expected, environments that could one day allow human visitors to grow crops. 

“This is more evidence for water because salts are there,” said Phoenix co-investigator Sam Kounaves of Tufts University in a press release issued after the experiment. “We also found a reasonable number of nutrients, or chemicals needed by life as we know it.” 

Researchers found traces of magnesium, sodium, potassium and chloride, and the data also revealed that the soil was alkaline, a finding that challenged a popular belief that the martian surface was acidic. 

This type of information, obtained through soil analyses, becomes important in looking toward the future to determine which planet would be the best candidate for sustaining human colonies.

[Credit: Astrobiology Magazine]

Promising Applications of Carbon Nano Tubes(CNT)

Scientific research on carbon nanotubes has witnessed a large expansion. The fact that CNTs can be produced by relatively simple synthesis processes, together with their record-breaking properties, has led to demonstrations of many different types of applications, ranging from fast field-effect transistors, flat screens, transparent electrodes and electrodes for rechargeable batteries to conducting polymer composites, bulletproof textiles and transparent loudspeakers.

As a result we have seen enormous progress in controlling the growth of CNTs. CNTs with controlled diameter can be grown along a given direction, parallel or perpendicular to the surface. In recent years, narrow-diameter CNTs with two and more walls have been grown at high yield.

Behind this progress one might forget the remaining tantalizing challenges. Samples of CNTs still contain a large amount of disordered forms of carbon, and catalytic metal particles or parts of the growth support still make up a large fraction of the carbon nanotube mass. As-produced CNTs continue to show a relatively wide dispersion of diameters and lengths. Dispersing CNTs and controlling their distribution in a matrix or on a given surface is still a challenge. There has been enormous progress in size-selecting CNTs; however, the techniques often applied limit applications due to the presence of surfactant molecules, or cannot be applied to larger volumes.

Analytical techniques are playing an important role when developing new synthesis, purification and separation processes. Screening of carbon nanotubes is essential for any real world application but is also essential for their fundamental understanding such as the understanding the effect of tube bundling, doping and the role of defects.

At the ‘Centre for materials elaboration and structural studies’, Professor Wolfgang Bacsa and Pascal Puech have focused on screening CNTs with optical methods and on developing physical processes for carbon nanotubes, working closely with the materials chemists at different local institutions. We have focused much of our attention on double-wall carbon nanotubes grown by the catalytic chemical vapour deposition technique.

Their small diameter, high electrical conductivity and large length, as well as the fact that the inner wall is protected from the environment by the outer wall, are all good attributes for incorporating them in polymer composites. Depending on the synthesis process used, we find the two walls are at times strongly or weakly coupled.

By studying their Raman spectra at high pressure, in acids, strongly photo-excited, or on individual tubes, we can observe the effect on the internal and external walls. A good knowledge of the Raman spectra of double-wall CNTs gives us the opportunity to map the Raman signal of ultra-thin slices of composites and determine the distribution, agglomeration state and interaction with the matrix.


TEM images (V. Tishkova, CEMES-CNRS) of double-wall (a) and industrial multiwall carbon nanotubes (b), and Raman G band of double-wall CNTs at high pressure in different pressure media, revealing molecular nanoscale pressure effects.

Working on individual CNTs in collaboration with Anna Swan of Boston University, gave us the opportunity to work on precisely positioned individual suspended CNTs. An individual CNT is an ideal point source and this can be used to map out the focal spot and to learn about the fundamental limitations of high resolution grating spectrometers.

The field of carbon nanotube research has grown enormously during the last decade, making it difficult to follow all the new results in this field. It is quite clear that for applications where macroscopic amounts of CNTs are needed, standardisation of measurement protocols and classification of CNT samples, combined with new processing techniques to deal with large CNT volumes, will be needed. Applications where only minute quantities on a surface are used suffer from the fact that parallel processing is still limited. This shows that further progress in growing CNTs on surfaces is still needed, although there has been a recent breakthrough in growing CNTs in a parallel fashion and with preferentially semiconducting or metallic tubes.

[SOURCE: azonano]

Searching For Alien Life[Part-I] : Designing Organic Explorer

Habitable zone relative to size of stars

Image via Wikipedia

We probably already have the technology to find evidence of extraterrestrial life, and even to send out evidence of our own. But is contacting aliens anything more than wishful thinking? SETI detractors are currently inclined to view alien hunting as contacting dogs by barking. Imagine a species of dog trying to contact another species of dog. How would they do it? By barking or howling, right? Would we notice that as a signal of that type? Would we care? And how much smarter, given some theoretical maximal potential, are we than dogs? The ‘Wow!’ signal detected in 1977 – odd, and the only one of its type – was ignored and regarded as incredible, since it was never repeated. Was that a signal from aliens at a level of development comparable to ours?

In this series of articles (Searching For Alien Life), I’ll delve into the chasm of infinite possibilities of intelligent extraterrestrial beings out there and propose some groundbreaking, mind-boggling, yet not so muddling, technologies to search for alien life. I may return with some old propositions of mine, with new exotic supporting arguments for the tactics.

Current research seeks to understand how complexity arises from simplicity. Much progress has been made in the past few decades, but a good appreciation for some of the most important chemical steps that led to life still eludes us. That’s because life itself is extraordinarily complex, much more so than galaxies, stars, or planets. Consider for a moment the simplest known protein on the Earth. This is insulin, which has 51 amino acids linked in a specific order along a chain. Probability theory can be used to estimate the chances of assembling the correct number and order of amino acids for such a protein molecule. Since there are 20 different types of amino acids, the answer is 1/20^51, which equals ~1/10^66. This means that the 20 amino acids must be randomly assembled 10^66, or a million trillion trillion trillion trillion trillion, times before getting insulin. This is obviously a great many combinations, so many in fact that we could randomly assemble the 20 amino acids trillions of times per second for the entire history of the Universe and still not achieve the correct ordering of this protein. Larger proteins and nucleic acids would be even less probable if chemical evolution operates at random. And to assemble a human being would be vastly less probable, if it happened by chance starting only with atoms or simple molecules.
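The arithmetic behind this estimate is easy to reproduce, following the text's own assumption of a uniformly random choice among 20 amino acids at each of insulin's 51 positions:

```python
from math import log10

n_amino_acids = 20
chain_length  = 51   # residues in insulin, per the text

# Number of equally likely sequences of that length
combinations = n_amino_acids ** chain_length
print(f"20^51 ~ 10^{log10(combinations):.2f}")   # 10^66.35
```

So the "one in 10^66" figure is just 20^51 rounded to the nearest power of ten.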
This is the type of reasoning used by some researchers to argue that we must be alone, or nearly so, in the Universe. They suggest that biology of any kind is a highly unlikely phenomenon. They argue that meaningful molecular complexity can be expected at only a very, very few locations in the Universe, and that Earth is one of these special places. In their view, the fraction of habitable planets on which life arises is extremely small, and intelligent beings are almost improbable. If all their arguments are correct, we should logically be alone. Of all the myriad galaxies, stars, planets, and other wonderful aspects of the Universe, this viewpoint maintains that we are among very few creatures to appreciate the grandeur of it all.

Simulations that resemble conditions on the primordial Earth are now routinely performed with a variety of energies and initial reactants (provided there’s no free oxygen). These experiments demonstrate that unique (or even rare) conditions are unnecessary to produce the precursors of life. Complex acids, bases, and proteinoid compounds are formed under a rather wide variety of physical conditions. And it doesn’t take long for these reasonably complex molecules to form, not nearly as long as probability theory predicts by randomly assembling atoms. Furthermore, every time this type of experiment is done, the results are much the same. The oily organic matter trapped in the test tube always yields the same proportion of acids, bases and rich proteinoids. If chemical evolution were entirely random, we might expect a different result each time the experiment is run. Apparently, electromagnetic forces do govern the complex interactions of the many atoms and molecules in the soupy sea, substituting organization for randomness. Of course, precursors of proteins and nucleic acids are a long way from life itself. But the beginnings of life as we know it seem to be the product of less-than-random interactions between atoms and molecules. This point of view is important for assessing the possibility of radically different organisms in a typical alien environment.

Alien Hunt
The methodologies currently employed by SETI to search for extraterrestrial life are abysmal and far too narrow. I’ve already described the probable guaranteed failure of contact through radio signals. It would be better to send out a probe equipped with organic explorers.

SuperConductor of The Future

Futuristic ideas for the use of superconductors, materials that allow electric current to flow without resistance, are myriad: long-distance, low-voltage electric grids with no transmission loss; fast, magnetically levitated trains; ultra-high-speed supercomputers; superefficient motors and generators; inexhaustible fusion energy – and many others, some in the experimental or demonstration stages.

But superconductors, especially superconducting electromagnets, have been around for a long time. Indeed the first large-scale application of superconductivity was in particle-physics accelerators, where strong magnetic fields steer beams of charged particles toward high-energy collision points.

Accelerators created the superconductor industry, and superconducting magnets have become the natural choice for any application where strong magnetic fields are needed – for magnetic resonance imaging (MRI) in hospitals, for example, or for magnetic separation of minerals in industry. Other scientific uses are numerous, from nuclear magnetic resonance to ion sources for cyclotrons.

[Image Details: A close-up view of a superconducting magnet coil designed at Berkeley Lab, which may be used in a new kind of high-field dipole magnet for a future energy upgrade of CERN’s Large Hadron Collider. (Photo Roy Kaltschmidt) ]

Some of the strongest and most complex superconducting magnets are still built for particle accelerators like CERN’s Large Hadron Collider (LHC). The LHC uses over 1,200 dipole magnets, whose two adjacent coils of superconducting cable create magnetic fields that bend proton beams traveling in opposite directions around a tunnel 27 kilometers in circumference; the LHC also has almost 400 quadrupole magnets, whose coils create a field with four magnetic poles to focus the proton beams within the vacuum chamber and guide them into the experiments.

These LHC magnets use cables made of superconducting niobium titanium (NbTi), and for five years during its construction the LHC contracted for more than 28 percent of the world’s niobium titanium wire production, with significant quantities of NbTi also used in the magnets for the LHC’s giant experiments.

What’s more, although the LHC is still working to reach the energy for which it was designed, the program to improve its future performance is already well underway.

Designing the future

Enabling the accelerators of the future depends on developing magnets with much greater field strengths than are now possible. To do that, we’ll have to use different materials.

Field strength is limited by the amount of current a magnet coil can carry, which in turn depends on physical properties of the superconducting material such as its critical temperature and critical field. Most superconducting magnets built to date are based on NbTi, which is a ductile alloy; the LHC dipoles are designed to operate at magnetic fields of about eight tesla, or 8 T. (Earth’s puny magnetic field is measured in mere millionths of a tesla.)
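The connection between dipole field strength and what a ring of fixed size can do is captured by a standard accelerator-physics rule of thumb (not stated in the article): a particle of momentum p, in GeV/c, bends with radius r = p / (0.3 B) meters in a field of B tesla. Plugging in the LHC's design numbers as a sketch:

```python
def bending_radius_m(p_gev_per_c, b_tesla):
    """Bending radius of a relativistic charged particle (rule of thumb:
    p [GeV/c] = 0.3 * B [T] * r [m])."""
    return p_gev_per_c / (0.3 * b_tesla)

# LHC design values: 7 TeV protons in 8.33 T dipole fields
r = bending_radius_m(7000.0, 8.33)
print(f"bending radius: {r:.0f} m")   # ~2801 m
```

That radius corresponds to roughly 2πr ≈ 17.6 km of bending in the 27 km tunnel, consistent with the dipoles occupying about two-thirds of the ring, and it shows directly why doubling the field would double the reachable beam energy in the same tunnel.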

The LHC Accelerator Research Program (LARP) is a collaboration among DOE laboratories that’s an important part of U.S. participation in the LHC. Sabbi heads both the Magnet Systems component of LARP and Berkeley Lab’s Superconducting Magnet Program. These programs are currently developing accelerator magnets built with niobium tin (Nb3Sn), a brittle material requiring special fabrication processes but able to generate about twice the field of niobium titanium. Yet the goal for magnets of the future is already set much higher.

Among the most promising new materials for future magnets are some of the high-temperature superconductors. Unfortunately they’re very difficult to work with. One of the most promising of all is the high-temperature superconductor Bi-2212 (bismuth strontium calcium copper oxide).

[Image Details: In the process called “wind and react,” Bi-2212 wire – shown in cross section, upper right, with the powdered superconductor in a matrix of silver – is woven into flat cables, the cables are wrapped into coils, and the coils are gradually heated in a special oven (bottom).]

“High temperature” is a relative term. It commonly refers to materials that become superconducting above the boiling point of liquid nitrogen, a toasty 77 kelvin (-321 degrees Fahrenheit). But in high-field magnets even high-temperature superconductors will be used at low temperatures. Bi-2212 shows why: although it becomes superconducting at 95 K, its ability to carry high currents and thus generate a high magnetic field increases as the temperature is lowered, typically down to 4.2 K, the boiling point of liquid helium at atmospheric pressure.

In experimental situations Bi-2212 has generated fields of 25 T and could go much higher. But like many high-temperature superconductors Bi-2212 is not a metal alloy but a ceramic, virtually as brittle as a china plate.

As part of the Very High Field Superconducting Magnet Collaboration, which brings together several national laboratories, universities, and industry partners, Berkeley Lab’s program to develop new superconducting materials for high-field magnets recently gained support from the American Recovery and Reinvestment Act (ARRA).

Under the direction of Daniel Dietderich and Arno Godeke, AFRD’s Superconducting Magnet Program is investigating Bi-2212 and other candidate materials. One of the things that makes Bi-2212 promising is that it is now available in the form of round wires.

“The wires are essentially tubes filled with tiny particles of ground-up Bi-2212 in a silver matrix,” Godeke explains. “While the individual particles are superconducting, the wires aren’t – and can’t be, until they’ve been heat treated so the individual particles melt and grow new textured crystals upon cooling – thus welding all of the material together in the right orientation.”

Orientation is important because Bi-2212 has a layered crystalline structure in which current flows only through two-dimensional planes of copper and oxygen atoms. Out of the plane, current can’t penetrate the intervening layers of other atoms, so the copper-oxygen planes must line up if current is to move without resistance from one Bi-2212 particle to the next.

In a coil fabrication process called “wind and react,” the wires are first assembled into flat cables, and the cables are wound into coils. The entire coil is then heated to 888 degrees Celsius in a pure oxygen environment. During the “partial melt” stage of the reaction, the temperature of the coil must be controlled to within a single degree: it is held at 888 C for one hour and then slowly cooled.

Silver is the only practical matrix material that allows the wires to “breathe” oxygen during the reaction and align their Bi-2212 grains. Unfortunately 888 C is near the melting point of silver, and during the process the silver may become too soft to resist high stress, which will come from the high magnetic fields themselves: the tremendous forces they generate will do their best to blow the coils apart. So far, attempts to process coils have often resulted in damage to the wires, with resultant Bi-2212 current leakage, local hot spots, and other problems. Dietderich says:

“The goal of the program to develop Bi-2212 for high-field magnets is to improve the entire suite of wire, cable, coil-making, and magnet construction technologies. Magnet technologies are getting close, but the wires are still a challenge. For example, we need to improve current density by a factor of three or four.”

“Once the processing steps have been optimized, the results will have to be tested under the most extreme conditions. Instead of trying to predict coil performance from testing a few strands of wire and extrapolating the results, we need to test the whole cable at operating field strengths. To do this we employ subscale technology: what we can learn from testing a one-third scale structure is reliable at full scale as well.”

Testing the results

The LD1 test magnet design in cross section. The 100 by 150 millimeter rectangular aperture, center, is enclosed by the coils, then by iron pressure pads, and then by the iron yoke segments. The outer diameter of the magnet is 1.36 meters.

Enter the second part of ARRA’s support for future magnets, directed at the Large Dipole Testing Facility. 

“The key element is a test magnet with a large bore, 100 millimeters high by 150 millimeters wide – enough to insert pieces of cable and even miniature coils, so that we can test wires and components without having to build an entire magnet every time,” says AFRD’s Paolo Ferracin, who heads the design of the Large Dipole test magnet.

Called LD1, the test magnet will be based on niobium-tin technology and will exert a field of up to 15 T across the height of the aperture. Inside the aperture, two cable samples will be arranged back to back, running current in opposite directions to minimize the forces generated by interaction between the sample and the external field applied by LD1.

The magnet itself will be about two meters long, mounted vertically in a cryostat underground. LD1’s coils will be cooled to 4.5 K, but a separate cryostat in the bore will allow samples to be tested at temperatures of 10 to 20 K.

“There are two aspects to the design of LD1,” says Ferracin. “The magnetic design deals with how to put the conductors around the aperture to get the field you want. Then you need a support structure to deal with the tremendous forces you create, which is a matter of mechanical design.” LD1 will generate horizontal forces equivalent to the weight of 10 fully loaded 747s; imagine hanging them all from a two-meter beam and requiring that the beam not move more than a tenth of a millimeter.
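The “10 fully loaded 747s” comparison can be rough-checked with back-of-the-envelope arithmetic. The aircraft mass below is an assumed round figure (roughly 400 tonnes at maximum takeoff weight), not a number from the article:

```python
G = 9.81             # standard gravitational acceleration, m/s^2
MASS_747_KG = 4.0e5  # assumed ~400-tonne fully loaded 747 (illustrative)
N_PLANES = 10

# Total weight the support structure must resist, in newtons.
total_force_n = N_PLANES * MASS_747_KG * G
print(f"{total_force_n:.2e} N")  # ~3.92e+07 N, i.e. roughly 40 meganewtons
```

Holding a force of that order to a deflection of a tenth of a millimeter over a two-meter beam is what drives the heavy iron yoke and pressure-pad structure described above.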

At top, two superconducting coils enclose a beam pipe. Field strength is indicated by color, with greatest strength in deep red. To test components of such an arrangement, subscale coils (bottom) will be assessed, starting with only half a dozen cable winds generating a modest two or three tesla, increasing to hybrid assemblies capable of generating up to 10 T.

“Since one of the most important aspects of cables and model coils is their behavior under stress, we need to add mechanical pressure up to 200 megapascals” – roughly 30,000 pounds per square inch. “We have developed clamping structures that can provide the required force, but devising a mechanism that can apply the pressure during a test will be another major challenge.”
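The megapascal-to-psi figure quoted here can be checked directly; the 30,000 psi value is a round-up of the exact conversion (a quick sketch, using the standard 6,894.757 pascals per psi):

```python
PA_PER_PSI = 6894.757  # pascals in one pound per square inch

pressure_pa = 200.0e6       # 200 megapascals
pressure_psi = pressure_pa / PA_PER_PSI
print(round(pressure_psi))  # 29008 -- rounded up to 30,000 psi in the text
```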

The cable samples and miniature coils will incorporate built-in voltage taps, strain gauges, and thermocouples so their behavior can be checked under a range of conditions, including quenches – sudden losses of superconductivity and the resultant rapid heating, as dense electric currents are dumped into conventional conductors like aluminum or copper. The design of the LD1 is based on Berkeley Lab’s prior success building high-field dipole magnets, which hold the world’s record for high-energy physics uses. The new test facility will allow testing the advanced designs for conductors and magnets needed for future accelerators like the High-Energy LHC and the proposed Muon Collider.

These magnets are being developed to make the highest-energy colliders possible. But as we have seen in the past, the new technology will benefit many other fields as well, from undulators for next-generation light sources to more compact medical devices. ARRA’s support for LD1 is an investment in the nation’s science and energy future.

[Source: Berkeley Lab]
