Black Holes Serving as Particle Accelerators

[Image Details: A particle collision at the RHIC.]

Black hole particle accelerator!! Sounds strange!! Well, it is not as strange as it may at first appear. Particle accelerators are devices which are generally used to raise particles to very high energy levels.

Beams of high-energy particles are useful for both fundamental and applied research in the sciences, and also in many technical and industrial fields unrelated to fundamental research. It has been estimated that there are approximately 26,000 accelerators worldwide. Of these, only about 1% are the research machines with energies above 1 GeV (that are the main focus of this article), about 44% are for radiotherapy, about 41% for ion implantation, about 9% for industrial processing and research, and about 4% for biomedical and other low-energy research.

For the most basic inquiries into the dynamics and structure of matter, space, and time, physicists seek the simplest kinds of interactions at the highest possible energies. These typically entail particle energies of many GeV, and the interactions of the simplest kinds of particles: leptons (e.g. electrons and positrons) and quarks for the matter, or photons and gluons for the field quanta. Since isolated quarks are experimentally unavailable due to color confinement, the simplest available experiments involve the interactions of, first, leptons with each other, and second, of leptons with nucleons, which are composed of quarks and gluons. To study the collisions of quarks with each other, scientists resort to collisions of nucleons, which at high energy may be usefully considered as essentially 2-body interactions of the quarks and gluons of which they are composed. Thus elementary particle physicists tend to use machines creating beams of electrons, positrons, protons, and anti-protons, interacting with each other or with the simplest nuclei (e.g., hydrogen or deuterium) at the highest possible energies, generally hundreds of GeV or more. Nuclear physicists and cosmologists may use beams of bare atomic nuclei, stripped of electrons, to investigate the structure, interactions, and properties of the nuclei themselves, and of condensed matter at extremely high temperatures and densities, such as might have occurred in the first moments of the Big Bang. These investigations often involve collisions of heavy nuclei – of atoms like iron or gold – at energies of several GeV per nucleon.

[Image Details: A typical Cyclotron]

Currently, we accelerate particles to high energy levels by increasing a particle's kinetic energy with very strong electromagnetic fields; the particles are accelerated according to the Lorentz force. However, such particle accelerators have limitations: we can't push particles to arbitrarily high energy levels, and a great deal of distance has to be covered before a particle acquires the desired speed.
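
To make the mechanism concrete, here is a minimal Python sketch of my own (not from any source discussed here); the field and velocity values are assumed for illustration, with 8.33 T being the nominal LHC dipole field. It evaluates the Lorentz force F = q(E + v × B) and the energy gained crossing an accelerating gap:

```python
# Minimal sketch of the Lorentz force on a proton; the field and velocity
# values below are assumed for illustration only.
import numpy as np

q = 1.602e-19                        # proton charge [C]
E = np.array([5.0e6, 0.0, 0.0])      # accelerating electric field [V/m] (assumed)
B = np.array([0.0, 0.0, 8.33])       # bending dipole field [T] (nominal LHC value)
v = np.array([0.0, 2.9e8, 0.0])      # proton velocity [m/s] (assumed)

F = q * (E + np.cross(v, B))         # Lorentz force, F = q(E + v x B)
print("force on the proton [N]:", F)

# The electric field does the accelerating: crossing a 5 MV gap adds q*V of energy.
delta_E = q * 5.0e6
print("energy gain per 5 MV gap:", delta_E / 1.602e-19 / 1e6, "MeV")  # 5 MeV
```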

This can be appreciated from the astounding details of the LHC. The precise circumference of the LHC accelerator is 26,659 m, with a total of 9,300 magnets inside. Not only is the LHC the world’s largest particle accelerator, just one-eighth of its cryogenic distribution system would qualify as the world’s largest fridge. It can collide particles at energies of up to 14.0 TeV.

It is pretty obvious that accelerating particles much beyond this energy level would be almost impossible with such machines.

An advanced civilization at the development level of Type III or Type IV would more likely choose to exploit black holes rather than engineer an LHC or Tevatron at astrophysical scale. Kaluza-Klein black holes are excellent for this purpose: they are very similar to Kerr black holes, except that they are charged.

Kerr Black Holes

Kerr spacetime is the unique explicitly defined model of the gravitational field of a rotating star. The spacetime is fully revealed only when the star collapses, leaving a black hole — otherwise the bulk of the star blocks exploration. The qualitative character of Kerr spacetime depends on its mass and its rate of rotation, the most interesting case being when the rotation is slow. (If the rotation stops completely, Kerr spacetime reduces to Schwarzschild spacetime.)

The existence of black holes in our universe is generally accepted — by now it would be hard for astronomers to run the universe without them. Everyone knows that no light can escape from a black hole, but convincing evidence for their existence is provided by their effect on their visible neighbors, as when an observable star behaves like one of a binary pair but no companion is visible.

Suppose that, travelling in our spacecraft, we approach an isolated, slowly rotating black hole. It can then be observed as a black disk against the stars of the background sky. Explorers familiar with Schwarzschild black holes will refuse to cross its boundary horizon. First of all, return trips through a horizon are never possible, and in the Schwarzschild case there is a more immediate objection: after the passage, any material object will, in a fraction of a second, be devoured by a singularity in spacetime.

If we dare to penetrate the horizon of this Kerr black hole we will find … another horizon. Behind this, the singularity in spacetime now appears, not as a central focus, but as a ring — a circle of infinite gravitational forces. Fortunately, this ring singularity is not quite as dangerous as the Schwarzschild one — it is possible to avoid it and enter a new region of spacetime, by passing through either of two “throats” bounded by the ring (see The Big Picture).

In the new region, escape from the ring singularity is easy because the gravitational effect of the black hole is reversed — it now repels rather than attracts. As distance increases, this negative gravity weakens, just as on the positive side, until its effect becomes negligible.

A quick departure may be prudent, but will prevent discovery of something strange: the ring singularity is the outer equator of a spatial solid torus that is, quite simply, a time machine. Travelling within it, one can reach arbitrarily far back into the past of any entity inside the double horizons. In principle you can arrange a bridge game, with all four players being you yourself, at different ages. But there is no way to meet Julius Caesar or your (predeparture) childhood self since these lie on the other side of two impassable horizons.

This rough description is reasonably accurate within its limits, but its apparent completeness is deceptive. Kerr spacetime is vaster — and more symmetrical. Outside the horizons, it turns out that the model described above lacks a distant past, and, on the negative gravity side, a distant future. Harder to imagine are the deficiencies of the spacetime region between the two horizons. This region definitely does not resemble the Newtonian 3-space between two bounding spheres, furnished with a clock to tell time. In it, space and time are turbulently mixed. Pebbles dropped experimentally there can simply vanish in finite time — and new objects can magically appear.

Recently, an interesting observation was made that black holes can accelerate particles up to unlimited energies Ecm in the centre-of-mass frame. These results were obtained for the Kerr metric (and were also extended to the extremal Kerr-Newman one). It was demonstrated that the effect in question exists in a generic black hole background (so the black hole can be surrounded by matter) provided the black hole is rotating. Thus, rotation seemed to be an essential part of the effect. It is also necessary that one of the colliding particles have the angular momentum L1 = E1/ωH, where E1 is its energy and ωH is the angular velocity of the horizon of the (generic, rotating) black hole. If ωH → 0, then L1 → ∞, so for any particle with finite L the effect becomes impossible. In the Schwarzschild space-time, for example, the ratio Ecm/m (where m is the mass of the particles) is finite and cannot exceed 2√5 for particles coming from infinity.
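
To see where that 2√5 bound comes from, here is a small Python check of my own (not taken from the papers discussed): it evaluates the centre-of-mass energy of two equal-mass particles dropped from rest at infinity onto a Schwarzschild black hole, colliding just outside the horizon with the critical angular momenta L = ±4M.

```python
# Centre-of-mass energy of two particles colliding near a Schwarzschild horizon.
# Units G = c = M = 1; equatorial geodesics with specific energy E = 1 (dropped
# from rest at infinity) and specific angular momenta L1, L2.
import numpy as np

def four_velocity(r, E, L):
    """(u^t, u^r, u^phi) for an ingoing equatorial Schwarzschild geodesic."""
    f = 1.0 - 2.0 / r                        # g_tt = -f, g_rr = 1/f, g_phiphi = r^2
    ut = E / f
    ur2 = E**2 - f * (1.0 + L**2 / r**2)     # radial equation of motion
    ur = -np.sqrt(max(ur2, 0.0))             # ingoing branch
    return ut, ur, L / r**2

def e_cm(r, L1, L2, m=1.0, E=1.0):
    """E_cm^2 = 2 m^2 (1 - g_{mu nu} u1^mu u2^nu) for equal masses m."""
    f = 1.0 - 2.0 / r
    t1, r1, p1 = four_velocity(r, E, L1)
    t2, r2, p2 = four_velocity(r, E, L2)
    u1_dot_u2 = -f * t1 * t2 + r1 * r2 / f + r**2 * p1 * p2
    return np.sqrt(2.0 * m**2 * (1.0 - u1_dot_u2))

# Collide just outside the horizon r = 2M with the critical momenta L = +/- 4M:
for r in (3.0, 2.1, 2.001, 2.000001):
    print(f"r = {r:<10} E_cm/m = {e_cm(r, 4.0, -4.0):.4f}")   # -> 2*sqrt(5) ~ 4.4721
```

The ratio saturates near 2√5 ≈ 4.47 as the collision point approaches the horizon, which is exactly the Schwarzschild limitation described above; in the rotating case the analogous quantity can grow without bound.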

Meanwhile, the role played by angular momentum and rotation is sometimes effectively modeled by electric charge and potential in spherically-symmetric space-times. So one may ask: can we achieve infinite acceleration without rotation, simply through the presence of electric charge? Apart from its intrinsic interest, a positive answer would also be important because spherically-symmetric space-times are usually much simpler and admit much more detailed investigation, while mimicking relevant features of rotating space-times. In a research paper, Oleg B. Zaslavskii showed that the centre-of-mass energy can indeed reach very high, almost unbounded, values before collision. Following the analysis and the energy equations, the answer is ‘Yes!’.
A similar conclusion was also reached by Pu Zion Mao in a research paper, ‘Kaluza-Klein Black Holes Serving as Particle Accelerators’.
Consider two massive particles with angular momenta L1 and L2 falling into such a black hole.

Plotting r against the centre-of-mass energy near the horizon of a Kaluza-Klein black hole (Fig. 1 and Fig. 2 of that paper) shows that there exists a critical angular momentum Lc = 2μ/√(1-ν²) for the geodesic of a particle to reach the horizon. If L > Lc, the geodesic never reaches the horizon. On the other hand, if the angular momentum is too small, the particle falls into the black hole and the CM energy of the collision is limited. However, when L1 or L2 takes the critical value L = 2μ/√(1-ν²), the CM energy is unlimited, with no restriction on the angular momentum per unit mass J/M of the black hole.
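
As a toy illustration of this criterion (my own sketch; μ and ν stand for the mass and boost parameters exactly as defined in that paper, and the numerical values below are made up), one can simply compare a particle's angular momentum with the critical value:

```python
# Toy check of the critical-angular-momentum condition quoted above.
# mu and nu are the Kaluza-Klein black hole parameters as defined in the paper;
# the numerical values here are assumed purely for illustration.
import math

def critical_L(mu, nu):
    """Critical angular momentum Lc = 2*mu / sqrt(1 - nu^2)."""
    return 2.0 * mu / math.sqrt(1.0 - nu**2)

def reaches_horizon(L, mu, nu):
    """Geodesics with L > Lc never reach the horizon; L = Lc gives unbounded E_cm."""
    return L <= critical_L(mu, nu)

mu, nu = 1.0, 0.5
Lc = critical_L(mu, nu)
print("Lc =", Lc)
for L in (0.5 * Lc, Lc, 1.5 * Lc):
    print(f"L = {L:.3f}  reaches horizon: {reaches_horizon(L, mu, nu)}")
```
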
Now, it seems quite mesmerizing that an advanced alien civilization would more likely prefer to use black holes as particle accelerators. However, actually implementing such a scheme would be another matter.

Dark Flow, Gravity and Love

By Rob Bryanton

The above video is tied to a previous blog entry from last January of the same name, Dark Flow.

Last time, in Placebos and Biocentrism, we returned to the idea that so much of what we talk about with this project is tied to visualizing our reality from “outside” of spacetime, a perspective that many of the great minds of the last hundred years have also tried to get us to embrace. Here’s a quote I just came across from Max Planck that I think is particularly powerful:

As a man who has devoted his whole life to the most clear headed science, to the study of matter, I can tell you as a result of my research about atoms this much: There is no matter as such. All matter originates and exists only by virtue of a force which brings the particle of an atom to vibration and holds this most minute solar system of the atom together. We must assume behind this force the existence of a conscious and intelligent mind. This mind is the matrix of all matter.

[Image Details: Golden ratio line]

It’s so easy to look at some of the phrases from this quote and imagine them on some new age site, where mainstream scientists would then smugly dismiss these ideas as hogwash from crackpots. Like me, folks like Dan Winter and Nassim Haramein also sometimes get painted with the crackpot brush, but they are both serious about the ideas they are exploring, and they are not far away from the ideas that Max Planck promoted above, or that I have been pursuing with my project.

On July 1st of this year, I published a well-received blog entry called Love and Gravity. It looked at some new age ideas about wellness and spirituality, and related them to some mainstream science ideas about extra dimensions, timelessness, and the fact that physicists tell us that gravity is the only force which exerts itself across the extra dimensions.

Last week Dan Winter forwarded me a link to a new web page of his which yet again seems to tie into the same viewpoint that I’m promoting: Dan is calling this new page “Gravity is Love“. As usual, this page is a sprawling collection of graphics, animations, and articles, most of which are found on a number of Dan’s other pages, but there’s important new information here as well. Here’s a few paragraphs excerpted from the page which will give you the flavor of what Dan is saying about this concept:

Love really IS the nature of gravity!
First we discovered Golden Ratio identifies the change in pressure over time- of the TOUCH that says I LOVE YOU: goldenmean.info/touch
Then we discovered (with Korotkov’s help) … that the moment of peak perception- bliss – enlightenment- was defined by Golden Ratio in brainwaves :goldenmean.info/clinicalintro
Then medicine discovered: the healthy heart is a fractal heart. ( References/ pictures:goldenmean.info/holarchy, and also: goldenmean.info/heartmathmistake
Then – I pioneered the proof that Golden Ratio perfects fractality because being perfect wave interference it is therefore perfect compression. It is my view that all centripetal forces- like gravity, life, consciousness, and black holes, are CAUSED by Golden Ratio in waves of charge.
Nassim Haramein says that although he sees Golden Ratio emerge from his black hole equations repeatedly – he sees it as an effect of black holes/ gravity – not the cause… Clearly – from the logic of waves – I say the black hole / gravity is the effect of golden ratio and not the other way around!
– although some might say that this is a chicken and egg difference – may be just semantics… at least we agree on the profound importance of Golden RATIO…/ fractality…
AND love :

perfect embedding IS perfect fusion IS perfect compression… ah the romance.

Dan Winter is a fascinating fellow, I hope you can spend some time following the links in the above quote. Next time we’re going to look at another somewhat related approach to imagining the extra-dimensional patterns that link us all together, in an entry called Biosemiotics: Monkeys, Metallica, and Music.

Enjoy the journey!

Rob Bryanton

SuperConductor of The Future

Futuristic ideas for the use of superconductors, materials that allow electric current to flow without resistance, are myriad: long-distance, low-voltage electric grids with no transmission loss; fast, magnetically levitated trains; ultra-high-speed supercomputers; superefficient motors and generators; inexhaustible fusion energy – and many others, some in the experimental or demonstration stages.

But superconductors, especially superconducting electromagnets, have been around for a long time. Indeed the first large-scale application of superconductivity was in particle-physics accelerators, where strong magnetic fields steer beams of charged particles toward high-energy collision points.

Accelerators created the superconductor industry, and superconducting magnets have become the natural choice for any application where strong magnetic fields are needed – for magnetic resonance imaging (MRI) in hospitals, for example, or for magnetic separation of minerals in industry. Other scientific uses are numerous, from nuclear magnetic resonance to ion sources for cyclotrons.

[Image Details: A close-up view of a superconducting magnet coil designed at Berkeley Lab, which may be used in a new kind of high-field dipole magnet for a future energy upgrade of CERN’s Large Hadron Collider. (Photo Roy Kaltschmidt) ]

Some of  the strongest and most complex superconducting magnets are still built for particle accelerators like CERN’s Large Hadron Collider (LHC). The LHC uses over 1,200 dipole magnets, whose two adjacent coils of superconducting cable create magnetic fields that bend proton beams traveling in opposite directions around a tunnel 27 kilometers in circumference; the LHC also has almost 400 quadrupole magnets, whose coils create a field with four magnetic poles to focus the proton beams within the vacuum chamber and guide them into the experiments.

These LHC magnets use cables made of superconducting niobium titanium (NbTi), and for five years during its construction the LHC contracted for more than 28 percent of the world’s niobium titanium wire production, with significant quantities of NbTi also used in the magnets for the LHC’s giant experiments.

What’s more, although the LHC is still working to reach the energy for which it was designed, the program to improve its future performance is already well underway.

Designing the future

Enabling the accelerators of the future depends on developing magnets with much greater field strengths than are now possible. To do that, we’ll have to use different materials.

Field strength is limited by the amount of current a magnet coil can carry, which in turn depends on physical properties of the superconducting material such as its critical temperature and critical field. Most superconducting magnets built to date are based on NbTi, which is a ductile alloy; the LHC dipoles are designed to operate at magnetic fields of about eight tesla, or 8 T. (Earth’s puny magnetic field is measured in mere millionths of a tesla.)

The LHC Accelerator Research Program (LARP) is a collaboration among DOE laboratories that’s an important part of U.S. participation in the LHC. Sabbi heads both the Magnet Systems component of LARP and Berkeley Lab’s Superconducting Magnet Program. These programs are currently developing accelerator magnets built with niobium tin (Nb3Sn), a brittle material requiring special fabrication processes but able to generate about twice the field of niobium titanium. Yet the goal for magnets of the future is already set much higher.

Among the most promising new materials for future magnets are some of the high-temperature superconductors; unfortunately, they are very difficult to work with. One of the most promising of all is the high-temperature superconductor Bi-2212 (bismuth strontium calcium copper oxide).

[Image Details: In the process called “wind and react,” Bi-2212 wire – shown in cross section, upper right, with the powdered superconductor in a matrix of silver – is woven into flat cables, the cables are wrapped into coils, and the coils are gradually heated in a special oven (bottom).]

“High temperature” is a relative term. It commonly refers to materials that become superconducting above the boiling point of liquid nitrogen, a toasty 77 kelvin (-321 degrees Fahrenheit). But in high-field magnets even high-temperature superconductors will be used at low temperatures. Bi-2212 shows why: although it becomes superconducting at 95 K, its ability to carry high currents and thus generate a high magnetic field increases as the temperature is lowered, typically down to 4.2 K, the boiling point of liquid helium at atmospheric pressure.

In experimental situations Bi-2212 has generated fields of 25 T and could go much higher. But like many high-temperature superconductors Bi-2212 is not a metal alloy but a ceramic, virtually as brittle as a china plate.

As part of the Very High Field Superconducting Magnet Collaboration, which brings together several national laboratories, universities, and industry partners, Berkeley Lab’s program to develop new superconducting materials for high-field magnets recently gained support from the American Recovery and Reinvestment Act (ARRA).

Under the direction of Daniel Dietderich and Arno Godeke, AFRD’s Superconducting Magnet Program is investigating Bi-2212 and other candidate materials. One of the things that makes Bi-2212 promising is that it is now available in the form of round wires.

“The wires are essentially tubes filled with tiny particles of ground-up Bi-2212 in a silver matrix,” Godeke explains. “While the individual particles are superconducting, the wires aren’t – and can’t be, until they’ve been heat treated so the individual particles melt and grow new textured crystals upon cooling – thus welding all of the material together in the right orientation.”

Orientation is important because Bi-2212 has a layered crystalline structure in which current flows only through two-dimensional planes of copper and oxygen atoms. Out of the plane, current can’t penetrate the intervening layers of other atoms, so the copper-oxygen planes must line up if current is to move without resistance from one Bi-2212 particle to the next.

In a coil fabrication process called “wind and react,” the wires are first assembled into flat cables and the cables are wound into coils. The entire coil is then heated to 888 degrees Celsius (888 C) in a pure oxygen environment. During the “partial melt” stage of the reaction, the temperature of the coil has to be controlled to within a single degree. It’s held at 888 C for one hour and then slowly cooled.

Silver is the only practical matrix material that allows the wires to “breathe” oxygen during the reaction and align their Bi-2212 grains. Unfortunately 888 C is near the melting point of silver, and during the process the silver may become too soft to resist high stress, which will come from the high magnetic fields themselves: the tremendous forces they generate will do their best to blow the coils apart. So far, attempts to process coils have often resulted in damage to the wires, with resultant Bi-2212 current leakage, local hot spots, and other problems. Dietderich says:

The goal of the program to develop Bi-2212 for high-field magnets is to improve the entire suite of wire, cable, coil-making, and magnet-construction technologies. Magnet technologies are getting close, but the wires are still a challenge. For example, we need to improve current density by a factor of three or four.

Once the processing steps have been optimized, the results will have to be tested under the most extreme conditions. Instead of trying to predict coil performance from testing a few strands of wire and extrapolating the results, we need to test the whole cable at operating field strengths. To do this we employ subscale technology: what we can learn from testing a one-third scale structure is reliable at full scale as well.

Testing the results

[Image Details: The LD1 test magnet design in cross section. The 100 by 150 millimeter rectangular aperture, center, is enclosed by the coils, then by iron pressure pads, and then by the iron yoke segments. The outer diameter of the magnet is 1.36 meters.]

Enter the second part of ARRA’s support for future magnets, directed at the Large Dipole Testing Facility. 

“The key element is a test magnet with a large bore, 100 millimeters high by 150 millimeters wide – enough to insert pieces of cable and even miniature coils, so that we can test wires and components without having to build an entire magnet every time,” says AFRD’s Paolo Ferracin, who heads the design of the Large Dipole test magnet.

Called LD1, the test magnet will be based on niobium-tin technology and will exert a field of up to 15 T across the height of the aperture. Inside the aperture, two cable samples will be arranged back to back, running current in opposite directions to minimize the forces generated by interaction between the sample and the external field applied by LD1.

The magnet itself will be about two meters long, mounted vertically in a cryostat underground. LD1’s coils will be cooled to 4.5 K, but a separate cryostat in the bore will allow samples to be tested at temperatures of 10 to 20 K.

“There are two aspects to the design of LD1,” says Ferracin. “The magnetic design deals with how to put the conductors around the aperture to get the field you want. Then you need a support structure to deal with the tremendous forces you create, which is a matter of mechanical design.” LD1 will generate horizontal forces equivalent to the weight of 10 fully loaded 747s; imagine hanging them all from a two-meter beam and requiring that the beam not move more than a tenth of a millimeter.
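
As a rough sanity check of my own (assuming a fully loaded 747 masses about 4 × 10^5 kg, close to its maximum takeoff weight), the quoted load works out to roughly 40 meganewtons:

```python
# Back-of-envelope check of the "ten fully loaded 747s" figure for LD1.
# The 747 mass below is an assumption (~max takeoff weight of a 747-400).
g = 9.81           # m/s^2
m_747 = 4.0e5      # kg, assumed fully loaded mass
force = 10 * m_747 * g
print(f"equivalent horizontal force: {force:.2e} N (~{force / 1e6:.0f} MN)")
# All of this must be reacted by a ~2 m structure deflecting no more than ~0.1 mm.
```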

[Image Details: At top, two superconducting coils enclose a beam pipe. Field strength is indicated by color, with greatest strength in deep red. To test components of such an arrangement, subscale coils (bottom) will be assessed, starting with only half a dozen cable winds generating a modest two or three tesla, increasing to hybrid assemblies capable of generating up to 10 T.]

“Since one of the most important aspects of cables and model coils is their behavior under stress, we need to add mechanical pressure up to 200 megapascals” – 30,000 pounds per square inch. “We have developed clamping structures that can provide the required force, but devising a mechanism that can apply the pressure during a test will be another major challenge.”

The cable samples and miniature coils will incorporate built-in voltage taps, strain gauges, and thermocouples so their behavior can be checked under a range of conditions, including quenches – sudden losses of superconductivity and the resultant rapid heating, as dense electric currents are dumped into conventional conductors like aluminum or copper. The design of the LD1 is based on Berkeley Lab’s prior success building high-field dipole magnets, which hold the world’s record for high-energy physics uses. The new test facility will allow testing the advanced designs for conductors and magnets needed for future accelerators like the High-Energy LHC and the proposed Muon Collider.

These magnets are being developed to make the highest-energy colliders possible. But as we have seen in the past, the new technology will benefit many other fields as well, from undulators for next-generation light sources to more compact medical devices. ARRA’s support for LD1 is an investment in the nation’s science and energy future.

[Source: Berkeley Lab]

Interstellar Transportation: How? [Part-I]

[Image Details: This image illustrates Robert L. Forward's sch...]

Interstellar travel seems impossible within our lifetime, yet there are various technological approaches that could make it happen before our eyes. Today I stumbled across an excellent paper which explains the intriguing plans in a concise way.

By Dana G. Andrews

Interstellar travel is difficult, but not impossible. The technology to launch slow interstellar exploration missions, with total delta velocities (ΔVs) of a few hundred kilometers per second, has been demonstrated in laboratories. However, slow interstellar probes will probably never be launched, because no current organization would ever start a project which has no return for thousands of years; especially if it can wait a few dozen years for improved technology and get the results quicker. One answer to the famous Fermi paradox is that no civilization ever launches colony ships because the colonists are always waiting for faster transportation!

Therefore, the first criterion for a successful interstellar mission is that it must return results within the lifetime of the principal investigator, or of the average colonist. This is very difficult, but still possible. To obtain results this quickly, the probe must be accelerated to a significant fraction of the speed of light, with resultant kinetic energies of the order of 4 × 10^15 joules per kilogram. Not surprisingly, the second criterion for a successful interstellar mission is cost-effective energy generation and an efficient means of converting raw energy into directed momentum. In this paper, several candidate propulsion systems theoretically capable of delivering probes to nearby star systems twenty-five to thirty-five years after launch are defined and sized for prospective missions using both current and near-term technologies. Rockets have limited ΔV capability because they must carry their entire source of energy and propellant; therefore, they are not probable candidates for interstellar travel. Now, one might ask: why not use antimatter rockets? At present they cannot be used, because we have no practical mechanism for confining the antimatter.
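
As a quick check on that energy figure (my own arithmetic, not from the paper), the relativistic kinetic energy per unit mass is (γ - 1)c², which reaches about 4 × 10^15 J/kg near 0.3 c:

```python
# Relativistic kinetic energy per kilogram, (gamma - 1) * c^2, at a few speeds.
import math

c = 2.998e8  # m/s

def ke_per_kg(beta):
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return (gamma - 1.0) * c**2

for beta in (0.1, 0.2, 0.3):
    print(f"{beta:.1f} c : {ke_per_kg(beta):.2e} J/kg")
# ~0.3 c gives roughly 4e15 J/kg, matching the figure quoted above.
```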

Light Sails

Laser-driven lightsails are not rockets, since the power source remains behind and no propellant is expended. Therefore the rocket equation doesn’t apply, and extremely high ΔVs are possible if adequate laser power can be focused on the lightsail for a sufficient acceleration period. The acceleration, Asc, of a laser-propelled lightsail spacecraft in meters per second squared is:
Asc = 2 PL / (Msc c)
where PL is the laser power impinging on the sail in watts, Msc is the mass of the spacecraft (sail and payload) in kilograms, and c is the speed of light in meters per second. In practical units, a perfectly reflecting laser lightsail will experience a force of 6.7 newtons for every gigawatt of incident laser power. Herein lies the problem, since extremely high power levels are required to accelerate even small probes at a few gravities. The late Dr. Robert Forward, in his papers on interstellar lightsail missions, postulated a 7,200-gigawatt laser to accelerate his 785-ton unmanned probe and a 75,000,000-gigawatt laser to accelerate his 78,500-ton manned vehicle. To achieve velocities of 0.21 c and 0.5 c, respectively, the laser beam must be focused on the sail for literally years at distances out to a couple of light years. In addition, the laser beam was to be used to decelerate the payload at the target star by staging the lightsail and using the outer annular portion as a mirror to reflect and direct most of the laser beam back onto the central portion of the lightsail, which does the decelerating. To enable this optical performance, a one-thousand-kilometer-diameter Fresnel lens would be placed fifteen Astronomical Units (AU) beyond the laser, and its position relative to the stabilized laser beam axis maintained to within a meter. If the laser beam axis is not stable over hours relative to the fixed background stars (drift < 10^-12 radians), or if the lens is not maintained within a fraction of a meter of the laser axis, the beam at the spacecraft will wander across the sail fast enough to destabilize the system. While this scenario is not physically impossible, it appears difficult enough to delay any serious consideration of the large-lens/long-focus approach to laser-propelled lightsails. The alternative approach is to build really large solar-pumped or electrically powered lasers in the million-gigawatt range, with which we could accelerate a decent-sized spacecraft to thirty percent of the speed of light within a fraction of a light year using more achievable optics (e.g., a reflector 50 kilometers in diameter). Even though space construction projects of this magnitude must be termed highly speculative, the technology required is well understood, and LPL systems utilizing dielectric quarter-wave lightsails could accelerate at twenty to thirty meters per second squared or more.
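
Plugging the numbers quoted above into the acceleration relation gives a feel for the scales involved (a small sketch of my own; I read the 785-ton figure as metric tonnes):

```python
# Acceleration of a perfectly reflecting lightsail, A = 2 P / (M c), and the
# force per gigawatt of incident laser power.
c = 2.998e8  # speed of light, m/s

def sail_acceleration(power_w, mass_kg):
    """Acceleration of a perfectly reflecting sail under power_w watts."""
    return 2.0 * power_w / (mass_kg * c)

print("force per gigawatt:", 2.0 * 1e9 / c, "N")            # ~6.7 N/GW, as stated
# Forward's unmanned rendezvous probe quoted above: 7,200 GW on 785 tonnes.
print("probe acceleration:", sail_acceleration(7.2e12, 785e3), "m/s^2")
```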

Laser Propelled Light Sail

The Magsail current loop carries no current during the laser boost and is just a rotating coil of superconducting cable acting as ballast to balance the thrust forces on the dielectric quarter-wave reflector. After the coast phase, when the spacecraft approaches the target star system, the lightsail is jettisoned and the Magsail is allowed to uncoil to its full diameter (80 km for a 2,000 kg probe mission). It is then energized either from an onboard reactor or from laser-illuminated photovoltaic panels and begins its long deceleration. Example interstellar missions have been simulated using state-of-the-art optics designs, and the resulting LPL design characteristics are shown in the table below.
A constant beam power is chosen such that the spacecraft reaches the desired velocity just at the limit of acceleration with fifty-kilometer-diameter optics. Even though the high-powered LPL appears to meet all mission requirements, this paper explores alternative propulsion systems with potential for significant reductions in power, size, cost, and complexity.

These data show that interstellar exploration is feasible, even with near-term technologies, if the right system is selected and enough resources are available. Therefore, once the technology for low-cost access to space is available, the primary risk to any organization embarking on a serious effort to develop interstellar exploration/transportation is affordability, not technical feasibility. The primary issue with respect to any of these systems actually being built is cost, both development cost and operating cost in the price of energy. As for manned exploration, one appealing idea is to ride on asteroids: colonize them and set out for the interstellar mission from there!

Hyperluminal Travel Without Exotic Matter

Listen up, terrestrial intelligent species: WeirdSciences is going to delve into a new idea for making interstellar travel feasible, and this time no negative energy is needed to propel the spacecraft. Not that using negative energy to build a warp drive is such a bad idea, but you need to refresh your mind.

By Eric Baird

Alcubierre’s 1994 paper on hyperfast travel has generated fresh interest in the subject of warp drives but work on the subject of hyper-fast travel is often hampered by confusion over definitions — how do we define times and speeds over extended regions of spacetime where the geometry is not Euclidean? Faced with this problem it may seem natural to define a spaceship’s travel times according to round-trip observations made from the traveller’s point of origin, but this “round-trip” approach normally requires signals to travel in two opposing spatial directions through the same metric, and only gives us an unambiguous reading of apparent shortened journey-times if the signal speeds are enhanced in both directions along the signal path, a condition that seems to require a negative energy density in the region. Since hyper-fast travel only requires that the speed of light-signals be enhanced in the actual direction of travel, we argue that the precondition of bidirectionality (inherited from special relativity, and the apparent source of the negative energy requirement), is unnecessary, and perhaps misleading.

When considering warp-drive problems, it is useful to remind ourselves of what it is that we are trying to accomplish. To achieve hyper-fast package delivery between two physical markers, A (the point of origin)  and B (the destination), we require that a package moved from A to B:

a) . . . leaves A at an agreed time according to clocks at A,
b) . . . arrives at B as early as possible according to clocks at B, and, ideally,
c) . . . measures its own journey time to be as short as possible.

From a purely practical standpoint as “Superluminal Couriers Inc.”, we do not care how long the arrival event takes to be seen back at A, nor do we care whether the clocks at A and B appear to be properly synchronised during the delivery process. Our only task is to take a payload from A at a specified “A time” and deliver it to B at the earliest possible “B time”, preferably without the package ageing excessively en route. If we can collect the necessary local time-stamps on our delivery docket at the various stages of the journey, we have achieved our objective and can expect payment from our customer.

Existing approaches tend to add a fourth condition:

d) . . . that the arrival-event at B is seen to occur as soon as possible by an observer back at A.

This last condition is much more difficult to meet, but is arguably more important to our ability to define distant time-intervals than to the actual physical delivery process itself. It does not dictate which events may be intersected by the worldline of the travelling object, but can affect the coordinate labels that we choose to assign to those events using special relativity.

  • Who Asked Your Opinion?

If we introduce an appropriate anisotropy in the speed of light to the region occupied by our delivery path, a package can travel to its destination along the path faster than “nominal background lightspeed” without exceeding the local speed of light along the path. This allows us to meet conditions (a) and (b), but the same anisotropy causes an increased transit time for signals returning from B to A, so the “fast” outward journey can appear to take longer when viewed from A.

This behaviour can be illustrated by the extreme example of a package being delivered to the interior of a black hole from a “normal” region of spacetime. When an object falls through a gravitational event horizon, current theory allows its supposed inward velocity to exceed the nominal speed of light in the external environment, and to actually tend towards v_inwards = ∞ as the object approaches a black hole’s central singularity. But the exterior observer, A, could argue that the delivery is not only proceeding more slowly than usual, but that the apparent delivery time is actually infinite, since the package is never actually seen (by A) to pass through the horizon.

Should A’s low perception of the speed of the infalling object indicate that hyperfast travel has not been achieved? In the author’s opinion, it should not — if the package has successfully been collected from A and delivered to B with the appropriate local timestamps indicating hyperfast travel, then A’s subsequent opinion on how long the delivery is seen to take (an observation affected by the properties of light in a direction other than that of the travelling package) would seem to be of secondary importance. In our “black hole” example, exotic matter or negative energy densities are not required unless we demand that an external observer should be able to see the package proceeding superluminally, in which case a negative gravitational effect would allow signals to pass back outwards through the r=2M surface to the observer at the despatch-point (without this return path, special relativity will tend to define the time of the unseen delivery event as being more-than-infinitely far into A’s future).

Negative energy density is required here only for appearances sake (and to make it easier for us to define the range of possible arrival-events that would imply that hyperfast travel has occurred), not for physical package delivery.

  • Hyperfast Return Journeys and Verification

It is all very well to be able to reach our destination in a small local time period, and to claim that we have travelled there at hyperfast speeds, but how do we convince others that our own short transit-time measurements are not simply due to time-dilation effects or to an “incorrect” concept of simultaneity? To convince observers at our destination, we only have to ask that they study their records for the observed behaviour of our point of origin — if the warpfield is applied to the entire journey-path (“Krasnikov tube configuration”), then the introduction and removal of the field will be associated with an increase and decrease in the rate at which signals from A arrive at B along the path (and will force special relativity to redefine the supposed simultaneity of events at B and A). If the warpfield only applies in the vicinity of the travelling package, other odd effects will be seen when the leading edge of the warpfield reaches the observer at B (the logical problems associated with the conflicting “lightspeeds” at the leading edge of a travelling warpfield wavefront have been highlighted by Low, and will be discussed in a further paper). Our initial definitions of the distances involved should of course be based on measurements taken outside the region of spacetime occupied by the warpfield.

A more convincing way of demonstrating hyper-fast travel would be to send a package from A to B and back again in a shorter period of “A-time” than would normally be required for a round-trip light-beam. We must be careful here not to let our initial mathematical definitions get in the way of our task — although we have supposed that the speed of light back towards A was slower while our warpdrive was operating on the outward journey, this artificially-reduced return speed does not have to also apply during our subsequent return trip, since we have the option of simply switching the warpdrive off, or better still, reversing its polarity for the journey home.

Although a single path allowing enhanced signal speeds in both directions at the same time would seem to require a negative energy-density, this feature is not necessary for a hyper-fast round trip — the outward and return paths can be separated in time (with the region having different gravitational properties during the outward and return trips) or in space (with different routes being taken for the outward and return journeys).

  • Caveats and Qualifications

Special relativity is designed around the assumption of Euclidean space and the stipulation that lightspeed is assumed to be isotropic, and neither of these assumptions is reliable for regions of spacetime that contain gravitational fields.

If we have a genuine lightspeed anisotropy that allows an object to move hyper-quickly between A and B, special relativity can respond by using the round-trip characteristics of light along the transit path to redefine the simultaneity of events at both locations, so that the “early” arrival event at B is redefined far enough into A’s future to guarantee a description in which the object is moving at less than c_background.
This retrospective redefinition of times easily leads to definitional inconsistencies in warpdrive problems. If a package is sent from A to B and back to A again, and each journey is “ultrafast” thanks to a convenient gravitational gradient for each trip, one could invoke special relativity to declare that each individual trip has a speed less than c_background, and then take the ultrafast arrival time of the package back at A as evidence that some form of reverse time travel has occurred, when in fact the apparent negative time component is an artifact of our repeated redefinition of the simultaneity of worldlines at A and B. Since it has been known for some time that similar definitional breakdowns in distant simultaneity can occur when an observer simply alters speed (the “one gee times one lightyear” limit quoted in MTW), these breakdowns should not be taken too seriously when they reappear in more complex “warpdrive” problems.

Olum’s suggested method for defining simultaneity and hyperfast travel (calibration via signals sent through neighbouring regions of effectively-flat spacetime) is not easily applied to our earlier black hole example, because of the lack of a reference-path that bypasses the gravitational gradient (unless we take a reference-path previous to the formation of the black hole), but warpdrive scenarios tend instead to involve higher-order gravitational effects (e.g. gradients caused by so-called “non-Newtonian” forces), and in these situations the concept of “relative height” in a gravitational field is often route-dependent (the concept “downhill” becomes a “local” rather than a “global” property, and gravitational rankings become intransitive). For this class of problem, Olum’s approach would seem to be the preferred method.

  • What’s the conclusion?

In order to be able to cross interstellar distances at enhanced speeds, we only require that the speed of light is greater in the direction in which we want to travel, in the region that we are travelling through, at the particular time that we are travelling through it. Although negative energy-densities would seem to be needed to increase the speed of light in both directions along the same path at the same time, this additional condition is only required for any hyperfast travel to be “obvious” to an observer at the origin point, which is a stronger condition than merely requiring that packages be delivered arbitrarily quickly. Hyperfast return journeys would also seem to be legal (along a pair of spatially separated or time-separated paths), as long as the associated energy-requirement is “paid for” somehow. Breakdowns in transitive logic and in the definitions used by special relativity already occur with some existing “legal” gravitational situations, and their reappearance in warpdrive problems is not in itself proof that these problems are paradoxical.

Arguments against negative energy-densities do not rule out paths that allow gravity-assisted travel at speeds greater than c_background, provided that we are careful not to apply the conventions of special relativity inappropriately. Such paths do occur readily under general relativity, although it has to be admitted that some of the more extreme examples have a tendency to lead to unpleasant regions (such as the interiors of black holes) that one would not normally want to visit.

[Ref: Miguel Alcubierre, “The warp drive: hyper-fast travel within general relativity,”  Class. Quantum Grav. 11 L73-L77 (1994), Michael Spzir, “Spacetime hypersurfing,”  American Scientist 82 422-423 (Sept/Oct 1994), Robert L. Forward, “Guidelines to Antigravity,”  American Journal of Physics 31 (3) 166-170 (1963). ]

Negative Energy And Interstellar Travel

Can a region of space contain less than nothing? Common sense would say no; the most one could do is remove all matter and radiation and be left with vacuum. But quantum physics has a proven ability to confound intuition, and this case is no exception. A region of space, it turns out, can contain less than nothing. Its energy per unit volume–the energy density–can be less than zero.

Needless to say, the implications are bizarre. According to Einstein’s theory of gravity, general relativity, the presence of matter and energy warps the geometric fabric of space and time. What we perceive as gravity is the space-time distortion produced by normal, positive energy or mass. But when negative energy or mass–so-called exotic matter–bends space-time, all sorts of amazing phenomena might become possible: traversable wormholes, which could act as tunnels to otherwise distant parts of the universe; warp drive, which would allow for faster-than-light travel; and time machines, which might permit journeys into the past. Negative energy could even be used to make perpetual-motion machines or to destroy black holes. A Star Trek episode could not ask for more.

For physicists, these ramifications set off alarm bells. The potential paradoxes of backward time travel–such as killing your grandfather before your father is conceived–have long been explored in science fiction, and the other consequences of exotic matter are also problematic. They raise a question of fundamental importance: Do the laws of physics that permit negative energy place any limits on its behavior?

We and others have discovered that nature imposes stringent constraints on the magnitude and duration of negative energy, which (unfortunately, some would say) appear to render the construction of wormholes and warp drives very unlikely.

Double Negative

Before proceeding further, we should draw the reader’s attention to what negative energy is not.

It should not be confused with antimatter, which has positive energy. When an electron and its antiparticle, a positron, collide, they annihilate. The end products are gamma rays, which carry positive energy. If antiparticles were composed of negative energy, such an interaction would result in a final energy of zero.

One should also not confuse negative energy with the energy associated with the cosmological constant, postulated in inflationary models of the universe. Such a constant represents negative pressure but positive energy.

The concept of negative energy is not pure fantasy; some of its effects have even been produced in the laboratory. They arise from Heisenberg’s uncertainty principle, which requires that the energy density of any electric, magnetic or other field fluctuate randomly. Even when the energy density is zero on average, as in a vacuum, it fluctuates. Thus, the quantum vacuum can never remain empty in the classical sense of the term; it is a roiling sea of “virtual” particles spontaneously popping in and out of existence [see “Exploiting Zero-Point Energy,” by Philip Yam; SCIENTIFIC AMERICAN, December 1997]. In quantum theory, the usual notion of zero energy corresponds to the vacuum with all these fluctuations.

So if one can somehow contrive to dampen the undulations, the vacuum will have less energy than it normally does – that is, less than zero energy. [See: Casimir Starcraft: Zero Point Energy]

  • Negative Energy

Space-time distortion is a commonly proposed method for hyperluminal travel. Such space-time contortions would enable another staple of science fiction as well: faster-than-light travel. Warp drive might appear to violate Einstein’s special theory of relativity. But special relativity says that you cannot outrun a light signal in a fair race in which you and the signal follow the same route. When space-time is warped, it might be possible to beat a light signal by taking a different route, a shortcut. The contraction of space-time in front of the bubble and the expansion behind it create such a shortcut.

One problem with Alcubierre’s original model is that the interior of the warp bubble is causally disconnected from its forward edge. A starship captain on the inside cannot steer the bubble or turn it on or off; some external agency must set it up ahead of time. To get around this problem, Krasnikov proposed a “superluminal subway,” a tube of modified space-time (not the same as a wormhole) connecting Earth and a distant star. Within the tube, superluminal travel in one direction is possible. During the outbound journey at sublight speed, a spaceship crew would create such a tube. On the return journey, they could travel through it at warp speed. Like warp bubbles, the subway involves negative energy.

Negative energy is so strange that one might think it must violate some law of physics.

Before and after the creation of equal amounts of negative and positive energy in previously empty space, the total energy is zero, so the law of conservation of energy is obeyed. But there are many phenomena that conserve energy yet never occur in the real world. A broken glass does not reassemble itself, and heat does not spontaneously flow from a colder to a hotter body. Such effects are forbidden by the second law of thermodynamics.

This general principle states that the degree of disorder of a system–its entropy–cannot decrease on its own without an input of energy. Thus, a refrigerator, which pumps heat from its cold interior to the warmer outside room, requires an external power source. Similarly, the second law also forbids the complete conversion of heat into work.

Negative energy potentially conflicts with the second law. Imagine an exotic laser, which creates a steady outgoing beam of negative energy. Conservation of energy requires that a byproduct be a steady stream of positive energy. One could direct the negative energy beam off to some distant corner of the universe, while employing the positive energy to perform useful work. This seemingly inexhaustible energy supply could be used to make a perpetual-motion machine and thereby violate the second law. If the beam were directed at a glass of water, it could cool the water while using the extracted positive energy to power a small motor–providing a refrigerator with no need for external power. These problems arise not from the existence of negative energy per se but from the unrestricted separation of negative and positive energy.

Unfettered negative energy would also have profound consequences for black holes. When a black hole forms by the collapse of a dying star, general relativity predicts the formation of a singularity, a region where the gravitational field becomes infinitely strong. At this point, general relativity–and indeed all known laws of physics–are unable to say what happens next. This inability is a profound failure of the current mathematical description of nature. So long as the singularity is hidden within an event horizon, however, the damage is limited. The description of nature everywhere outside of the horizon is unaffected. For this reason, Roger Penrose of Oxford proposed the cosmic censorship hypothesis: there can be no naked singularities, which are unshielded by event horizons.

For special types of charged or rotating black holes– known as extreme black holes–even a small increase in charge or spin, or a decrease in mass, could in principle destroy the horizon and convert the hole into a naked singularity. Attempts to charge up or spin up these black holes using ordinary matter seem to fail for a variety of reasons. One might instead envision producing a decrease in mass by shining a beam of negative energy down the hole, without altering its charge or spin, thus subverting cosmic censorship. One might create such a beam, for example, using a moving mirror. In principle, it would require only a tiny amount of negative energy to produce a dramatic change in the state of an extreme black hole.

[Image Details: Pulses of negative energy are permitted by quantum theory but only under three conditions. First, the longer the pulse lasts, the weaker it must be (a, b). Second, a pulse of positive energy must follow. The magnitude of the positive pulse must exceed that of the initial negative one. Third, the longer the time interval between the two pulses, the larger the positive one must be – an effect known as quantum interest (c).]

Therefore, this might be the scenario in which negative energy is the most likely to produce macroscopic effects.

Fortunately (or not, depending on your point of view), although quantum theory allows the existence of negative energy, it also appears to place strong restrictions – known as quantum inequalities – on its magnitude and duration. The inequalities bear some resemblance to the uncertainty principle. They say that a beam of negative energy cannot be arbitrarily intense for an arbitrarily long time. The permissible magnitude of the negative energy is inversely related to its temporal or spatial extent. An intense pulse of negative energy can last for a short time; a weak pulse can last longer. Furthermore, an initial negative energy pulse must be followed by a larger pulse of positive energy. The larger the magnitude of the negative energy, the nearer must be its positive energy counterpart. These restrictions are independent of the details of how the negative energy is produced. One can think of negative energy as an energy loan. Just as a debt is negative money that has to be repaid, negative energy is an energy deficit.

In the Casimir effect, the negative energy density between the plates can persist indefinitely, but large negative energy densities require a very small plate separation. The magnitude of the negative energy density is inversely proportional to the fourth power of the plate separation. Just as a pulse with a very negative energy density is limited in time, very negative Casimir energy density must be confined between closely spaced plates. According to the quantum inequalities, the energy density in the gap can be made more negative than the Casimir value, but only temporarily. In effect, the more one tries to depress the energy density below the Casimir value, the shorter the time over which this situation can be maintained.
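
For concreteness, the ideal-plate Casimir energy density follows the standard formula ρ = -π²ħc/(720 d⁴); a tiny sketch of my own shows how steeply it grows as the gap closes:

```python
# Vacuum energy density between ideal parallel plates: rho = -pi^2 hbar c / (720 d^4).
import math

hbar = 1.0546e-34   # J s
c = 2.998e8         # m/s

def casimir_energy_density(d):
    """Negative energy density [J/m^3] for plate separation d [m]."""
    return -math.pi**2 * hbar * c / (720.0 * d**4)

for d in (1e-6, 1e-7, 1e-8):        # 1 micron down to 10 nanometres
    print(f"d = {d:.0e} m  ->  rho = {casimir_energy_density(d):.3e} J/m^3")
```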

When applied to wormholes and warp drives, the quantum inequalities typically imply that such structures must either be limited to submicroscopic sizes, or if they are macroscopic the negative energy must be confined to incredibly thin bands. In 1996 we showed that a submicroscopic wormhole would have a throat radius of no more than about 10^-32 meter. This is only slightly larger than the Planck length, 10^-35 meter, the smallest distance that has definite meaning. We found that it is possible to have models of wormholes of macroscopic size but only at the price of confining the negative energy to an extremely thin band around the throat. For example, in one model a throat radius of 1 meter requires the negative energy to be a band no thicker than 10^-21 meter, a millionth the size of a proton.

It is estimated that the negative energy required for this size of wormhole has a magnitude equivalent to the total energy generated by 10 billion stars in one year. The situation does not improve much for larger wormholes. For the same model, the maximum allowed thickness of the negative energy band is proportional to the cube root of the throat radius. Even if the throat radius is increased to a size of one light-year, the negative energy must still be confined to a region smaller than a proton radius, and the total amount required increases linearly with the throat size.
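
A quick scaling check of that cube-root law (my own arithmetic, taking the 1-meter-throat value of about 10^-21 m as the reference point) confirms the proton-radius comparison:

```python
# Scaling of the maximum negative-energy band thickness with throat radius.
LIGHT_YEAR = 9.46e15        # metres
PROTON_RADIUS = 0.84e-15    # metres, approximate proton charge radius

def band_thickness(throat_radius_m):
    """Band thickness, proportional to the cube root of the throat radius,
    normalised to ~1e-21 m for a 1 m throat (the model quoted above)."""
    return 1e-21 * throat_radius_m ** (1.0 / 3.0)

t = band_thickness(LIGHT_YEAR)
print(f"band thickness for a 1 light-year throat: {t:.1e} m")
print("smaller than a proton radius:", t < PROTON_RADIUS)   # True, as stated
```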

It seems that wormhole engineers face daunting problems. They must find a mechanism for confining large amounts of negative energy to extremely thin volumes. So-called cosmic strings, hypothesized in some cosmological theories, involve very large energy densities in long, narrow lines. But all known physically reasonable cosmic-string models have positive energy densities.

Warp drives are even more tightly constrained, as work carried out with our collaborators has shown. In Alcubierre’s model, a warp bubble traveling at 10 times lightspeed (warp factor 2, in the parlance of Star Trek: The Next Generation) must have a wall thickness of no more than 10⁻³² meter. A bubble large enough to enclose a starship 200 meters across would require a total amount of negative energy equal to 10 billion times the mass of the observable universe. Similar constraints apply to Krasnikov’s superluminal subway.

A modification of Alcubierre’s model was recently constructed by Chris Van Den Broeck of the Catholic University of Louvain in Belgium. It requires much less negative energy but places the starship in a curved space-time bottle whose neck is about 10⁻³² meter across, a difficult feat. These results would seem to make it rather unlikely that one could construct wormholes and warp drives using negative energy generated by quantum effects.

The quantum inequalities prevent violations of the second law. If one tries to use a pulse of negative energy to cool a hot object, it will be quickly followed by a larger pulse of positive energy, which reheats the object. A weak pulse of negative energy could remain separated from its positive counterpart for a longer time, but its effects would be indistinguishable from normal thermal fluctuations. Attempts to capture or split off negative energy from positive energy also appear to fail. One might intercept an energy beam, say, by using a box with a shutter. By closing the shutter, one might hope to trap a pulse of negative energy before the offsetting positive energy arrives. But the very act of closing the shutter creates an energy flux that cancels out the negative energy it was designed to trap.

A pulse of negative energy injected into a charged black hole might momentarily destroy the horizon, exposing the singularity within. But the pulse must be followed by a pulse of positive energy, which would convert the naked singularity back into a black hole – a scenario we have dubbed cosmic flashing. The best chance to observe cosmic flashing would be to maximize the time separation between the negative and positive energy, allowing the naked singularity to last as long as possible. But then the magnitude of the negative energy pulse would have to be very small, according to the quantum inequalities. The change in the mass of the black hole caused by the negative energy pulse will get washed out by the normal quantum fluctuations in the hole’s mass, which are a natural consequence of the uncertainty principle. The view of the naked singularity would thus be blurred, so a distant observer could not unambiguously verify that cosmic censorship had been violated.

Recently it was shown that the quantum inequalities lead to even stronger bounds on negative energy. The positive pulse that necessarily follows an initial negative pulse must do more than compensate for the negative pulse; it must overcompensate. The amount of overcompensation increases with the time interval between the pulses. Therefore, the negative and positive pulses can never be made to exactly cancel each other. The positive energy must always dominate – an effect known as quantum interest. If negative energy is thought of as an energy loan, the loan must be repaid with interest. The longer the loan period or the larger the loan amount, the greater is the interest. Furthermore, the larger the loan, the smaller is the maximum allowed loan period. Nature is a shrewd banker and always calls in its debts.

The concept of negative energy touches on many areas of physics: gravitation, quantum theory, thermodynamics. The interweaving of so many different parts of physics illustrates the tight logical structure of the laws of nature. On the one hand, negative energy seems to be required to reconcile black holes with thermodynamics. On the other, quantum physics prevents unrestricted production of negative energy, which would violate the second law of thermodynamics. Whether these restrictions are also features of some deeper underlying theory, such as quantum gravity, remains to be seen. Nature no doubt has more surprises in store.

Key To Space Time Engineering: Huge Magnetic Field Created

Sustained laboratory magnetic fields have never come anywhere near 300 tesla, yet scientists have now made electrons in graphene behave as if they were sitting in a field of that strength. Fields of this magnitude could, in principle, be relevant to space-time engineering; the effect is nowhere near the scale we would need, but it may provide a key to future space-time engineering.

Graphene, the extraordinary form of carbon that consists of a single layer of carbon atoms, has produced another in a long list of experimental surprises. In the current issue of the journal Science, a multi-institutional team of researchers headed by Michael Crommie, a faculty senior scientist in the Materials Sciences Division at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory and a professor of physics at the University of California at Berkeley, reports the creation of pseudo-magnetic fields far stronger than the strongest magnetic fields ever sustained in a laboratory – just by putting the right kind of strain onto a patch of graphene.

“We have shown experimentally that when graphene is stretched to form nanobubbles on a platinum substrate, electrons behave as if they were subject to magnetic fields in excess of 300 tesla, even though no magnetic field has actually been applied,” says Crommie. “This is a completely new physical effect that has no counterpart in any other condensed matter system.”

Crommie notes that “for over 100 years people have been sticking materials into magnetic fields to see how the electrons behave, but it’s impossible to sustain tremendously strong magnetic fields in a laboratory setting.” The current record is 85 tesla for a field that lasts only thousandths of a second. When stronger fields are created, the magnets blow themselves apart.

The ability to make electrons behave as if they were in magnetic fields of 300 tesla or more – just by stretching graphene – offers a new window on a source of important applications and fundamental scientific discoveries going back over a century. This is made possible by graphene’s electronic behavior, which is unlike any other material’s.

[Image Details: In this scanning tunneling microscopy image of a graphene nanobubble, the hexagonal two-dimensional graphene crystal is seen distorted and stretched along three main axes. The strain creates pseudo-magnetic fields far stronger than any magnetic field ever produced in the laboratory. ]

A carbon atom has four valence electrons; in graphene (and in graphite, a stack of graphene layers), three electrons bond in a plane with their neighbors to form a strong hexagonal pattern, like chicken-wire. The fourth electron sticks up out of the plane and is free to hop from one atom to the next. These pi-bond electrons act as if they have no mass at all, like photons, and they move at roughly a million meters per second, about 1/300 the speed of light.
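
A tiny numerical illustration of that "massless" behaviour, assuming a Fermi velocity of about 10⁶ m/s: for graphene's linear, photon-like dispersion the electron speed is the same at every wavevector, whereas in an ordinary parabolic band the speed grows with momentum. The wavevectors below are arbitrary example values.

import numpy as np

hbar = 1.054571817e-34   # J*s
v_F = 1.0e6              # m/s, approximate graphene Fermi velocity
m_e = 9.109e-31          # kg, free-electron mass (for comparison)

k = np.linspace(1e8, 1e9, 5)   # example wavevectors, 1/m

# Linear (photon-like) dispersion of graphene's pi electrons: E = hbar * v_F * k.
# The group velocity (1/hbar) dE/dk equals v_F for every k; it never depends on energy.
v_graphene = np.full_like(k, v_F)

# Ordinary parabolic band for comparison: E = (hbar*k)^2 / (2m), so v = hbar*k/m.
v_parabolic = hbar * k / m_e

for ki, vg, vp in zip(k, v_graphene, v_parabolic):
    print(f"k = {ki:.1e} 1/m: graphene v = {vg:.1e} m/s, parabolic-band v = {vp:.1e} m/s")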

The idea that a deformation of graphene might lead to the appearance of a pseudo-magnetic field first arose even before graphene sheets had been isolated, in the context of carbon nanotubes (which are simply rolled-up graphene). In early 2010, theorist Francisco Guinea of the Institute of Materials Science of Madrid and his colleagues developed these ideas and predicted that if graphene could be stretched along its three main crystallographic directions, it would effectively act as though it were placed in a uniform magnetic field. This is because strain changes the bond lengths between atoms and affects the way electrons move between them. The pseudo-magnetic field would reveal itself through its effects on electron orbits.
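
For readers who want to see how strain translates into an effective field, here is a minimal numerical sketch in Python. It uses one commonly quoted convention for the strain-induced gauge field in graphene (prefactors and signs differ between papers), applies it to a made-up Gaussian bump a few nanometres across as a stand-in for a nanobubble, and differentiates the result to get a pseudo-field. All parameter values and the bump profile are illustrative assumptions, not the model used by the researchers.

import numpy as np

# One commonly used convention (treat as a sketch; conventions vary):
#   A_x ~ +(hbar*beta / (2*e*a)) * (u_xx - u_yy)
#   A_y ~ -(hbar*beta / (2*e*a)) * (2*u_xy)
#   B_pseudo = dA_y/dx - dA_x/dy
hbar = 1.054571817e-34   # J*s
e = 1.602176634e-19      # C
a = 1.42e-10             # carbon-carbon distance, m
beta = 3.0               # dimensionless electron-phonon coupling, roughly 2-3 (assumed)

# Toy out-of-plane bump h(x, y) on a few-nanometre grid, standing in for a nanobubble.
L = 5e-9
x = np.linspace(-L, L, 201)
y = np.linspace(-L, L, 201)
X, Y = np.meshgrid(x, y, indexing="ij")
h = 0.5e-9 * np.exp(-(X**2 + Y**2) / (1.5e-9) ** 2)

# Leading-order membrane strain from the deflection alone: u_ij = 0.5 * dh/di * dh/dj.
hx, hy = np.gradient(h, x, y)
u_xx, u_yy, u_xy = 0.5 * hx**2, 0.5 * hy**2, 0.5 * hx * hy

pref = hbar * beta / (2 * e * a)
A_x = pref * (u_xx - u_yy)
A_y = -pref * (2 * u_xy)
B_pseudo = np.gradient(A_y, x, axis=0) - np.gradient(A_x, y, axis=1)
print(f"peak |B_pseudo| ~ {np.max(np.abs(B_pseudo)):.0f} T for this toy bump")

Even this crude toy, with percent-level strains over a nanometre-scale bump, produces effective fields of hundreds of tesla, which is the basic reason such enormous pseudo-fields are plausible in strained graphene.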

In classical physics, electrons in a magnetic field travel in circles called cyclotron orbits. These were named following Ernest Lawrence’s invention of the cyclotron, because cyclotrons continuously accelerate charged particles (protons, in Lawrence’s case) in a curving path induced by a strong field.

Viewed quantum mechanically, however, cyclotron orbits become quantized and exhibit discrete energy levels. Called Landau levels, these correspond to energies where constructive interference occurs in an orbiting electron’s quantum wave function. The number of electrons occupying each Landau level depends on the strength of the field: the stronger the field, the greater the energy spacing between Landau levels and the more electron states each level can hold. This is a key feature of the predicted pseudo-magnetic fields in graphene.
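
The square-root structure of graphene's Landau levels, E_n = sgn(n)·v_F·√(2eħB|n|), is what makes the spectroscopy peaks so distinctive. A short Python sketch, with an assumed Fermi velocity of about 10⁶ m/s, shows the level energies implied by a 300-tesla (pseudo-)field; unlike a conventional two-dimensional electron gas, the levels are not evenly spaced.

import math

hbar = 1.054571817e-34   # J*s
e = 1.602176634e-19      # C
v_F = 1.0e6              # graphene Fermi velocity, m/s (approximate)

def landau_level_graphene(n, B):
    """Graphene Landau level energy in eV: E_n = sgn(n) * v_F * sqrt(2*e*hbar*B*|n|)."""
    sign = (n > 0) - (n < 0)
    return sign * v_F * math.sqrt(2 * e * hbar * B * abs(n)) / e

# In graphene the levels scale as sqrt(B*n), so the spacing shrinks as n grows,
# and there is a level pinned at zero energy (the Dirac point).
B = 300.0  # tesla, the pseudo-field scale reported for the nanobubbles
for n in range(0, 5):
    print(f"n = {n}: E = {landau_level_graphene(n, B):.3f} eV")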

A serendipitous discovery

[Image Details: A patch of graphene at the surface of a platinum substrate exhibits four triangular nanobubbles at its edges and one in the interior. Scanning tunneling spectroscopy taken at intervals across one nanobubble (inset) shows local electron densities clustering in peaks at discrete Landau-level energies. Pseudo-magnetic fields are strongest at regions of greatest curvature.]

Describing their experimental discovery, Crommie says, “We had the benefit of a remarkable stroke of serendipity.”

Crommie’s research group had been using a scanning tunneling microscope to study graphene monolayers grown on a platinum substrate. A scanning tunneling microscope works by using a sharp needle probe that skims along the surface of a material to measure minute changes in electrical current, revealing the density of electron states at each point in the scan while building an image of the surface.

Crommie was meeting with a visiting theorist from Boston University, Antonio Castro Neto, about a completely different topic when a group member came into his office with the latest data. It showed nanobubbles, little pyramid-like protrusions, in a patch of graphene on the platinum surface, and associated with the nanobubbles were distinct peaks in the density of electron states. Crommie says his visitor, Castro Neto, took one look and said, “That looks like the Landau levels predicted for strained graphene.”

Sure enough, close examination of the triangular bubbles revealed that their chicken-wire lattice had been stretched precisely along the three axes needed to induce the strain orientation that Guinea and his coworkers had predicted would give rise to pseudo-magnetic fields. The greater the curvature of the bubbles, the greater the strain, and the greater the strength of the pseudo-magnetic field. The increased density of electron states revealed by scanning tunneling spectroscopy corresponded to Landau levels, in some cases indicating giant pseudo-magnetic fields of 300 tesla or more.
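
In practice the logic runs in reverse: the measured energy of a Landau peak is used to infer the effective field. Inverting the same relation for the first level gives B = E₁²/(2eħv_F²). The sketch below uses this inversion with a few hypothetical peak energies (not the actual measured values) to show that peaks a few tenths of an eV from the Dirac point already imply fields of hundreds of tesla.

import math

hbar = 1.054571817e-34   # J*s
e = 1.602176634e-19      # C
v_F = 1.0e6              # m/s, approximate graphene Fermi velocity

def pseudo_field_from_first_peak(E1_eV):
    """Effective field (tesla) implied by a first Landau peak at E1 (eV above the Dirac point)."""
    E1 = E1_eV * e
    return E1**2 / (2 * e * hbar * v_F**2)

# Hypothetical first-peak energies, purely for illustration.
for E1 in (0.2, 0.4, 0.6):
    print(f"E1 = {E1:.1f} eV  ->  B ~ {pseudo_field_from_first_peak(E1):.0f} T")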

“Getting the right strain resulted from a combination of factors,” Crommie says. “To grow graphene on the platinum we had exposed the platinum to ethylene” – a simple compound of carbon and hydrogen – “and at high temperature the carbon atoms formed a sheet of graphene whose orientation was determined by the platinum’s lattice structure.”

To get the highest resolution from the scanning tunneling microscope, the system was then cooled to a few degrees above absolute zero. Both the graphene and the platinum contracted – but the platinum shrank more, with the result that excess graphene pushed up into bubbles, measuring four to 10 nanometers (billionths of a meter) across and from a third to more than two nanometers high.

To confirm that the experimental observations were consistent with theoretical predictions, Castro Neto worked with Guinea to model a nanobubble typical of those found by the Crommie group. The resulting theoretical picture was a near-match to what the experimenters had observed: a strain-induced pseudo-magnetic field some 200 to 400 tesla strong in the regions of greatest strain, for nanobubbles of the correct size.

[Image Details: The colors of a theoretical model of a nanobubble (left) show that the pseudo-magnetic field is greatest where curvature, and thus strain, is greatest. In a graph of experimental observations (right), colors indicate height, not field strength, but measured field effects likewise correspond to regions of greatest strain and closely match the theoretical model.]

“Controlling where electrons live and how they move is an essential feature of all electronic devices,” says Crommie. “New types of control allow us to create new devices, and so our demonstration of strain engineering in graphene provides an entirely new way for mechanically controlling electronic structure in graphene. The effect is so strong that we could do it at room temperature.”

The opportunities for basic science with strain engineering are also huge. For example, in strong pseudo-magnetic fields electrons orbit in tight circles that bump up against one another, potentially leading to novel electron-electron interactions. Says Crommie, “this is the kind of physics that physicists love to explore.”

“Strain-induced pseudo-magnetic fields greater than 300 tesla in graphene nanobubbles,” by Niv Levy, Sarah Burke, Kacey Meaker, Melissa Panlasigui, Alex Zettl, Francisco Guinea, Antonio Castro Neto, and Michael Crommie, appears in the July 30 issue of Science. The work was supported by the Department of Energy’s Office of Science and by the Office of Naval Research. I’ve contacted Crommie to ask for more details of the research, and I hope to hear back from him soon.

[Source: News Center]
