Video: Feasibility of Interstellar Travel and Nature of Existence

As we know, interstellar travel is more difficult than one can dream. Here is a video presenting some new projects that could be hailed as having the potential to sustain exploration of exoplanets.

The Nature of Existence

Search for Habitable Planets and Alien Life

Cheers!

Negative Energy And Interstellar Travel

Can a region of space contain less than nothing? Common sense would say no; the most one could do is remove all matter and radiation and be left with vacuum. But quantum physics has a proven ability to confound intuition, and this case is no exception. A region of space, it turns out, can contain less than nothing. Its energy per unit volume–the energy density–can be less than zero.

Needless to say, the implications are bizarre. According to Einstein’s theory of gravity, general relativity, the presence of matter and energy warps the geometric fabric of space and time. What we perceive as gravity is the space-time distortion produced by normal, positive energy or mass. But when negative energy or mass–so-called exotic matter–bends space-time, all sorts of amazing phenomena might become possible: traversable wormholes, which could act as tunnels to otherwise distant parts of the universe; warp drive, which would allow for faster-than-light travel; and time machines, which might permit journeys into the past. Negative energy could even be used to make perpetual-motion machines or to destroy black holes. A Star Trek episode could not ask for more.

For physicists, these ramifications set off alarm bells. The potential paradoxes of backward time travel–such as killing your grandfather before your father is conceived–have long been explored in science fiction, and the other consequences of exotic matter are also problematic. They raise a question of fundamental importance: Do the laws of physics that permit negative energy place any limits on its behavior?

We and others have discovered that nature imposes stringent constraints on the magnitude and duration of negative energy, which (unfortunately, some would say) appear to render the construction of wormholes and warp drives very unlikely.

Double Negative

Before proceeding further, we should draw the reader’s attention to what negative energy is not.

It should not be confused with antimatter, which has positive energy. When an electron and its antiparticle, a positron, collide, they annihilate. The end products are gamma rays, which carry positive energy. If antiparticles were composed of negative energy, such an interaction would result in a final energy of zero.

One should also not confuse negative energy with the energy associated with the cosmological constant, postulated in inflationary models of the universe. Such a constant represents negative pressure but positive energy.

The concept of negative energy is not pure fantasy; some of its effects have even been produced in the laboratory. They arise from Heisenberg’s uncertainty principle, which requires that the energy density of any electric, magnetic or other field fluctuate randomly. Even when the energy density is zero on average, as in a vacuum, it fluctuates. Thus, the quantum vacuum can never remain empty in the classical sense of the term; it is a roiling sea of “virtual” particles spontaneously popping in and out of existence [see “Exploiting Zero-Point Energy,” by Philip Yam; SCIENTIFIC AMERICAN, December 1997]. In quantum theory, the usual notion of zero energy corresponds to the vacuum with all these fluctuations.

So if one can somehow contrive to dampen the undulations, the vacuum will have less energy than it normally does–that is, less than zero energy. [See Casimir Starcraft: Zero Point Energy]

  • Negative Energy

Space-time distortion is a common method proposed for superluminal travel, and such contortions would enable another staple of science fiction as well: faster-than-light travel. Warp drive might appear to violate Einstein’s special theory of relativity. But special relativity says only that you cannot outrun a light signal in a fair race in which you and the signal follow the same route. When space-time is warped, it might be possible to beat a light signal by taking a different route, a shortcut. In Miguel Alcubierre’s warp-drive model, the contraction of space-time in front of the bubble and the expansion behind it create such a shortcut.

One problem with Alcubierre’s original model is that the interior of the warp bubble is causally disconnected from its forward edge. A starship captain on the inside cannot steer the bubble or turn it on or off; some external agency must set it up ahead of time. To get around this problem, Krasnikov proposed a “superluminal subway,” a tube of modified space-time (not the same as a wormhole) connecting Earth and a distant star. Within the tube, superluminal travel in one direction is possible. During the outbound journey at sublight speed, a spaceship crew would create such a tube. On the return journey, they could travel through it at warp speed. Like warp bubbles, the subway involves negative energy.

Negative energy is so strange that one might think it must violate some law of physics.

Before and after the creation of equal amounts of negative and positive energy in previously empty space, the total energy is zero, so the law of conservation of energy is obeyed. But there are many phenomena that conserve energy yet never occur in the real world. A broken glass does not reassemble itself, and heat does not spontaneously flow from a colder to a hotter body. Such effects are forbidden by the second law of thermodynamics.

This general principle states that the degree of disorder of a system–its entropy–cannot decrease on its own without an input of energy. Thus, a refrigerator, which pumps heat from its cold interior to the warmer outside room, requires an external power source. Similarly, the second law also forbids the complete conversion of heat into work.

Negative energy potentially conflicts with the second law. Imagine an exotic laser, which creates a steady outgoing beam of negative energy. Conservation of energy requires that a byproduct be a steady stream of positive energy. One could direct the negative energy beam off to some distant corner of the universe, while employing the positive energy to perform useful work. This seemingly inexhaustible energy supply could be used to make a perpetual-motion machine and thereby violate the second law. If the beam were directed at a glass of water, it could cool the water while using the extracted positive energy to power a small motor–providing a refrigerator with no need for external power. These problems arise not from the existence of negative energy per se but from the unrestricted separation of negative and positive energy.

Unfettered negative energy would also have profound consequences for black holes. When a black hole forms by the collapse of a dying star, general relativity predicts the formation of a singularity, a region where the gravitational field becomes infinitely strong. At this point, general relativity–and indeed all known laws of physics–are unable to say what happens next. This inability is a profound failure of the current mathematical description of nature. So long as the singularity is hidden within an event horizon, however, the damage is limited. The description of nature everywhere outside of the horizon is unaffected. For this reason, Roger Penrose of Oxford proposed the cosmic censorship hypothesis: there can be no naked singularities, which are unshielded by event horizons.

For special types of charged or rotating black holes– known as extreme black holes–even a small increase in charge or spin, or a decrease in mass, could in principle destroy the horizon and convert the hole into a naked singularity. Attempts to charge up or spin up these black holes using ordinary matter seem to fail for a variety of reasons. One might instead envision producing a decrease in mass by shining a beam of negative energy down the hole, without altering its charge or spin, thus subverting cosmic censorship. One might create such a beam, for example, using a moving mirror. In principle, it would require only a tiny amount of negative energy to produce a dramatic change in the state of an extreme black hole.

[Image Details: Pulses of negative energy are permitted by quantum theory but only under three conditions. First, the longer the pulse lasts, the weaker it must be (a, b). Second, a pulse of positive energy must follow. The magnitude of the positive pulse must exceed that of the initial negative one. Third, the longer the time interval between the two pulses, the larger the positive one must be – an effect known as quantum interest (c).]

Therefore, this might be the scenario in which negative energy is the most likely to produce macroscopic effects.

Fortunately (or not, depending on your point of view), although quantum theory allows the existence of negative energy, it also appears to place strong restrictions – known as quantum inequalities – on its magnitude and duration. The inequalities bear some resemblance to the uncertainty principle. They say that a beam of negative energy cannot be arbitrarily intense for an arbitrarily long time. The permissible magnitude of the negative energy is inversely related to its temporal or spatial extent. An intense pulse of negative energy can last for a short time; a weak pulse can last longer. Furthermore, an initial negative energy pulse must be followed by a larger pulse of positive energy. The larger the magnitude of the negative energy, the nearer must be its positive energy counterpart. These restrictions are independent of the details of how the negative energy is produced. One can think of negative energy as an energy loan. Just as a debt is negative money that has to be repaid, negative energy is an energy deficit.
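
To make the scaling concrete, here is a minimal numerical sketch of a representative quantum inequality for a massless scalar field in flat space-time. The 3/(32π²) coefficient is the commonly quoted value for Lorentzian time-sampling; treat the numbers as order-of-magnitude illustrations only.

```python
# Sketch of a quantum-inequality bound: an observer who samples the energy
# density over a timescale tau finds it bounded below by roughly
#   rho_min ~ -(3 / (32 * pi^2)) * hbar / (c^3 * tau^4),
# so the allowed negative energy density falls off as the fourth power of
# the sampling time: intense pulses must be brief, weak ones can last longer.
import math

HBAR = 1.054571817e-34  # J*s
C = 2.99792458e8        # m/s

def min_energy_density(tau_seconds: float) -> float:
    """Lower bound (J/m^3) on the time-averaged energy density."""
    return -(3.0 / (32.0 * math.pi**2)) * HBAR / (C**3 * tau_seconds**4)

for tau in (1e-15, 1e-9, 1.0):  # a femtosecond, a nanosecond, a second
    print(f"tau = {tau:8.0e} s  ->  rho >= {min_energy_density(tau):.3e} J/m^3")
```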

In the Casimir effect, the negative energy density between the plates can persist indefinitely, but large negative energy densities require a very small plate separation. The magnitude of the negative energy density is inversely proportional to the fourth power of the plate separation. Just as a pulse with a very negative energy density is limited in time, very negative Casimir energy density must be confined between closely spaced plates. According to the quantum inequalities, the energy density in the gap can be made more negative than the Casimir value, but only temporarily. In effect, the more one tries to depress the energy density below the Casimir value, the shorter the time over which this situation can be maintained.
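
The inverse-fourth-power dependence quoted above is easy to evaluate with the standard ideal-plate Casimir result, u = -π²ħc/(720a⁴). A short sketch:

```python
# Casimir energy density between two ideal, perfectly conducting parallel
# plates separated by a distance a: u = -pi^2 * hbar * c / (720 * a^4).
# Halving the separation makes the energy density 16 times more negative.
import math

HBAR = 1.054571817e-34  # J*s
C = 2.99792458e8        # m/s

def casimir_energy_density(a_meters: float) -> float:
    """Energy density (J/m^3) in the gap; negative, scaling as 1/a^4."""
    return -math.pi**2 * HBAR * C / (720.0 * a_meters**4)

for a in (1e-6, 1e-7, 1e-8):  # separations of 1 micrometer down to 10 nm
    print(f"a = {a:.0e} m  ->  u = {casimir_energy_density(a):.3e} J/m^3")
```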

When applied to wormholes and warp drives, the quantum inequalities typically imply that such structures must either be limited to submicroscopic sizes, or, if they are macroscopic, the negative energy must be confined to incredibly thin bands. In 1996 we showed that a submicroscopic wormhole would have a throat radius of no more than about 10^-32 meter. This is only slightly larger than the Planck length, 10^-35 meter, the smallest distance that has definite meaning. We found that it is possible to have models of wormholes of macroscopic size but only at the price of confining the negative energy to an extremely thin band around the throat. For example, in one model a throat radius of 1 meter requires the negative energy to be confined to a band no thicker than 10^-21 meter, a millionth the size of a proton.

It is estimated that the negative energy required for this size of wormhole has a magnitude equivalent to the total energy generated by 10 billion stars in one year. The situation does not improve much for larger wormholes. For the same model, the maximum allowed thickness of the negative energy band is proportional to the cube root of the throat radius. Even if the throat radius is increased to a size of one light-year, the negative energy must still be confined to a region smaller than a proton radius, and the total amount required increases linearly with the throat size.
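
Taking the stated scalings at face value (band thickness growing as the cube root of the throat radius, total negative energy growing linearly with it, both normalized to the 1-meter case above), here is a rough sketch. The ~10^44 joule normalization assumed for "10 billion stars in one year" is our own back-of-envelope figure, taking roughly 4×10^26 watts per sun-like star.

```python
# Order-of-magnitude scaling sketch for the macroscopic-wormhole model
# described above, normalized to the quoted 1-meter throat: band thickness
# ~1e-21 m and total negative energy ~1e44 J (assumed normalization).
LIGHT_YEAR_M = 9.46e15
E_REF_J = 1e44   # assumed scale of "10 billion stars for one year"

def band_thickness_m(throat_radius_m: float) -> float:
    """Maximum band thickness, growing as the cube root of the radius."""
    return 1e-21 * throat_radius_m ** (1.0 / 3.0)

def total_negative_energy_j(throat_radius_m: float) -> float:
    """Total required negative energy, growing linearly with the radius."""
    return E_REF_J * throat_radius_m

for r in (1.0, 1e3, LIGHT_YEAR_M):
    print(f"throat {r:9.2e} m: band ~{band_thickness_m(r):.1e} m, "
          f"|E| ~{total_negative_energy_j(r):.1e} J")
# Even a light-year-radius throat needs its negative energy squeezed into
# a band of ~2e-16 m, smaller than a proton radius, as stated above.
```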

It seems that wormhole engineers face daunting problems. They must find a mechanism for confining large amounts of negative energy to extremely thin volumes. So-called cosmic strings, hypothesized in some cosmological theories, involve very large energy densities in long, narrow lines. But all known physically reasonable cosmic-string models have positive energy densities.

Warp drives are even more tightly constrained, as analysis along the same lines shows. In Alcubierre’s model, a warp bubble traveling at 10 times lightspeed (warp factor 2, in the parlance of Star Trek: The Next Generation) must have a wall thickness of no more than 10^-32 meter. A bubble large enough to enclose a starship 200 meters across would require a total amount of negative energy equal to 10 billion times the mass of the observable universe. Similar constraints apply to Krasnikov’s superluminal subway.

A modification of Alcubierre’s model was recently constructed by Chris Van Den Broeck of the Catholic University of Louvain in Belgium. It requires much less negative energy but places the starship in a curved space-time bottle whose neck is about 10^-32 meter across, a difficult feat. These results would seem to make it rather unlikely that one could construct wormholes and warp drives using negative energy generated by quantum effects.

The quantum inequalities prevent violations of the second law. If one tries to use a pulse of negative energy to cool a hot object, it will be quickly followed by a larger pulse of positive energy, which reheats the object. A weak pulse of negative energy could remain separated from its positive counterpart for a longer time, but its effects would be indistinguishable from normal thermal fluctuations. Attempts to capture or split off negative energy from positive energy also appear to fail. One might intercept an energy beam, say, by using a box with a shutter. By closing the shutter, one might hope to trap a pulse of negative energy before the offsetting positive energy arrives. But the very act of closing the shutter creates an energy flux that cancels out the negative energy it was designed to trap.

A pulse of negative energy injected into a charged black hole might momentarily destroy the horizon, exposing the singularity within. But the pulse must be followed by a pulse of positive energy, which would convert the naked singularity back into a black hole – a scenario we have dubbed cosmic flashing. The best chance to observe cosmic flashing would be to maximize the time separation between the negative and positive energy, allowing the naked singularity to last as long as possible. But then the magnitude of the negative energy pulse would have to be very small, according to the quantum inequalities. The change in the mass of the black hole caused by the negative energy pulse will get washed out by the normal quantum fluctuations in the hole’s mass, which are a natural consequence of the uncertainty principle. The view of the naked singularity would thus be blurred, so a distant observer could not unambiguously verify that cosmic censorship had been violated.

Recently it was shown that the quantum inequalities lead to even stronger bounds on negative energy. The positive pulse that necessarily follows an initial negative pulse must do more than compensate for the negative pulse; it must overcompensate. The amount of overcompensation increases with the time interval between the pulses. Therefore, the negative and positive pulses can never be made to exactly cancel each other. The positive energy must always dominate–an effect known as quantum interest. If negative energy is thought of as an energy loan, the loan must be repaid with interest. The longer the loan period or the larger the loan amount, the greater is the interest. Furthermore, the larger the loan, the smaller is the maximum allowed loan period. Nature is a shrewd banker and always calls in its debts.

The concept of negative energy touches on many areas of physics: gravitation, quantum theory, thermodynamics. The interweaving of so many different parts of physics illustrates the tight logical structure of the laws of nature. On the one hand, negative energy seems to be required to reconcile black holes with thermodynamics. On the other, quantum physics prevents unrestricted production of negative energy, which would violate the second law of thermodynamics. Whether these restrictions are also features of some deeper underlying theory, such as quantum gravity, remains to be seen. Nature no doubt has more surprises in store.

How To Survive At The End Of The Cosmos?

The universe is out of control. Not only is it expanding but the expansion itself is accelerating. Most likely, such expansion can end only one way: in stillness and total darkness, with temperatures near absolute zero, conditions utterly inhospitable to life. That became evident in 1998, when astronomers at the Lawrence Berkeley National Laboratory and Australian National University were analyzing extremely distant, and thus ancient, Type Ia supernova explosions to measure their rate of motion away from us. (Type Ia supernovas are roughly the same throughout the universe, so they provide an ideal “standard candle” by which to measure the rate of expansion of the universe.)


Courtesy of the Canada-France-Hawaii Telescope/J.-C. Cuillandre/Coelum

Quote:
There’s no time like the present to start planning our cosmic egress. Scenes like this one, of the massive galaxy M87, will become fleeting memories as the universe advances in age. Thanks to dark energy, even the nearby galaxies will begin to recede from us faster than light, and no news of them will reach us. Eventually even the atoms will be too cold to move, and time itself will freeze—too late for any straggling civilization.

Physicists, scrambling to their blackboards, deduced that a “dark energy” of unknown origin must be acting as an antigravitational force, pushing galaxies apart. The more the universe expands, the more dark energy there is to make it expand even faster, ultimately leading to a runaway cosmos. Albert Einstein introduced the idea of dark energy mathematically in 1917 as he further developed his theory of general relativity. More evidence came last year, when data from the Wilkinson Microwave Anisotropy Probe, or WMAP, which analyzes the cosmic radiation left over from the Big Bang, showed that dark energy makes up a full 73 percent of everything in the universe. Dark matter makes up 23 percent. The matter we are familiar with—the stuff of planets, stars, and gas clouds—makes up only about 4 percent of the universe.

As the increasing amount of dark energy pushes galaxies apart faster and faster, the universe will become increasingly dark, cold, and lonely. Temperatures will plunge as the remaining energy is spread across more space. The stars will exhaust their nuclear fuel, galaxies will cease to illuminate the heavens, and the universe will be littered with dead dwarf stars, decrepit neutron stars, and black holes. The most advanced civilizations will be reduced to huddling around the last flickering embers of energy—the faint Hawking radiation emitted by black holes. Insofar as intelligence involves the ability to process information, this, too, will fade. Machines, whether cells or hydroelectric dams, extract work from temperature and energy gradients. As cosmic temperatures approach the same ultralow point, those differentials will disappear, bringing all work, energy flow, and information—and the life that depends on them—to a frigid halt. So much for intelligence.

A cold, dark universe is billions, if not trillions, of years in the future. Between now and then, humans will face plenty of other calamities: wars and pestilences, ice ages, asteroid impacts, and the eventual consumption of Earth—in about 5 billion years—as our sun expands into a red giant star. To last until the very end of the universe, an advanced civilization will have to master interstellar travel, spreading far and wide throughout the galaxy and learning to cope with a slowing, cooling, darkening cosmos. Their greatest challenge will be figuring out how to not be here when the universe dies, essentially finding a way to undertake the ultimate journey of fleeing this universe for another.

Quote:
THINK SMALL

Stephen Hawking has suggested that it might be possible to travel through a wormhole to another universe or another time. This may allow an advanced civilization to evade the death of the universe. Even if the wormhole is subatomic it might still be possible to inject enough information through the wormhole via nanotechnology to re-create the entire civilization on the other side.

Such a plan may sound absurd. But there is nothing in physics that forbids such a venture. Einstein’s theory of general relativity allows for the existence of wormholes, sometimes called Einstein-Rosen bridges, that connect parallel universes. Among theoretical and experimental physicists, parallel universes are not science fiction. The notion of the multiverse—that our universe coexists with an infinite number of other universes—has gained ground among working scientists.

The inflationary theory proposed by Alan Guth of MIT, to explain how the universe behaved in the first few trillionths of a second after the Big Bang, has been shown to be consistent with recent data derived from WMAP. Inflation theory postulates that the universe expanded to its current size inconceivably fast at the very beginning of time, and it neatly explains several stubborn cosmological mysteries, including why the universe is both so geometrically flat and so uniform in its distribution of matter and energy. Andrei Linde of Stanford University has taken this idea a step further and proposed that the process of inflation may not have been a singular event—that “parent universes” may bud “baby universes” in a continuous, never-ending cycle. If Linde’s theory is correct, cosmic inflations occur all the time, and new universes are forming even as you read these words.

Naturally, the proposal to eventually flee this universe for another one raises practical questions. To begin with, where exactly would an advanced civilization go?

As it happens, physicists are spending billions of dollars on experiments to probe the nature of parallel universes. Since 1997, scientists at the University of Colorado at Boulder have conducted experiments to search for parallel universes perhaps no more than a millimeter away from ours. The experiments searched for tiny deviations from Newton’s inverse-square law of gravity. The surface of a sphere in three spatial dimensions is 4π times the radius squared, which is why the gravity of a point mass weakens as the square of the distance. If there were a fourth spatial dimension, the surface of the corresponding sphere would grow as the radius cubed, and gravity should weaken as the cube of the distance at scales where the extra dimension matters. So the Colorado physicists set about measuring gravity over a small, defined range of separations. If the gravitational force deviated significantly from the inverse-square form and were more nearly proportional to the inverse cube of the distance, the research team theorized, that would suggest the presence of a hidden dimension.

Newton’s inverse square law has been tested with exquisite precision by space probes, but it had never been tested at the millimeter level. So far, the results from these experiments have been negative, but other scientists are looking for even smaller deviations. A group at Purdue University has proposed testing Newton’s inverse square law down to the atomic level using nanotechnology.
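
A toy illustration of what such experiments look for: with one extra dimension curled up at a scale R, gravity should steepen from inverse-square toward inverse-cube at separations below R. The interpolating formula and the millimeter-scale value of R below are illustrative assumptions, not a fitted experimental model.

```python
# Toy model of a sub-millimeter gravity test. If one extra spatial dimension
# were compactified at scale R, Gauss's law suggests gravity strengthens
# from 1/r^2 toward roughly 1/r^3 for separations r << R, while matching
# Newton for r >> R. This crossover form is a simple illustration only.
def force_ratio(r: float, R: float) -> float:
    """Ratio of the toy modified force to the pure inverse-square force."""
    return 1.0 + R / r  # -> 1 for r >> R; grows like R/r (extra 1/r) for r << R

R = 1e-3  # assumed compactification scale: one millimeter
for r in (1e-1, 1e-2, 1e-3, 1e-4, 1e-5):
    print(f"r = {r:.0e} m: deviation factor {force_ratio(r, R):8.2f}")
# A measured factor indistinguishable from 1 at all accessible r, as found
# so far, pushes the allowed scale R to smaller values.
```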

Physicists elsewhere are exploring other possibilities. The Large Hadron Collider, the world’s largest atom smasher, has been turned on outside Geneva, Switzerland. This huge machine, more than five miles in diameter, is capable of blasting protons together with a colossal energy of 14 trillion electron volts (running at 7 TeV at the time of writing); it will be able to probe distances 1/10,000 the size of a proton, perhaps creating a zoo of exotic particles not seen since the Big Bang. One hope is that it will create exotic particles like miniature black holes and sparticles, or supersymmetric particles, which would indicate the presence of parallel universes in higher dimensions.

In addition, the space-based gravity-wave detector LISA (Laser Interferometer Space Antenna) will be launched sometime around 2012. It will consist of three satellites trailing Earth’s orbit around the sun and communicating with one another via laser beams, thereby creating a triangle with sides more than 3 million miles long. LISA is designed to detect faint gravity waves from extremely far away—gravitational shock waves that were emitted less than a trillionth of a second after the instant of creation. The instrument is so sensitive that scientists hope it will be able to test many of the theories that seek to explain what happened before the Big Bang and probe for the existence of universes beyond our own.

To journey safely from this universe to another—to investigate the various options and do some trial runs—an advanced civilization will need to be able to harness energy on a scale that dwarfs anything imaginable by today’s standards.

To grasp the challenge, consider a schema introduced by Russian astrophysicist Nikolai Kardashev that classifies civilizations according to their energy consumption. According to his definition, a Type I civilization is planetary: It is able to exploit all the energy falling on its planet from the sun (10^16 watts). This civilization could derive limitless hydrogen from the oceans, perhaps harness the power of volcanoes, and maybe even control the weather. A Type II civilization could control the energy output of the sun itself: 10^26 watts, or 10 billion times the power of a Type I civilization. Deriving energy from solar flares and antimatter, Type IIs would be effectively immune to ice ages, meteors, even supernovas. A Type III civilization would be 10 billion times more powerful still, capable of controlling and consuming the output of an entire galaxy (10^36 watts). Type IIIs would derive energy by extracting it from billions of stars and black holes. A Type III civilization would be able to manipulate the Planck energy (10^19 billion electron volts), the energy at which space-time becomes foamy and unstable, frothing with tiny wormholes and bubble-size universes. The aliens in Independence Day would qualify as a Type III civilization.

By contrast, ours would qualify as a Type 0 civilization, deriving its energy from dead plants—oil and coal. But we could evolve rapidly. A civilization like ours growing at a modest 1 to 2 percent per year could make the leap to a Type I civilization in a century or so, to a Type II in a few thousand years, and to a Type III in a hundred thousand to a million years. In that time frame, a Type III civilization could colonize the entire galaxy, even if their rockets traveled at less than the speed of light. With the inevitable Big Freeze at least tens of billions of years away, a Type III civilization would have plenty of time to develop and test an escape plan.
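
The arithmetic behind these estimates is simple compound growth. The sketch below assumes present human consumption of roughly 2×10^13 watts (an assumption) together with the article's round power levels for each type; both the article's figures and this arithmetic should be read as order-of-magnitude estimates. Note that the much longer Type III timescale in the text reflects sub-light colonization time, not energy growth alone.

```python
# Back-of-envelope timeline for climbing the Kardashev scale, using the
# article's definitions: Type I ~1e16 W, Type II ~1e26 W, Type III ~1e36 W.
# Present-day consumption (~2e13 W) and the growth rates are assumptions.
import math

def years_to_reach(p_now: float, p_target: float, growth: float) -> float:
    """Years of compound growth to raise power from p_now to p_target."""
    return math.log(p_target / p_now) / math.log(1.0 + growth)

P_NOW = 2e13  # watts, rough current human energy use (assumption)
for name, target in (("Type I", 1e16), ("Type II", 1e26), ("Type III", 1e36)):
    fast = years_to_reach(P_NOW, target, 0.02)
    slow = years_to_reach(P_NOW, target, 0.01)
    print(f"{name}: ~{fast:,.0f} to {slow:,.0f} years at 2% to 1% growth")
# Pure energy growth reaches Type III power in millennia; the article's
# 100,000-to-1,000,000-year figure is set by sub-light galactic travel.
```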

Why not start now? On the following pages are experiments and plans to guide a civilization looking for a way out—a survival guide to the end of the cosmos.

SEVEN STEPS TO LEAVING THE COSMOS

1- FIND AND TEST A THEORY OF EVERYTHING

Before an advanced civilization leaps into the unknown, it will need to study the pathways that make it possible to break through to the other side. Toward that end, scientists will need to discover the laws of quantum gravity, which will help to calculate the stability of wormholes connecting our universe to others.

At present, the leading—and, some believe, only—candidate for a theory of everything is string theory, or M-theory. This theory states that all subatomic particles are different vibrations or notes on a tiny string or membrane. These aren’t ordinary strings but rather strings that vibrate in higher-dimensional hyperspace. In principle, our universe might be a huge membrane drifting in 11 dimensions, which may occasionally collide with a neighboring membrane or universe. It is possible that our universe and a neighboring one hover only a millimeter or less from each other, like two parallel sheets of paper. To bridge even this tiny distance, however, we’ll need machinery of vast power.

2- SEARCH FOR A NATURALLY OCCURRING WORMHOLE

Next, in order to escape from this universe into another one, we will need to find a suitable exit: some wormhole, dimensional gateway, or cosmic tunnel that connects here to there.

There are many possibilities, some of which may occur naturally. The Big Bang, which released a tremendous amount of energy, may have left behind all manner of exotic entities of physics, such as cosmic strings, false vacuums, or negative matter or energy. The original expansion of the universe may have been so rapid and explosive that even tiny wormholes might have stretched and blown up to macroscopic size. The discovery of such entities would greatly aid any effort to leave a dying universe; if they exist, we would do well to find them. Perhaps by the time the need arises, billions of years from now, an advanced civilization will have stumbled upon one of these gateways. In the meantime, we should consider a more proactive strategy.

Quote:
UPGRADE THE COMPUTER

Einstein’s equations allow for the existence of stacked, parallel universes. But to calculate precisely what’s on the other side of a wormhole will require gigantic amounts of computer power, beyond anything available today.

3- SEND A PROBE THROUGH A BLACK HOLE

Black holes offer another possible avenue of escape. One advantage of black holes is that, as scientists now realize, they are plentiful in the universe. The one at the center of our galaxy has a mass more than 3 million times that of our sun. Of course, there are numerous technical problems to be worked out. Most physicists believe that a trip through a black hole would be fatal. Although Einstein’s equations permit the possibility of passing through a black hole, the quantum effects may be insurmountable. However, our understanding of black hole physics is in its infancy, and this conjecture has never been tested.

Quote:
SAVE EARLY AND OFTEN

Before the probe falls into the black hole, it must radio its data to observers waiting nearby. Here a problem arises. To the observer, the probe seems to slow down as it nears the event horizon and eventually stops entirely. So the probe must send the last of its data early on; otherwise the radio signals may be redshifted beyond recognition.

A reasonable first experiment would be to send a test probe through a black hole. Of course, any such venture would be a one-way trip; every black hole is surrounded by an event horizon, a point of no return beyond which not even light (and perhaps information) can escape the immense gravitational pull. Knowledge could be gleaned from the probe up to the moment it finally crosses the event horizon and all contact is lost. An intense, and most likely lethal, radiation field surrounds the event horizon. (Light rays gain tremendous energy as they fall into a black hole.) A probe could determine precisely how much radiation permeates this region—useful data for subsequent missions.
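
The redshift problem flagged in the note above can be quantified for the simplest, non-rotating (Schwarzschild) case: a signal sent from a transmitter hovering at radius r reaches a distant observer stretched by 1 + z = 1/sqrt(1 - rs/r), which diverges at the horizon. A freely falling probe differs in detail, but the signal still fades from view; this sketch just evaluates the static-emitter formula.

```python
# Why the probe must "save early and often": for a hovering transmitter
# outside a Schwarzschild black hole of horizon radius rs, the gravitational
# redshift of its signal is  1 + z = 1 / sqrt(1 - rs/r),
# which blows up as the transmitter approaches the horizon.
import math

def redshift(r_over_rs: float) -> float:
    """Redshift z for a static emitter at radius r = r_over_rs * rs."""
    return 1.0 / math.sqrt(1.0 - 1.0 / r_over_rs) - 1.0

for x in (10.0, 2.0, 1.1, 1.01, 1.0001):
    print(f"r = {x:7.4f} rs  ->  z = {redshift(x):12.2f}")
# By r = 1.0001 rs the signal is stretched about a hundredfold; data sent
# any later is redshifted beyond recognition, as the note above warns.
```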

A probe might also settle some critical questions about the stability of black holes. Roy Kerr showed that a rapidly spinning black hole will collapse not into a dot but rather into a rotating ring that cannot break down because of centrifugal forces. A Kerr ring has the same topology as Alice’s looking glass; the wormhole at its center might connect our universe to other points in the same universe or to an infinite number of parallel universes. These parallel universes may be stacked on top of one another like floors in a skyscraper. Scientists disagree over what happens if one enters a Kerr ring. For example, some say that sending a probe in might destabilize the black hole, reduce the event horizon to a singularity, and shut the wormhole altogether. This controversy gained fuel in July when Stephen Hawking, reversing a famous wager he’d made seven years ago, suggested that information entering a black hole may not be irretrievably lost after all. Throwing a probe into a black hole would disturb the Hawking radiation it emits, he argues, and might permit information to leak out. All the more reason to send a probe in and see what happens.

4- CREATE A BLACK HOLE IN SLOW MOTION

Once the characteristics near the event horizon of a black hole are carefully ascertained by probes, the next step might be to create a black hole in slow motion to gain further experimental data on the characteristics of space-time.

In a 1939 paper, Einstein envisioned a swirling mass of stellar debris slowly collapsing under its own gravity. He concluded that such a mass alone could not contract on a large enough scale to form a black hole, but he had not considered the now-familiar concept that the object could implode. His work leaves open the possibility that if one could slowly inject sufficient additional matter and energy into the spinning system, one could kick-start an implosion and create a black hole.

Quote:
STIR GENTLY

The contraction of the neutron stars should be performed slowly, lest the scientist set off a messy, supernova-like explosion. Conducted properly, the process should create two Kerr rings, one in this universe and one in another.

Consider that a Type III civilization would be capable of corralling matter on a galactic scale. To form a black hole, one might gather a swirling collection of neutron stars, which are each about the size of Manhattan but possess more mass than our sun. Gravity will gradually bring the stars closer together, at which point our advanced scientists might carefully add more neutron stars to the mix. Once the total matter exceeds about three solar masses, the combined gravity would force the stars to collapse into a spinning ring—a Kerr black hole. Armed with a newfound ability to create and study wormholes under controlled circumstances, future scientists would greatly advance their knowledge of how wormholes form—and how best to traverse them.
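
For scale, here is a minimal computation of the horizon that forms at the three-solar-mass threshold, using the Schwarzschild radius rs = 2GM/c². (The horizon of a spinning Kerr hole is somewhat smaller at the same mass, so this is an upper bound for the rotating case.)

```python
# Size of the horizon that forms once the assembled neutron stars exceed
# roughly three solar masses: the Schwarzschild radius rs = 2 * G * M / c^2.
G = 6.674e-11      # m^3 kg^-1 s^-2
C = 2.998e8        # m/s
M_SUN = 1.989e30   # kg

def schwarzschild_radius_m(mass_kg: float) -> float:
    return 2.0 * G * mass_kg / C**2

m = 3.0 * M_SUN
print(f"3 solar masses -> rs = {schwarzschild_radius_m(m)/1000:.1f} km")
# ~8.9 km: city-sized, assembled from stars each "about the size of Manhattan"
```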

5- CREATE NEGATIVE ENERGY

If Kerr rings prove to be lethal or too unstable for use as cosmic portals, an advanced civilization might instead contemplate opening up a new wormhole by using negative matter or negative energy. (In principle, negative matter or energy should weigh less than nothing and fall up rather than down. This is different stuff from antimatter, which contains positive energy and falls down.) In 1988 Kip Thorne and his colleagues at Caltech showed that with sufficient negative matter or negative energy, one could create a wormhole through which a traveler could freely pass back and forth between, say, his laboratory and a distant point in space or time.

Although no one has yet seen negative matter or negative energy in the wild, it has been detected in the laboratory, in the form of something called the Casimir effect. Consider two uncharged, parallel plates. Theoretically, the force between them should be zero. But if they are placed only a few atoms apart, then the space between them is not enough for some quantum fluctuations to occur. As a result, the number of quantum fluctuations in the region around the plates is greater than in the space between. This differential creates a net force that pushes the two plates together. Hendrik Casimir predicted the effect in 1948; it has since been confirmed experimentally.

The amount of energy involved is minuscule. To employ the Casimir effect to practical ends, one would have to use advanced technology to place the parallel plates at a fantastically small distance apart—10^-33 centimeter, the Planck length (the smallest measurement of length with any meaning). Now suppose that these two parallel plates could be shaped into a single sphere, with the plates forming a sort of double lining, and pressed together to within this fractional distance. The resulting Casimir effect might generate enough negative energy to open a wormhole within the sphere.

6- MAKE A BABY UNIVERSE

If both Kerr rings and negative-energy wormholes prove unreliable, Guth’s inflation theory points the way to another, more difficult escape strategy: creating a baby universe.


As Guth points out, to create something resembling our universe would require “10^89 photons, 10^89 electrons, 10^89 positrons, 10^89 neutrinos, 10^89 antineutrinos, 10^79 protons, and 10^79 neutrons.” However, Guth notes, the positive energy of this matter is almost but not entirely balanced out by the negative energy of gravity. (If our universe were closed, which it isn’t, the two values would cancel each other out exactly.) In other words, the net total matter required to create a baby universe might equal only a few ounces.

But what ounces! In principle, baby universes are born when a certain region of space-time becomes unstable and enters a state called the false vacuum. The false vacuum needed to create our universe is extraordinarily small, on the order of 10^-26 centimeter wide. If one created this false vacuum from one ounce of matter, its density would be a phenomenal 10^80 grams per cubic centimeter. Acquiring a few ounces of matter is easy; compressing it into the small volume necessary is not possible today.
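
The quoted density is easy to verify. A back-of-envelope check, assuming the 10^-26 centimeter figure is the diameter of a spherical patch and taking an ounce as about 28 grams:

```python
# Back-of-envelope check of the false-vacuum density quoted above: one
# ounce of matter compressed into a sphere ~1e-26 cm across. Treating the
# quoted width as a diameter is an assumption.
import math

MASS_G = 28.35         # one ounce, in grams
DIAMETER_CM = 1e-26
radius = DIAMETER_CM / 2.0
volume_cm3 = (4.0 / 3.0) * math.pi * radius**3
density = MASS_G / volume_cm3
print(f"density ~ {density:.1e} g/cm^3")  # ~5e79, the 1e80 scale quoted above
```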

The solution requires that a fantastic amount of energy, roughly equal to the Planck energy, be concentrated on a tiny region. Here are two approaches an advanced civilization might try.

  • BUILD A LASER IMPLOSION MACHINE

The power of laser beams is essentially unlimited, constrained mainly by the stability of lasing material and the energy of the power source. Lasers that can produce a brief terawatt, or trillion-watt, burst are commonplace, and petawatt lasers capable of generating a quadrillion watts are possible. By contrast, a large nuclear power plant produces only a billion watts of continuous power. It is theoretically possible for an X-ray laser to focus the output of a nuclear bomb to create a pulse of unimaginable power.

At the Lawrence Livermore National Laboratory, scientists have used a laser to fire a series of high-energy pulses radially onto a single pellet made of deuterium and tritium, the basic ingredients of a hydrogen bomb, thus creating the conditions for thermonuclear fusion. An advanced civilization could create a similar device on a much larger scale. By placing huge laser stations on asteroids and then firing millions of laser pulses onto a single point, future scientists could generate temperatures and pressures that swamp today’s technology. Each laser could be powered by a nuclear bomb; however, such a device would be usable only once.

The aim of firing this massive bank of laser beams would be to either heat a chamber to a sufficiently high temperature—about 10^29 degrees Kelvin—to create a false vacuum inside or compress a pair of spherical plates to within the Planck distance of each other, creating negative energy via the Casimir effect. One way or the other, a wormhole connecting our universe to another one should open within the chamber, allowing us to exit.

Quote:
WATCH THE CLOCK

Precision timing is critical in this step. All the lasers should be arranged to converge on the same point simultaneously in order to create a uniform distribution of energy. However, because the lasers will be widely separated in space, they are also widely separated in time. The scientist need only ensure that all the beams converge in the same place at the same moment, not that they fire all at once.
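
The firing rule described in the note amounts to subtracting each station's light-travel time from a common arrival time. A minimal sketch, with the station distances chosen arbitrarily for illustration:

```python
# Staggered firing schedule for widely separated laser stations: each
# station fires early by its own light-travel time, so that every pulse
# converges on the common target at the same moment.
C = 2.998e8  # m/s

def fire_time(arrival_time_s: float, distance_m: float) -> float:
    """When a station at the given distance must fire so its pulse
    reaches the target at arrival_time_s."""
    return arrival_time_s - distance_m / C

T_ARRIVAL = 1000.0                     # seconds; arbitrary common arrival time
stations_m = [3.0e8, 7.5e10, 2.2e11]   # assumed station distances (meters)
for d in stations_m:
    print(f"station at {d:.1e} m fires at t = {fire_time(T_ARRIVAL, d):.3f} s")
```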

  • BUILD A COSMIC ATOM SMASHER

One of the most powerful energy-generating devices currently available to scientists is the Large Hadron Collider, which will be able to generate 14 trillion electron volts. Even that is one-quadrillionth the energy necessary to create a false vacuum.

But a particle accelerator with the diameter of our solar system might do the trick. Gigantic coil magnets could be placed at strategic intervals on asteroids to bend and focus a particle beam in a circular path around the sun. (Since the vacuum of empty space is better than any vacuum attainable on Earth, the beam of subatomic particles would not need light-years of tubing to contain it; it could be fired into empty space.) Fair warning: The magnetic field required by each coil to bend the beam would be so huge that the surge of power through it might melt the coil, making it usable only once. After the beam has passed, the melted coils would have to be discarded and replaced in time for the next pass.

Alternatively, it is worth noting that the Large Hadron Collider may be the last generation of giant particle accelerators to use radio-frequency energies to boost subatomic particles around a giant ring. Physicists are already attempting to build tabletop-size laser-driven accelerators that, in principle, could attain billions of electron volts. So far, scientists have used powerful laser beams to attain an acceleration gradient of 200 billion electron volts per meter, a new record. Progress is rapid, with the energy growing by a factor of 10 every five years. Although technical problems hamper the development of a true tabletop accelerator, an advanced civilization has billions of years to perfect these and other devices.

In the interim, to reach the Planck energy with something like current laser technology would require an atom smasher 10 light-years long, reaching beyond the nearest star. Power stations would need to be placed along the path in order to pump laser energy into the beam and to focus it—a minor task for a Type III civilization.
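
That length estimate can be checked directly from the quoted 200-GeV-per-meter gradient and the Planck energy, assuming the gradient could be sustained over the whole distance:

```python
# Check of the "10 light-years" figure: length of a linear accelerator
# needed to reach the Planck energy (~1.22e19 GeV) at a sustained gradient
# of 200 GeV per meter.
PLANCK_ENERGY_GEV = 1.22e19
GRADIENT_GEV_PER_M = 200.0
LIGHT_YEAR_M = 9.46e15

length_m = PLANCK_ENERGY_GEV / GRADIENT_GEV_PER_M
print(f"required length ~ {length_m:.2e} m "
      f"= {length_m / LIGHT_YEAR_M:.1f} light-years")
# ~6 light-years: the same order as the article's round 10-light-year figure.
```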

7- SEND IN THE NANOBOTS

Assume now that the wormholes created in the previous steps prove unworkable. Perhaps they are unstable, or too small to pass through, or their radiation effects are too intense. What if future scientists find that only atom-size particles can safely pass through a wormhole? If that is the case, intelligent life may have but one remaining option: Send a nanobot through the wormhole to regenerate human civilization on the other side.

Quote:
IF ALL ELSE FAILS

If an actual nanobot cannot squeeze through a tiny wormhole, future scientists might still be able to thread enough information through the wormhole to construct a nanobot on the other side.

This process occurs all the time in nature. An oak tree produces and scatters seeds that are compact, resilient, packed with all the genetic information necessary to re-create a tree, and loaded with sufficient nourishment to make colonization possible. Using nanotechnology, an advanced civilization might well be able to encode vast quantities of information into a tiny, self-replicating machine and send this machine through a dimensional gateway. Atom-size, it would be able to travel near the speed of light and land on a distant moon that is stable and full of valuable minerals. Once situated, it would use the raw materials at hand to create a chemical factory capable of making millions of copies of itself. These new robots would then rocket off to other distant moons, establish new factories, and create still more copies. Soon, a sphere of trillions of robot probes would be expanding near the speed of light and colonizing the entire galaxy.

Next, the robot probes would create huge biotechnology laboratories. They would inject their precious cargo of information—the preloaded DNA sequences of the civilization’s original inhabitants—into incubators and thereby clone the entire species. If future scientists manage to encode the personalities and memories of its inhabitants into these nanobots, the civilization could be reincarnated.

Mathematically, this is the most efficient way for a Type III civilization to colonize a galaxy, not to mention a new cosmos. If we ever encounter another intelligent life-form, chances are it won’t be in a flying saucer like the starship Enterprise. More likely, we’ll make contact with a robot probe they’ve left on a moon somewhere. This was the basis of Arthur C. Clarke’s 2001: A Space Odyssey, which may be the most scientifically accurate depiction of an encounter with an extraterrestrial intelligence. In the film version, this logic was originally articulated by scientists in the film’s opening minutes, but director Stanley Kubrick cut the interviews from the final edit.

Quote:
STRANGE BUT TRUE

Although seemingly fantastic, these scenarios are consistent with the known laws of physics and biology and would be within the capabilities of a Type III civilization. For a civilization caught in the last days of an expanding universe, these may be the only options for escape.

Kerr Black Holes

The Schwarzschild reference frame is static outside the Black Hole, so even though space is curved and time is slowed down close to the Black Hole, it is much like the absolute space of Newton. But we will need a generalized reference frame in the case of rotating Black Holes. Roy Kerr generalized the Schwarzschild geometry to include rotating stars, and especially rotating Black Holes. Most stars are rotating, so it is natural to expect newly formed Black Holes to possess significant rotation too.

Features of a Kerr Black Hole

Image

The Kerr black hole consists of a rotating mass at the center, surrounded by two event horizons. The outer event horizon marks the boundary within which an observer cannot resist being dragged around the black hole with space-time. The inner event horizon marks the boundary from within which an observer cannot escape. The volume between the event horizons is known as the ergosphere.
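
In the standard Boyer-Lindquist description these surfaces have simple closed forms: the horizons sit at r± = M ± sqrt(M² - a²) and the boundary of the ergosphere at r = M + sqrt(M² - a²cos²θ), in geometric units. A short sketch follows; note that in the usual terminology the outer boundary described in the caption is called the static limit, with the surface of no escape at r₊.

```python
# Characteristic radii of a Kerr black hole of mass M and spin a, in
# geometric units (G = c = 1), with radii expressed in units of M.
import math

def kerr_radii(chi: float, theta_deg: float = 90.0):
    """chi = a/M in [0, 1]; theta is measured from the spin axis."""
    a = chi
    r_plus = 1.0 + math.sqrt(1.0 - a**2)    # event horizon (no escape)
    r_minus = 1.0 - math.sqrt(1.0 - a**2)   # inner (Cauchy) horizon
    cos_t = math.cos(math.radians(theta_deg))
    static_limit = 1.0 + math.sqrt(1.0 - (a * cos_t)**2)  # ergosphere boundary
    return r_plus, r_minus, static_limit

for chi in (0.0, 0.5, 0.998, 1.0):
    rp, rm, sl = kerr_radii(chi)  # static limit evaluated at the equator
    print(f"a/M = {chi:5.3f}: r+ = {rp:.3f} M, r- = {rm:.3f} M, "
          f"equatorial static limit = {sl:.3f} M")
# As the spin grows, the two horizons approach each other, while the
# equatorial static limit stays at 2M, so the ergosphere thickens.
```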

What is a Kerr black hole?

The usual idealised “static” black hole is stationary, unaccelerated, at an arbitrarily large distance from the observer, is perfectly spherical, and has a point-singularity at its centre.

When one of these idealised black holes rotates, it gets an extra property. It’s no longer spherically symmetrical: the receding and approaching edges have different pulling strengths and spectral shifts, and the central singularity is no longer supposed to be a dimensionless point.

Image

The equatorial bulge in the event horizon can be deduced in several ways

  • … as a sort of centrifugal-force effect. Since it’s possible to model the (distantly-observed) hole as having all its mass existing as an infinitely-thin film at the event horizon itself (i.e. where the mass is “seen” to be), you’d expect this virtual film to have a conventional-looking equatorial bulge, through centrifugal forces.
  • … as a sort of mass-dilation effect. Viewed from the background frame, the “moving film” of matter ought to appear mass-dilated, and therefore ought to have a greater gravitational effect, producing an increase in the extent of the event horizon. Since the background universe sees the bh equator moving faster than the region near the bh poles, the equator should appear more mass-dilated, and should have a horizon that extends further.
  • … as a shift effect. This tidy ellipsoidal shape isn’t necessarily what people actually see – it’s an idealised shape that’s designed to illustrate an aspect of the hole’s deduced geometry independent of the observer’s viewing angle. In fact, the receding and approaching sides of the hole (viewed from the equator) might appear to have different radii, because it’s easier for light to reach the observer from the approaching (blueshifted) side than the receding (redshifted) side (these shifts are superimposed on top of the normal Schwarzschild redshift).
    If we calculate these motion shifts using either the SR shift assumption f′/f = (flat-spacetime propagation shift) × √(1 − v²/c²) or the plain fixed-emitter shift law f′/f = (c − v)/c, and then treat them as “gravitational”, then by multiplying the two opposing shifts together and taking the square root of the result, we get the same averaged dilation factor f′/f = √(1 − v²/c²) in each case, and by applying the averaged value, we recreate the same sort of equatorially-dilated shape that we got in the other two arguments.

Of course, none of these “film” arguments work for a rotating point, which immediately tells us that the distribution of matter within a rotating black hole is important, and that the usual method of treating the actual extent of a body within the horizon as irrelevant (allowing the use of a point-singularity) no longer works when the hole is rotating (a rotating hole can’t be said to contain a point-singularity).
In the case of a rotating hole, the simplest state that we can claim is equivalent to the rotating film of matter for a distant observer is a ring-singularity.

Notes

  • The idea of being able to treat a non-rotating black hole as either a point-singularity or a hollow infinitely-thin film is a consequence of the result that the actual mass-distribution is a “null” property for a black hole, as long as it is spherically symmetrical. If the mass fits into a Schwarzschild sphere, the usual static model of a black hole allows the hole’s mass to be point-sized, golfball-sized, or of any size up to the size of the event horizon.
    It’s usual to treat all the matter as being compacted to a dimensionless point, but sometimes it’s useful to go to the other extreme and treat the matter as being at its “observed” position – as an infinitely-thin film at the event horizon (see Thorne’s membrane paradigm).
  • The idea of being able to treat all shifts as being propagation effects is something that probably ought to be part of GR – in the context of black holes, the time-dilation effect comes out as a curved-space propagation effect due to the enhanced gravitation due to kinetic energy. However, there’s a slight “political” problem here, in that GR is supposed to reduce to SR, and SR is usually interpreted as having Lorentz shifts which are supposed to be non-gravitational (because allowing the possibility of gravitational effects upsets the usual SR derivations). A GR-centred physicist might not have a problem with this approach of treating all shift effects as being equivalent; a SR-centred one probably would.
  • The “bulginess” of a Kerr black hole is illustrated on p. 293 of the Thorne book (fig 7.9). Thorne says that the effect of the spin on the horizon shape was discovered by Larry Smarr in 1973.

Overview of Kerr Spacetime

Kerr spacetime is the unique explicitly defined model of the gravitational field of a rotating star. The spacetime is fully revealed only when the star collapses, leaving a black hole — otherwise the bulk of the star blocks exploration. The qualitative character of Kerr spacetime depends on its mass and its rate of rotation, the most interesting case being when the rotation is slow. (If the rotation stops completely, Kerr spacetime reduces to Schwarzschild spacetime.)

The existence of black holes in our universe is generally accepted — by now it would be hard for astronomers to run the universe without them. Everyone knows that no light can escape from a black hole, but convincing evidence for their existence is provided by their effect on their visible neighbors, as when an observable star behaves like one of a binary pair but no companion is visible.

Suppose that, travelling in our spacecraft, we approach an isolated, slowly rotating black hole. It can then be observed as a black disk against the stars of the background sky. Explorers familiar with Schwarzschild black holes will refuse to cross its boundary horizon. First of all, return trips through a horizon are never possible, and in the Schwarzschild case there is a more immediate objection: after the passage, any material object will, in a fraction of a second, be devoured by a singularity in spacetime.

If we dare to penetrate the horizon of this Kerr black hole we will find … another horizon. Behind this, the singularity in spacetime now appears, not as a central focus, but as a ring — a circle of infinite gravitational forces. Fortunately, this ring singularity is not quite as dangerous as the Schwarzschild one — it is possible to avoid it and enter a new region of spacetime, by passing through either of two “throats” bounded by the ring (see The Big Picture).

Image

In the new region, escape from the ring singularity is easy because the gravitational effect of the black hole is reversed — it now repels rather than attracts. As distance increases, this negative gravity weakens, just as on the positive side, until its effect becomes negligible.

A quick departure may be prudent, but will prevent discovery of something strange: the ring singularity is the outer equator of a spatial solid torus that is, quite simply, a time machine. Travelling within it, one can reach arbitrarily far back into the past of any entity inside the double horizons. In principle you can arrange a bridge game, with all four players being you yourself, at different ages. But there is no way to meet Julius Caesar or your (predeparture) childhood self since these lie on the other side of two impassable horizons.

This rough description is reasonably accurate within its limits, but its apparent completeness is deceptive. Kerr spacetime is vaster — and more symmetrical. Outside the horizons, it turns out that the model described above lacks a distant past, and, on the negative gravity side, a distant future. Harder to imagine are the deficiencies of the spacetime region between the two horizons. This region definitely does not resemble the Newtonian 3-space between two bounding spheres, furnished with a clock to tell time. In it, space and time are turbulently mixed. Pebbles dropped experimentally there can simply vanish in finite time — and new objects can magically appear.


Kerr-Newman Black Hole

A rotating charged black hole. An exact, unique, and complete solution to the Einstein field equations in the exterior of such a black hole was found by Newman et al. (1965), although its connection to black holes was not realized until later (Shapiro and Teukolsky 1983, p. 338).

Rotating (Kerr) Black Holes, Charged and Uncharged
Most stars spin on an axis. In 1963, Roy Kerr reasoned that when rotating stars shrink, they would continue to rotate. Kip Thorne calculated that most black holes would spin at about 99.8 percent of the maximum rate allowed for their mass. Unlike static black holes, rotating black holes are oblate spheroids. The lines of constant distance here are ellipses, and lines of constant angle are hyperbolas.

Image

Unlike static black holes, rotating black holes have two photon spheres. In a sense, this results in a more stable orbit of photons. The collapsing star “drags” the space around it into rotating with it, much as a whirlpool drags the water around it into rotating. As the diagram above shows, photons can orbit at two different distances. The outer sphere would be composed of photons orbiting in the opposite direction to the black hole’s rotation. Photons in this sphere travel slower than the photons in the inner sphere. In a sense, since they are orbiting in the opposite direction, they have to deal with more resistance, hence they are “slowed down”. Similarly, photons in the inner ring travel faster since they are not going against the flow. It is because the photons orbiting in agreement with the rotation can travel “faster” that their sphere lies on the inside. The closer one gets to the event horizon, the faster one has to travel to avoid falling into the singularity – hence the “slower” moving photons orbit on the outer sphere to lessen the gravitational hold the black hole has.

Because the rotating black hole has an axis of rotation, it is not spherically symmetric. Its structure depends on the angle at which one approaches the black hole. If one approaches from the equator, one would see the cross-section as in the diagram above, with two photon spheres. However, if one approached at an angle to the equator, one would see only a single photon sphere.

The positions of the photon spheres also depend on the speed at which the black hole rotates. The faster the black hole rotates, the further apart the two photon spheres would be. An extreme black hole, whose spin parameter equals its mass (in geometric units), would have the greatest possible distance between the two photon spheres. This is because of the greater difference in speed between the photon spheres. As the speed of rotation increases, the outer sphere of photons slows down as it meets greater resistance, even as the inner sphere travels “faster” as it is carried along by the rotation.
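
For the idealized equatorial case there is a standard closed-form expression for the two circular photon orbits, and it reproduces the behavior described above: the co-rotating and counter-rotating radii coincide at 3M for zero spin and spread apart to M and 4M at maximal spin. A short sketch in geometric units:

```python
# Equatorial circular photon orbits around a Kerr black hole (geometric
# units, radii in M):  r_ph = 2 * (1 + cos((2/3) * arccos(s * a/M))),
# with s = -1 for prograde (co-rotating) photons and s = +1 for retrograde.
import math

def photon_orbit_radius(chi: float, prograde: bool) -> float:
    sign = -1.0 if prograde else 1.0
    return 2.0 * (1.0 + math.cos((2.0 / 3.0) * math.acos(sign * chi)))

for chi in (0.0, 0.5, 0.9, 1.0):
    rp = photon_orbit_radius(chi, prograde=True)
    rr = photon_orbit_radius(chi, prograde=False)
    print(f"a/M = {chi:4.2f}: prograde r = {rp:.3f} M, "
          f"retrograde r = {rr:.3f} M")
# At chi = 0 both orbits sit at 3M; as the spin grows they split, the
# co-rotating orbit moving inward and the counter-rotating orbit outward.
```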

Next, we move on to look at the ergosphere. The ergosphere is unique to the rotating black hole. Unlike the event horizon, the ergosphere is a region, and not a mathematical distance. It is a solid ellipsoid (a three-dimensional ellipse). The ergosphere billows out from the black hole above the outer event horizon of a charged black hole (a.k.a. Kerr-Newman), and above the event horizon of an uncharged black hole (a.k.a. Kerr). Its boundary is known as the static limit of a rotating black hole. Within this distance, it is no longer possible to stay still even if one travels at the speed of light; one would inevitably be dragged around in the direction of rotation. The faster the rotation, the further out the ergosphere billows. At maximal rotation, when the horizon along the axis of rotation has shrunk to half the Schwarzschild radius, the ergosphere billows out the farthest, and even light rays are dragged along in the direction of rotation. Strangely enough, it is postulated that one can enter and leave the ergosphere as one likes, since technically one has not yet crossed the event horizon.

For a rotating black hole, crossing the outer event horizon swaps the roles of time and space as we know them; crossing the inner event horizon swaps them back. The singularity then becomes a place rather than a moment, and can technically be avoided. As the angular velocity increases, the outer and inner event horizons move closer together.
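Both claims are easy to check against the textbook Kerr radii: the outer and inner horizons sit at r± = M ± √(M² − a²), and the static limit at r = M + √(M² − a²cos²θ). A minimal sketch (variable names are my own), again in geometrized units:

```python
import math

def kerr_radii(M, a, theta):
    """Outer/inner horizons and static limit of a Kerr black hole,
    geometrized units (G = c = 1); theta measured from the spin axis."""
    root = math.sqrt(M**2 - a**2)
    r_outer  = M + root                  # outer event horizon
    r_inner  = M - root                  # inner (Cauchy) horizon
    r_static = M + math.sqrt(M**2 - (a * math.cos(theta))**2)
    return r_outer, r_inner, r_static

# Slow spin: horizons far apart, thin ergosphere at the equator.
print(kerr_radii(1.0, 0.3, math.pi / 2))
# Near-extremal spin: the two horizons converge toward r = M, while the
# equatorial static limit stays at r = 2M, so the ergosphere is fattest.
print(kerr_radii(1.0, 0.998, math.pi / 2))
```

At the poles (θ = 0) the static limit touches the outer horizon, while on the equator it sits at r = 2M, the Schwarzschild radius, whatever the spin.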

In the diagram, you will have noticed that the singularity is drawn as a ring, not a point as it was for the static black hole. Around the ring singularity of a rotating black hole, gravity is repulsive: it actually pushes one away, making it possible to leave the black hole. The only way to approach the ring singularity is to come in along the equatorial plane; other trajectories are repelled, with a strength that grows the closer the approach angle lies to the axis of rotation.

In addition, there would be a third photon sphere about the ring singularity. For light travelling parallel to the axis of rotation, the gravity and the anti-gravity of the singularity balance out, and the light traces out a path of constant distance (in this case, an ellipsoid). Technically, this might lead the light through the singularity into another universe and back out again. At this point within the black hole, one might see three kinds of light: light reflected from our own universe behind us, light from other universes, and light from the singularity itself.

Where Is The Missing Dark Matter?

It’s a very exciting time to be a cosmologist. There are great big fundamental questions that are still unanswered: What is the universe made of? What bigger question could you ask? What’s even more exciting is the realization that it’s within our grasp to answer them. We’ve narrowed down the possibilities and we’ve got all the machinery in place to find the answer. And that answer is likely to be one that is unrivaled in the physical sciences in terms of beauty, because of the prospect of being able to understand the formation of galaxies and the movements of cosmic structures—essentially the behavior of the cosmos as a whole—in terms of the properties of subatomic particles. We’ll be able to explain the universe as the result of the properties of its most basic constituents.

       Over the last 15 years or so, computer simulations have become the primary tool that theoreticians have at their disposal to understand the formation of galaxies and of structure in the universe. The aim of the simulations is to take what we call initial conditions, which is an early state of the universe, and then see how that primeval, amorphous state evolves into an approximation of the universe we can compare with current observational surveys. Through these simulations we can arrive at an understanding of what the universe is made of, how it is structured, and how it came to be.

Computer-simulated universes are a very powerful tool because they allow you to produce material evidence of what various assumptions about the universe translate into, and then you can take this material evidence and compare it against reality. Because the universe is so complex, most mathematical treatments require many approximations and simplifications, so they are of limited applicability. Yet with a computer simulation you don’t need to make any of those approximations: you solve the equations in full generality, so it’s a very appealing activity for theoreticians.

       In the classic Einsteinian view of the universe, everything is smooth at the beginning and stays smooth forever. That clearly is not what our universe is doing because today our universe is very inhomogeneous—it is broken up into islands that we call galaxies and galaxy clusters. If the universe had been entirely smooth, we wouldn’t be here to talk about it.

       Instead, there must have been a small departure initially from this simplest assumption of a perfectly uniform universe. So the universe was not perfectly homogeneous either when it began or shortly after it began but, rather, it was slightly inhomogeneous. It had small regions where the density of matter was slightly higher than average and other regions where it was slightly lower than average. They were really tiny, these inhomogeneities, so tiny that for practical purposes it is hardly much of a departure from the simplest version of the theory. Yet tiny as they are to begin with, these inhomogeneities are very important because they are the seeds from which star clusters, galaxies and, eventually, human beings, will grow.

In April 1992 there was a very important discovery in cosmology that made headline news all over the world: the discovery of ripples in the structure of the microwave background radiation. These ripples are nothing other than the little inhomogeneities we are talking about. The COBE satellite that discovered them was short-sighted, with a very blurry vision of the early universe. The ripples COBE saw were on much larger scales than the seeds of individual galaxies, so we have not yet directly detected the progenitors of the galaxies in the microwave background; but we have directly imaged very closely related entities, the ones that correspond to the larger structures of today.

   

What goes into the computer simulation is the nature of the lumps that we’ve studied using the COBE satellite. The simulation then follows the dynamical evolution of those small inhomogeneities as the universe expands and cools, taking the very tiny little lumps and making them grow bigger. As the process unfolds the lumps move around fairly quickly and, as they do, some of them bump into each other and coalesce, and the computer follows these coalescences beautifully. Eventually one sees the mock universe grow from an almost, but not quite, homogeneous initial state to one which is really complex, irregular in structure, and corresponding to the universe we see at the present day.
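As a toy illustration of that growth process (not the researchers’ code; real cosmological simulations add expansion, periodic boxes and vastly more particles), here is a minimal self-gravitating N-body sketch in Python in which tiny random displacements play the role of the primordial lumps:

```python
import math
import random

# Toy 2-D N-body sketch: unit-mass particles on a nearly uniform grid,
# each nudged slightly off its site, collapse into clumps under their
# own gravity. NOT a cosmological code; it only illustrates the
# gravitational amplification of small initial inhomogeneities.

N_SIDE, DT, STEPS, SOFT, G = 8, 0.01, 200, 0.2, 1.0

random.seed(1)
pos = [[x + random.uniform(-0.01, 0.01), y + random.uniform(-0.01, 0.01)]
       for x in range(N_SIDE) for y in range(N_SIDE)]
vel = [[0.0, 0.0] for _ in pos]

def accelerations(pos):
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            dx, dy = pos[j][0] - pos[i][0], pos[j][1] - pos[i][1]
            r2 = dx * dx + dy * dy + SOFT * SOFT   # softened distance
            inv_r3 = 1.0 / (r2 * math.sqrt(r2))
            acc[i][0] += G * dx * inv_r3           # unit masses assumed
            acc[i][1] += G * dy * inv_r3
    return acc

for _ in range(STEPS):                 # crude Euler integration
    acc = accelerations(pos)
    for p, v, a in zip(pos, vel, acc):
        v[0] += a[0] * DT; v[1] += a[1] * DT
        p[0] += v[0] * DT; p[1] += v[1] * DT
```

Run it and the initially near-uniform grid coalesces into a few dense knots: a crude analogue of the lumps bumping into each other and merging.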

In the real universe, the whole evolutionary process is driven by gravity and gravity is produced by mass, so in order to create a simulated universe, we need to know what sort of mass our universe has. One of the critical discoveries of astronomers in the last 25 or 30 years is the realization that there must be more mass in the universe than is accounted for by what we can see.

That means most of the mass in the universe is made up of what we call dark matter, which simply describes matter that doesn’t shine. To perform a successful computer simulation one needs to specify: what is the dark matter? What is it made of and how much is there?

   The amazing thing is that if you make different assumptions you end up with different universes. So what many of us have been working on for the last 20 years is exploring various possibilities, evolving them in the computer to the present, and picking out those that look more like the real universe than others. Each mock universe that’s made up in the computer can be compared with the real universe in a variety of ways. You could look at different properties of the real universe and you could ask, “How many lumps are there?” or “How big are the lumps?” or “How are the lumps distributed?” You then can ask corresponding questions in the real universe and compare the two.

There are various candidates for the dark matter, but today one of the most popular is a very exotic elementary particle we call a WIMP, or weakly interacting massive particle. The WIMPs are just elementary, subatomic particles—fundamental constituents of matter. Tiny little individual things, they come basically in two types: the so-called hot dark matter and cold dark matter. Hot dark matter consists of quickly moving light particles such as neutrinos, a particle which may or may not have a mass and therefore may or may not contribute to the shape of the universe. Cold dark matter is made up of particles that are sluggish; they move more slowly and are therefore cold. Predicted in a certain class of theories of fundamental interactions called supersymmetric theories, they have yet to be discovered experimentally.
The reason many people believe the dark matter is a cold-dark-matter WIMP is precisely that the cold dark matter simulations we can create in the computer look a lot like the real universe, whereas every other possibility we’ve tried, including hot dark matter, has turned out to look nothing like it. When we started cold dark matter simulations over 15 years ago, our intention was to rule them out as a candidate.

We were following a methodology where you put forward a candidate with the goal of ruling it out, in order to narrow down the possibilities. With cold dark matter we failed miserably in that sense: we haven’t been able to rule it out. All the calculations we did, and the many follow-ups people have done extending our work, come back to the same thing: cold dark matter universes look a lot like the real thing.

The fact that cold dark matter looks so good in a computer simulation doesn’t prove, of course, that it is what is shaping the universe. Today it is the front-runner candidate, but until we actually see a WIMP, we can’t be sure. There are other possibilities that need to be explored, and those can be explored within the context of computer simulations.

The key point of these theories is that they require the existence of these hypothetical elementary particles. The proof of the pudding is in the eating: you have to capture one of these particles. So the ultimate test of the cold dark matter theory is to find the cold dark matter directly. Physics is, after all, an empirical, experimentally based human activity. You can’t prove that something is correct by theory alone. The Greeks thought that the truth could be established by pure thought, but we now know better: the universe is not made that way. We cannot prove the reality of anything just by thinking about it.

It’s hard to prove these particles exist because they interact so weakly; that’s why they are dark: they barely interact with anything. Cold dark matter doesn’t experience electromagnetic or nuclear interactions the way protons and electrons do. The particles don’t interact with your apparatus, so trying to detect them in the laboratory is like trying to catch water with a bucket full of holes; it just goes through.

Still, there are experiments to detect even these very weakly interacting particles by their side effects. If you have a semiconductor, occasionally one of the WIMPs could have a head-on collision with a silicon atom and cause the atom to recoil. Now, these hits are very, very rare, so you have to have several kilograms of semiconductor, and you are trying to find one atom moving just because it was hit by a WIMP. Until these experimental searches succeed we cannot be certain that the theories are correct. But the exciting part is that the experiments are in place and the particles are detectable. If they exist, we will know about them in a few years.
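A back-of-envelope rate estimate shows why the hits are so rare. Every number below is an illustrative assumption of mine (a 100 GeV WIMP, a guessed cross-section), not a measured value:

```python
# Rough WIMP event-rate estimate for a 1 kg silicon detector.
# rate = (number of target nuclei) x (cross-section) x (WIMP flux)
AVOGADRO  = 6.022e23
rho_dm    = 0.3        # local dark-matter density, GeV/cm^3 (typical estimate)
m_wimp    = 100.0      # assumed WIMP mass, GeV
v_wimp    = 2.2e7      # typical galactic speed, cm/s (~220 km/s)
sigma     = 1e-41      # assumed WIMP-nucleus cross-section, cm^2
mass_det  = 1000.0     # detector mass, grams
a_silicon = 28.09      # atomic mass of silicon, g/mol

n_wimp  = rho_dm / m_wimp          # WIMP number density, cm^-3
flux    = n_wimp * v_wimp          # WIMPs crossing 1 cm^2 per second
targets = mass_det / a_silicon * AVOGADRO
rate    = targets * sigma * flux   # expected hits per second

print(f"{rate:.1e} events/s = {rate * 3.15e7:.1e} events/year")
# ~1e-11 events/s, i.e. well under one event per kilogram-year:
# hence the kilograms of detector and the years of patient running.
```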

Cosmic Strings And Cosmology

One of the outstanding problems in cosmology today is developing a more precise understanding of structure formation in the universe, that is, the origin of galaxies and other large-scale structures. Existing theories for structure formation fall into two categories, based either upon the amplification of quantum fluctuations in a scalar field during inflation, or upon a symmetry-breaking phase transition in the early Universe which leads to the formation of topological defects. While techniques for computing density perturbations are well established for the former, little quantitative work exists for the latter because of the calculational difficulty of modelling nonlinear effects, especially for cosmic string models.

We know that topological defects are an inevitable consequence of symmetry breaking in unified theories. Among the various defects, cosmic strings have proved the most promising candidates for seeding cosmic structure formation. The cosmic string scenario predated inflation as a realistic structure formation model, but it has proved computationally much more challenging to make robust predictions with which to confront observations. Only recently has significant progress been achieved in understanding cosmic strings as seeds for large-scale structure and Cosmic Microwave Background (CMB) anisotropies.

The mechanism

Cosmic strings are thin tubes of trapped energy left over from the symmetry-breaking phase transition in the early universe. Through their gravitational interactions they serve as seeds of structure formation, attracting neighbouring matter. In terms of relativity (see the picture below), the geodesic path of light passing a string is bent towards it; a rough measure of this bending is sketched after the figure captions below. As the string network evolves, scaling from smaller scales to larger ones, the strings seed perturbations in the matter energy density of the universe.

[Figure: Structure formation (CDM vs. HDM) by cosmic strings; the distribution of matter energy density of the universe (box size: 128 Mpc/h; 2D projection)]

[Figure: Gaussian vs. non-Gaussian; the distribution of matter energy density of the universe (box size: 120 Mpc/h; isodensity surface)]
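The light-bending can be quantified: a straight cosmic string of linear mass density μ produces a conical deficit angle δ = 8πGμ/c², a standard result. A small Python sketch, taking the often-quoted GUT-scale value Gμ/c² ≈ 10⁻⁶ as an assumed input:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def deficit_angle(mu):
    """Conical deficit angle (radians) of spacetime around a straight
    cosmic string of linear mass density mu (kg/m): 8*pi*G*mu/c^2."""
    return 8 * math.pi * G * mu / c**2

# GUT-scale string, G*mu/c^2 ~ 1e-6, i.e. mu ~ 1.3e21 kg/m:
mu_gut = 1e-6 * c**2 / G
delta = deficit_angle(mu_gut)
print(f"{delta:.2e} rad = {math.degrees(delta) * 3600:.1f} arcsec")
# ~2.5e-5 rad, about 5 arcseconds: light paths on either side of the
# string converge by this angle, producing double images and wakes.
```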

So Never Get Into The Space War..

With all this frightfulness flying at your ship, you’d want some kind of defense, besides just hoping they’ll miss. As mentioned before, advances in weapon lethality and defensive protection mainly focus on the targeting problem. That is, the weapons are generally already powerful enough for a one-hit kill, so the room for improvement lies in increasing the probability that the weapon actually hits the target; the room for improvement on the defensive side is in decreasing that probability. Weapons can be improved in two ways: increase the precision of each shot (precision of fire), or keep the same precision but increase the number of shots fired (volume of fire). Precision of fire is governed by [a] the location of the target when the weapons fire arrives, [b] the flight path of the weapons fire, given the characteristics of the shot and the environment through which it passes, and [c] the weapon’s aiming precision. Volume of fire is governed by [d] the weapon’s rate of fire and [e] the lethality of a given shot.

A defense can interfere with [a], the location of the target, by evasive maneuvers. There isn’t really a way to interfere with [b], the characteristics of a shot, short of inserting a saboteur into the crew of the firing ship. A defense can interfere with the environment through which the shot passes by such things as jamming the weapon’s homing frequencies or clouds of anti-laser sand (which may work in the Traveller universe, but not in reality). There isn’t really a way to directly interfere with [c], the weapon’s aiming precision (again, short of a saboteur), though one can do so indirectly by decreasing the target’s signature, increasing the range, or jamming the firing ship’s targeting sensors to degrade their targeting solution. A toy estimate of how range and evasion feed into [a] follows below.
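As a crude illustration of factor [a] (not a real fire-control model, just kinematics I am assuming for the sketch): by the time a beam arrives, the targeting data is one light-lag stale and the beam spends another light-lag in flight, so an evading target can be anywhere within a displacement of roughly ½·a·(2R/c)².

```python
C = 3.0e8  # speed of light, m/s

def miss_radius(accel, range_m):
    """Worst-case lateral displacement of an evading target between the
    moment its position was sensed and the moment the beam arrives.
    Toy model: stale light plus beam flight time, t = 2R/c."""
    t = 2 * range_m / C
    return 0.5 * accel * t**2

# Example: target pulling ~3 g (30 m/s^2) at one light-second (3e8 m).
print(f"possible displacement: {miss_radius(30.0, 3.0e8):.0f} m")  # ~60 m
```

Sixty meters of positional uncertainty against a beam spot a fraction of a meter across is why the text treats evasion and range as the heart of the targeting problem.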

Finally, while one cannot do much about [d], the weapon’s rate of fire, [e], the lethality of a given shot, can be effectively reduced by rendering harmless the shots that actually hit. This is done by armor, point defense, and science-fictional force fields.

If the pressurized habitable section of your warship were one single area, a hull breach would depressurize the entire ship (I was going to recount the ancient joke about “why is a virgin like a balloon”, but luckily good sense intervened). A prudent warship design would use airtight bulkheads to divide the interior of the pressurized section into separate areas. This comes under the heading of “not keeping all your eggs in one basket”. The keyword is redundancy.

For the same reason, you’d want back-up life-support systems, power plants, control rooms, and other vital components, and these duplicate systems should be located in widely separated parts of the ship; otherwise a single lucky enemy shot could take both of them out. Even in the non-pressurized section, bulkheads can help contain the destructive effects of hostile weapons fire, so an explosive warhead, with any luck, will merely damage the interior of one compartment instead of gutting the entire interior of the ship.

Recently a discussion about “armor belts” and the durability of space warships cropped up around me. This got me thinking about compartments and how they’d be an integral part of a ship’s survival. Modern naval vessels are divided into compartments to make them more survivable. Compare a naval frigate to a main battle tank. A tank is basically one compartment: breach its (very thick) armor and you wreck the tank, since the hit will usually kill the entire crew and/or destroy the internal systems. A frigate, however, has multiple compartments. Breach the hull of the frigate and, while you might wreck one compartment, the entire ship will still float and will often still be able to fight. You have to wreck many compartments, or very specific compartments, in order to mission-kill the ship. It seems to me this vital part of naval design would not be overlooked in space warship design. Beyond the obvious benefit of making it easier to control atmospheric leaks, a space warship built with many compartments that can be isolated would gain a structural benefit in combat.

Now, compartments would be worthless if one hit could completely disable vital systems like life support or command-and-control. Thus all these systems would be distributed across the ship, with multiple redundancies: if you lose a compartment with life-support systems, you have others to fall back on, and having the main CIC compartment destroyed will not totally eliminate your ability to control the ship. This is standard for real-world navy ships. Engine systems and command rooms (bridges, CICs, etc.) would have secondary locations, kept manned in battle in case the main compartments are destroyed.

This is also why those compartments would be buried as deep inside the ship as possible; no sense in making things easy for your enemy. True, on modern wet-navy warships bridges are still mostly at the highest point of the ship, but that is mainly to facilitate visual tracking and identification. In space, you cannot see the enemy with the naked eye anyway, so you might as well put your command centers where the enemy has to destroy the entire ship to get at them.

Armor is a shell of strong material encasing and protecting your tinfoil spacecraft. Unfortunately, as a general rule, armor is quite massive, so it really cuts into your payload allowance.

Basically, the energy required to damage a surface is measured in joules/cm². If you exceed that value you do damage; otherwise you fail. Keep in mind that a joule is the same thing as a watt-second. There are three ways that weapon energy damages a surface: thermal kill, impulse kill, and drilling. Thermal kill destroys a surface by superheating it; impulse kill destroys it by thermal shock. In the calculations for the SDI, the energy needed to thermal-kill a flimsy Soviet missile is about 1 to 10 kilojoules/cm² (up to 100 MJ/m²), deposited over a period of a second. The same energy deposited over a millionth of a second is required for an impulse kill. Since the laser beam tends to be meters wide, the total beam energy runs to hundreds of megajoules.

However, neither thermal kill nor impulse kill works very well against armor, so we use the third method: drilling. The energy required to drill through an object is within a factor of 2 or so of that object’s heat of vaporization. There are also two other limits: the maximum aspect ratio of the hole is usually less than 50:1, and the drilling speed, for efficient drilling, is limited to about 1 meter per second (depending on the material).

  

Therefore, the best anti-laser armor is the material with the highest vaporization energy for its mass. The best candidate is some form of carbon, at 29.6 kilojoules/gram. You do not want a form that is soft or easily powdered, or the vapor action under laser impact will blow out flakes of armor, allowing the laser to penetrate much faster. Steel has a higher vaporization energy, but it masses more as well.

Under laboratory conditions, if an armor layer were 5 g/cm² of carbon, burning through a 1 cm² (1.13 cm diameter) spot of armor would take about 148 kilojoules and 20 milliseconds. An AV:T laser cannon delivering 50 megajoules could burn through some 330 such armor layers in a few seconds, under laboratory conditions (i.e., enough layers to burn through the entire ship the long way).

However, under combat conditions there is no way one could focus the laser down that small and keep it on the same spot on the target ship for multiple seconds.

It would be better to use a beam focused down to a larger 100 cm² spot (11.3 cm diameter). Granted, the beam energy required to penetrate each layer jumps from 148 kilojoules to about 15 megajoules, but now an uncertainty of up to 5 meters per second in the target’s velocity no longer matters.
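The arithmetic in the last few paragraphs is easy to check. A minimal sketch, assuming only the stated 29.6 kJ/g vaporization energy and 5 g/cm² layers:

```python
import math

E_VAP_CARBON  = 29.6e3   # J/g, heat of vaporization of carbon
AREAL_DENSITY = 5.0      # g/cm^2 of carbon armor per layer

def burn_energy(spot_area_cm2):
    """Energy (J) to vaporize one 5 g/cm^2 carbon layer over a spot."""
    return AREAL_DENSITY * spot_area_cm2 * E_VAP_CARBON

def spot_diameter(area_cm2):
    return 2 * math.sqrt(area_cm2 / math.pi)

# 1 cm^2 spot: ~148 kJ per layer, 1.13 cm diameter.
print(burn_energy(1.0), spot_diameter(1.0))
# A 50 MJ shot could pierce ~338 such layers.
print(50e6 / burn_energy(1.0))
# 100 cm^2 spot (11.3 cm diameter): ~14.8 MJ per layer, matching the
# ~15 MJ figure quoted above for a combat-realistic spot size.
print(burn_energy(100.0), spot_diameter(100.0))
```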

Of course, if price is no object, you can do better than carbon. Boron has a vaporization energy of 45.3 kilojoules/gram and is only slightly denser than carbon. Expensive, though.    

A 1984 paper on strategic missile defense suggested that your average ICBM would require about 10 kilojoules/cm² to kill. This would rise to 20 to 30 kilojoules/cm² with ablative armor, and it would triple again if the ICBM were spinning on its long axis, since the laser couldn’t dwell on the same spot 100% of the time.

As a side note, a Whipple shield is very effective at stopping hypervelocity weapons. With kinetic weapons at closing velocities in excess of 10 km/sec, you’re getting into the realm where armor matters less than blow-through. For armor, you want something that will resist being turned into a plasma for as long as possible, followed by gaps of vacuum to make it a Whipple shield.

Anti-radiation armor is discussed here.    

In science fiction movies and television, we have never really seen all of these features at once. Ironically, Star Trek managed to get the distributed-systems part correct: we eventually even saw that Federation starships had “battle bridges” to provide emergency control should the main bridge be damaged. But Star Trek has utterly failed to put the bridge in a defended position or to show proper compartments in its designs (as David Gerrold noted, that silly bridge perched on the saucer top of the Starship Enterprise would have been shot off a long time ago). Apparently they rely on their handwavium deflector shields to do the job, which is great until you run out of power. Battlestar Galactica came pretty close, though: the ship’s systems are distributed, the ship itself is compartmentalized, and it has a bridge buried deep in the hull. We just never see redundant engine rooms or command centers, which is probably more a failing of the script writers than of the design. In novels we do see this idea used properly; the Honorverse novels showcase the benefits of compartmentalization in a very obvious and graphic form in nearly every book.

Creation of “negative” mass is the key to success for advanced alien and future human civilizations

The phrase may sound weird to conventional physicists, but it is claimed to be the key if future human civilization is to compete with those of advanced aliens. All the dilemmas of the physical universe could be brought under control once we learn to convert masses artificially into “negative masses”.

The traditional concept of mass comes from quarks bound into protons and neutrons by massless gluons; most of a proton’s mass would exist even if the motion of its quarks were taken away. The intrinsic masses of the quarks and the electrons come from their coupling to the Higgs field. There is another component of mass, the Lightest Supersymmetric Particle (LSP), the dark matter particle thought to have formed through massive decay mechanisms at the start of our physical universe with the big bang. This is where traditional physics stops. And that is where advanced alien civilizations edge us in terms of technical knowledge and capabilities.

Think about the motion of the quarks in an atom resulting in mass. Also think about dark matter, which can cause anti-gravity effects. Now extend the two into something common and you get the clue: it may be possible to create artificial “negative mass” by generating reverse angular momentum of the quarks in atoms. Obviously this cannot be done with traditional particle accelerators; it would need “quark decelerators”.

Once negative mass is created, all the puzzles of time travel, bending space and time, movement in and out of parallel universes – all can be solved instantly.

When you enter a black hole, the biggest challenge is not getting crushed while approaching the core, which acts as a point of singularity for anything with mass. But if, as you enter the black hole, you can accelerate the process of making your mass negative, you can pass through without any problem. The same holds true in a wormhole: once a spaceship enters one, it needs to convert its mass to a negative value. The onboard computers would control the mass factor from positive to negative and back, just as airplanes balance weight during takeoff, flight and landing. Once the mass of a vehicle can be manipulated, travel through wormholes becomes easy. This would enable us to travel not only to different time dimensions but also into parallel universes and beyond.

The biggest challenge in bending space and time comes from creating the “negative mass” components that would let us accelerate, maneuver and stabilize in a black hole or wormhole.

The alien civilizations, on this view, perform these mass conversions all the time. Becoming net massless provides the key to the further technological advancement of our civilization.
Khuri, R. (2003). Dark matter as dark energy. Physics Letters B, 568(1-2), 8-10. DOI: 10.1016/j.physletb.2003.06.051

Time Travel And Interdimensional Voyages

Time travel is no longer regarded as strictly science fiction. For years the concept has been the topic of science fiction novels and movies, and it has been pondered by great scientists throughout history. Einstein’s theories of general and special relativity can be used to show that time travel is possible, and government research experiments have yielded data showing that fast-moving aircraft have traveled slightly into the future. This phenomenon is due to the principle of time dilation, which states that bodies moving at high velocities experience a time that ticks more slowly than the time measured at zero velocity; less time elapses for a moving body than for everything else. Phenomena known as wormholes and closed timelike curves are possible means of time travel into the future and the past. Traveling into the past is a much more difficult task. It has not yet been accomplished, to our knowledge, and its theory involves complicated scenarios of tears in four-dimensional space-time and travel near the speed of light. Obstacles to our hubristic attempts to cheat time include our inability to move anywhere close to the speed of light, and the need for an energy source as powerful as an exploding star. Simply because the proposal of time travel is backed by scientific theory is no reason to expect that it is easily achievable. Numerous arguments have been proposed that would prevent time travel into the past; both common sense and scientific fact can be used to paint scenarios that become serious obstacles. Not to fear: we have all the time in the world to overcome these minor limitations.

Imagine, if you will, that you are one of the people still alive today who was born prior to 1903, when the first airplane took flight. When you were young, the idea of flying would probably have been quite exciting. Some scientists believe that we may presently be living through an identical scenario, except that the exciting thing would not be flight but time travel. Leading scientists believe that our children may live to see the impossible become routine once again. Professor Michio Kaku of the City University of New York believes that space flight may one day unlock the secret of time itself. This would require the development of spacecraft that can travel at speeds on the order of two hundred million meters per second, about four hundred and fifty million miles per hour. Craft traveling at this speed will take us near the speed of light, where time actually slows down; this is what’s known as time dilation. Einstein’s theories predict that the faster a spacecraft moves, the more slowly time ticks inside it. Imagine a rocket ship that takes off from Earth and approaches the speed of light. If we were to watch it from Earth with a very powerful telescope as it traveled away from us, we would see everyone inside the ship as if frozen in time. To us their time would slow down, but to them nothing would change!

This has been measured in the laboratory and in the field using atomic clocks, aircraft, satellites and rockets; it is established that time slows down the faster you move. In 1975 Professor Carroll Alley of the University of Maryland tested Einstein’s theory using two synchronized atomic clocks. One clock was loaded on a plane and flown for several hours, while the other remained on the ground at the air base. Upon return, the clock on board the plane was found to be ever so slightly slower than the one on the ground. This was not due to experimental error, and the result has been repeated numerous times. The difference is even more pronounced for orbiting spacecraft such as the space station, because such objects travel at much greater speeds and for much longer periods than is possible in an airplane. The faster an object moves, the more time is distorted.
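The size of the effect is easy to compute from special relativity’s gamma factor. A minimal Python sketch (the speeds are my own illustrative choices):

```python
import math

C = 2.998e8  # speed of light, m/s

def dilation(v, proper_seconds):
    """Extra time (s) elapsing for a stationary observer while a clock
    moving at speed v records proper_seconds (special relativity)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C)**2)
    return proper_seconds * (gamma - 1.0)

ten_hours = 10 * 3600.0
print(dilation(250.0, ten_hours))   # airliner: ~1e-8 s (tens of ns)
print(dilation(7.7e3, ten_hours))   # orbital speed: ~1e-5 s
print(dilation(2.0e8, ten_hours))   # 2e8 m/s: over three hours
```

At airliner speeds the offset over a ten-hour flight is tens of nanoseconds, exactly the scale atomic clocks resolve; at two thirds of light speed, ten hours of proper time leaves the stay-at-home clock more than three hours ahead.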

Now that we know it is possible to travel into the future by moving at great speeds, the next problem is how to travel a respectable distance in time without having to sit in a fast-moving spaceship for years. This problem is solved by the theoretical existence of what are known as closed timelike curves and wormholes.

Einstein’s special and general theories of relativity combine three-dimensional space with time to form four-dimensional space-time. Space-time consists of points or events that represent a particular place at a particular time. Your entire life thus forms a sort of twisting, turning worm in space-time! The tip of the worm’s tail would be your birth, and its head the event of your death; the line the worm traces out is called your worldline. Einstein predicted that worldlines can be distorted by massive bodies such as black holes; this is essentially the origin of gravity, remember. Now, if an object’s worldline were distorted so much as to form a loop that connected with a point on itself representing an earlier place and time, it would create a corridor to the past. Picture a loop-the-loop track that smashes into itself as it comes back around. Such a closed loop is called a closed timelike curve; “timelike” means that the body under consideration experiences time increasing in one direction along its worldline. Princeton University physicist John A. Wheeler and Kip S. Thorne of Caltech have shown that a closed timelike curve is one way to create a kind of shortcut through space-time called a wormhole.

Wormholes are holes in the fabric of four-dimensional space-time that are connected, but which originate at different points in space and at different times. They provide a quick path between two different locations in space and time; this is the four-dimensional equivalent of pinching two pieces of a folded sheet of paper together to make contact across the gap. Distortions in space cause the points separated by the gap to bulge out and connect, forming a wormhole through which something could instantaneously travel to a faraway place and time. No more traveling in a rocket ship for years to get into the future! This is essentially what Lewis Carroll wrote about in Through the Looking-Glass: Alice’s looking glass was a wormhole connecting her home in Oxford with Wonderland, and all she had to do was climb into it to emerge on the other side of forever. In reality, however, it would require a much more elaborate scheme to create a wormhole connecting two different points in space-time. First it would require the construction of two identical machines, each consisting of two huge parallel metal plates electrically charged with unbelievable amounts of energy. When the machines are placed in proximity to each other, the enormous energy, about that of an exploding star, would rip a hole in space-time and connect the two machines via a wormhole. This is possible in principle, and the beginnings of it have been illustrated in the lab by what is known as the Casimir effect. The next task would be to place one of these machines on a craft that could travel at close to the speed of light. The craft would take its machine on a journey while it was still connected to the one on Earth via the wormhole. Then a simple step into the wormhole would transport you to a different place and a different time.

Wormholes and closed timelike curves appear to be the main ways that time travel into the past would be possible. The limitation is that it would be impossible to travel back to a time before the machine was originally created. And although the aforementioned theories of general relativity are consistent with closed timelike curves and wormholes, they say nothing about the actual process of traveling through them. Quantum mechanics can be used to model possible scenarios, and it yields the probability of each possible outcome. Quantum mechanics, used in the context of time travel, has a so-called many-universes interpretation, first proposed by Hugh Everett III in 1957. It encompasses the idea that if something can physically happen, it does, in some universe. Everett says that our reality is only one of many equally valid universes: there is a collection of universes, called a multiverse, and the multiverse contains copies of every person, structure, and atom. For every possible event, every possible outcome is said to be played out in a different universe. This interpretation of quantum mechanics is quite controversial, but it does suggest that it may be impossible to travel backward in time to our own universe or dimension; one must consider which past would be the destination of a time traveler. The notion that time travel could link parallel universes has been anticipated in science fiction novels and is even depicted in the popular television series “Sliders”, in which a “sliding machine” creates a wormhole linking two parallel dimensions. Each week the group of “sliders” jump into the wormhole and emerge in the same place and time, but in a different dimension, where they can run into their other selves and experience a reality whose society has turned out vastly different from their own. The interesting thing is that this stuff of science fiction can be deduced from existing physical theory: all the claims made about time travel are consequences of basic scientific laws and standard quantum mechanics.

The proposal of time travel is backed by scientific theory, but that is not enough to make it realistically possible; numerous arguments have been proposed that would prevent time travel into the past. A major argument against it is the autonomy principle, better known as the grandfather paradox. The paradox arises when a time traveler goes back in time to meet his or her grandfather. Upon your introduction, it would be possible to change the course of events that led to your grandfather and grandmother marrying. You could tell him a family secret to convince him you are who you say you are, and he might proceed to tell his soon-to-be wife, who might in turn doubt his sanity and have him committed. Thus your grandparents would never have your mother, and therefore you couldn’t be born! But then how could you ever have existed to travel back in time, if you don’t exist? You would have had to have been created autonomously, out of nothing. The next question would be: if your mother was never born, then when you return to the future, would anything you did in your life exist? Or would you, your friends, your home and so on never have existed? This is clearly an inconsistency paradox that would seem to rule out time travel, yet, interestingly enough, the laws of physics do not forbid such excursions. The multiverse concept eradicates the problem of the autonomy principle, because it allows time travel to the past, but to a different universe: you would meet the person who was your grandfather in your universe but who never married your grandmother in his. In the universe you traveled to, you never existed.

Another argument for impossibility is the chronology principle. It states that time travelers could bring information to the past that could be used to create new ideas and products, involving no creative energy on the part of the “inventor”. Imagine that Pablo Ruiz y Picasso, the most influential and successful artist of the 20th century, were to travel back in time to meet his younger self. Assuming he stays in his correct universe, he could give his younger self his portfolio containing copies of his paintings, sculptures, graphic art, and ceramics. The young Picasso could then meticulously copy the reproductions, profoundly and irrevocably affecting the future of art. Thus the reproductions would exist because they were copied from the originals, and the originals would exist because they were copied from the reproductions: no creative energy would ever have been expended to create the masterpieces! This chronology principle rules out travel into the past.

A notion that was once nothing more than science fiction is now a concept approaching reality. Einstein’s theories of general and special relativity can be used to show that time travel is possible, and research has shown that fast-moving craft travel slightly into the future. Time dilation is the easiest method, because it merely requires high-velocity motion. Phenomena known as wormholes and closed timelike curves are possible means of time travel into the future and the past, though traveling into the past is much more difficult: its theory involves complicated scenarios of tears in four-dimensional space-time, energy equivalent to that of an exploding star, and travel near the speed of light. Both common sense and scientific fact can be used to paint scenarios that become serious obstacles, yet even these hindrances may one day be explained away. If the multiverse concept is reality, then most present ideas of time travel are based on a false picture. And if time travel is completely impossible, the reason has yet to be discovered.


Luminet, J.-P. (2009). Time, Topology and the Twin Paradox. arXiv: 0910.5847v1

Myths of Strange Matter And Black Holes at the LHC

The LHC, like other particle accelerators, recreates under controlled laboratory conditions the natural phenomenon of cosmic rays, enabling it to be studied in more detail. Cosmic rays are particles produced in outer space, some of which are accelerated to energies far exceeding those of the LHC. The energy and the rate at which they reach the Earth’s atmosphere have been measured in experiments for some 70 years. Over billions of years, Nature has already generated on Earth as many collisions as about a million LHC experiments, and the planet still exists. Astronomers observe an enormous number of larger astronomical bodies throughout the Universe, all of which are also struck by cosmic rays. The Universe as a whole conducts more than ten million million LHC-like experiments per second. The possibility of any dangerous consequences contradicts what astronomers see: stars and galaxies still exist.

Microscopic black holes

The microscopic black holes said to be formed at the LHC, and the related doomsday scenario, make up one of the most widely popularized myths; I can remember when news channels were highlighting them. So how did this notion come into existence?

Nature forms black holes when certain stars, much larger than our Sun, collapse on themselves at the end of their lives. They concentrate a very large amount of matter in a very small space. Speculations about microscopic black holes at the LHC refer to particles produced in the collisions of pairs of protons, each of which has an energy comparable to that of a mosquito in flight. Astronomical black holes are much heavier than anything that could be produced at the LHC.

According to the well-established properties of gravity, described by Einstein’s relativity, it is impossible for microscopic black holes to be produced at the LHC. There are, however, some speculative theories that predict their production. All of these theories predict that such black holes would disintegrate immediately; they would therefore have no time to start accreting matter or to cause macroscopic effects.

Although theory predicts that microscopic black holes decay rapidly, even hypothetical stable black holes can be shown to be harmless by studying the consequences of their production by cosmic rays. Whilst collisions at the LHC differ from cosmic-ray collisions with astronomical bodies like the Earth, in that new particles produced in LHC collisions tend to move more slowly than those produced by cosmic rays, one can still demonstrate their safety. The specific reasons for this depend on whether the black holes are electrically charged or neutral. Many stable black holes would be expected to be electrically charged, since they are created by charged particles. In this case they would interact with ordinary matter and be stopped while traversing the Earth or Sun, whether produced by cosmic rays or the LHC. The fact that the Earth and Sun are still here rules out the possibility that cosmic rays or the LHC could produce dangerous charged microscopic black holes. If stable microscopic black holes had no electric charge, their interactions with the Earth would be very weak. Those produced by cosmic rays would pass harmlessly through the Earth into space, whereas those produced by the LHC could remain on Earth. However, there are much larger and denser astronomical bodies than the Earth in the Universe. Black holes produced in cosmic-ray collisions with bodies such as neutron stars and white dwarf stars would be brought to rest. The continued existence of such dense bodies, as well as the Earth, rules out the possibility of the LHC producing any dangerous black holes.

Strangelets

Strangelet is the term given to a hypothetical microscopic lump of ‘strange matter’ containing almost equal numbers of particles called up, down and strange quarks. According to most theoretical work, strangelets should change into ordinary matter within a thousand-millionth of a second. But could strangelets coalesce with ordinary matter and change it to strange matter? This question was first raised before the start-up of the Relativistic Heavy Ion Collider (RHIC) in 2000 in the United States. A study at the time showed that there was no cause for concern, and RHIC has now run for eight years, searching for strangelets without detecting any. At times the LHC will run with beams of heavy nuclei, just as RHIC does. The LHC’s beams will have more energy than RHIC’s, but this makes it even less likely that strangelets could form. It is difficult for strange matter to stick together at the high temperatures produced by such colliders, rather as ice does not form in hot water. In addition, quarks will be more dilute at the LHC than at RHIC, making it more difficult to assemble strange matter. Strangelet production at the LHC is therefore less likely than at RHIC, and experience there has already validated the arguments that strangelets cannot be produced.

Vacuum bubbles

There have been speculations that the Universe is not in its most stable configuration, and that perturbations caused by the LHC could tip it into a more stable state, called a vacuum bubble, in which we could not exist. If the LHC could do this, then so could cosmic-ray collisions. Since such vacuum bubbles have not been produced anywhere in the visible Universe, they will not be made by the LHC.

Magnetic monopoles

Magnetic monopoles are hypothetical particles with a single magnetic charge, either a north pole or a south pole. Some speculative theories suggest that, if they do exist, magnetic monopoles could cause protons to decay. These theories also say that such monopoles would be too heavy to be produced at the LHC. Nevertheless, if the magnetic monopoles were light enough to appear at the LHC, cosmic rays striking the Earth’s atmosphere would already be making them, and the Earth would very effectively stop and trap them. The continued existence of the Earth and other astronomical bodies therefore rules out dangerous proton-eating magnetic monopoles light enough to be produced at the LHC.

Other aspects of LHC safety:

Concern has recently been expressed that a ‘runaway fusion reaction’ might be created in the LHC carbon beam dump. The safety of the LHC beam dump had previously been reviewed by the relevant regulatory authorities of the CERN host states, France and Switzerland. The specific concerns expressed more recently have been addressed in a technical memorandum by Assmann et al. As they point out, fusion reactions can be maintained only in material compressed by some external pressure, such as that provided by gravity inside a star, a fission explosion in a thermonuclear device, a magnetic field in a Tokamak, or converging laser or particle beams in the case of inertial fusion. In the case of the LHC beam dump, it is struck once by the beam coming from a single direction. There is no countervailing pressure, so the dump material is not compressed, and no fusion is possible.

Concern has been expressed that a ‘runaway fusion reaction’ might be created in a nitrogen tank inside the LHC tunnel. There are no such nitrogen tanks. Moreover, the arguments in the previous paragraph prove that no fusion would be possible even if there were.

Finally, concern has also been expressed that the LHC beam might somehow trigger a ‘Bose-Nova’ in the liquid helium used to cool the LHC magnets. A study by Fairbairn and McElrath has clearly shown there is no possibility of the LHC beam triggering a fusion reaction in helium.

We recall that ‘Bose-Novae’ are known to be related to chemical reactions that release an infinitesimal amount of energy by nuclear standards. We also recall that helium is one of the most stable elements known, and that liquid helium has been used in many previous particle accelerators without mishap. The facts that helium is chemically inert and has no nuclear spin imply that no ‘Bose-Nova’ can be triggered in the superfluid helium used in the LHC.

Comments on the papers by Giddings and Mangano, and by LSAG

The papers by Giddings and Mangano and LSAG demonstrating the safety of the LHC have been studied, reviewed and endorsed by leading experts from the CERN Member States, Japan, Russia and the United States, working in astrophysics, cosmology, general relativity, mathematics, particle physics and risk analysis, including several Nobel Laureates in Physics. They all agree that the LHC is safe.

The paper by Giddings and Mangano has been peer-reviewed by anonymous experts in astrophysics and particle physics and published in the professional scientific journal Physical Review D. The American Physical Society chose to highlight this as one of the most significant papers it has published recently, commissioning a commentary by Prof. Peskin from the Stanford Linear Accelerator Laboratory in which he endorses its conclusions. The Executive Committee of the Division of Particles and Fields of the American Physical Society has issued a statement endorsing the LSAG report.

The LSAG report has been published by the UK Institute of Physics in its publication Journal of Physics G. The conclusions of the LSAG report were endorsed in a press release that announced this publication.

The conclusions of LSAG have also been endorsed by the Particle and Nuclear Physics Section (KET) of the German Physical Society. A translation into German of the complete LSAG report may be found on the KET website, as well as here. (A translation into French of the complete LSAG report is also available.)

Thus, the conclusion that LHC collisions are completely safe has been endorsed by the three professional societies of physicists that have reviewed it, which rank among the most highly respected in the world.

The overwhelming majority of physicists agree that microscopic black holes would be unstable, as predicted by basic principles of quantum mechanics. As discussed in the LSAG report, if microscopic black holes can be produced by the collisions of quarks and/or gluons inside protons, they must also be able to decay back into quarks and/or gluons. Moreover, quantum mechanics predicts specifically that they should decay via Hawking radiation.

Nevertheless, a few papers have suggested that microscopic black holes might be stable. The paper by Giddings and Mangano and the LSAG report analyzed very conservatively the hypothetical case of stable microscopic black holes and concluded that even in this case there would be no conceivable danger. Another analysis with similar conclusions has been documented by Dr. Koch, Prof. Bleicher and Prof. Stoecker of Frankfurt University and GSI, Darmstadt, who conclude:

“We discussed the logically possible black hole evolution paths. Then we discussed every single outcome of those paths and showed that none of the physically sensible paths can lead to a black hole disaster at the LHC.”

Professor Roessler (who has a medical degree and was formerly a chaos theorist in Tuebingen) also raised doubts about the existence of Hawking radiation. His ideas have been refuted by Profs. Nicolai (Director at the Max Planck Institute for Gravitational Physics – Albert-Einstein-Institut – in Potsdam) and Giulini, whose report (see here for the English translation, and here for further statements) points to his failure to understand general relativity and the Schwarzschild metric, and his reliance on an alternative theory of gravity that was disproven in 1915. Their verdict:

“[Roessler’s] argument is not valid; the argument is not self-consistent.”

The paper of Prof. Roessler has also been criticized by Prof. Bruhn of the Darmstadt University of Technology, who concludes that:

“Roessler’s misinterpretation of the Schwarzschild metric [renders] his further considerations … null and void. These are not papers that could be taken into account when problems of black holes are discussed.”

A hypothetical scenario for possibly dangerous metastable black holes has recently been proposed by Dr. Plaga. The conclusions of this work have been shown to be inconsistent in a second paper by Giddings and Mangano, where it is also stated that the safety of this class of metastable black hole scenarios is already established by their original work. Finally, I can conclude that these myths have no basis in reality.

Koch, B., Bleicher, M., & Stöcker, H. (2009). Exclusion of black hole disaster scenarios at the LHC. Physics Letters B, 672(1), 71-76. DOI: 10.1016/j.physletb.2009.01.003

 
Panagiotou, A., & Katsas, P. (2007). Searching for Strange Quark Matter with the CMS/CASTOR Detector at the LHC. Nuclear Physics A, 782(1-4), 383-391. DOI: 10.1016/j.nuclphysa.2006.10.020