Black Holes Serving as Particle Accelerators

[Image Details: A particle collision at the RHIC. Image via Wikipedia]

A black hole particle accelerator! It sounds strange, but it is not as strange as it may first appear. Particle accelerators are devices used to raise particles to very high energy levels.

Beams of high-energy particles are useful for both fundamental and applied research in the sciences, and also in many technical and industrial fields unrelated to fundamental research. It has been estimated that there are approximately 26,000 accelerators worldwide. Of these, only about 1% are the research machines with energies above 1 GeV (that are the main focus of this article), about 44% are for radiotherapy, about 41% for ion implantation, about 9% for industrial processing and research, and about 4% for biomedical and other low-energy research.

For the most basic inquiries into the dynamics and structure of matter, space, and time, physicists seek the simplest kinds of interactions at the highest possible energies. These typically entail particle energies of many GeV, and the interactions of the simplest kinds of particles: leptons (e.g. electrons and positrons) and quarks for the matter, or photons and gluons for the field quanta. Since isolated quarks are experimentally unavailable due to color confinement, the simplest available experiments involve the interactions of, first, leptons with each other, and second, of leptons with nucleons, which are composed of quarks and gluons. To study the collisions of quarks with each other, scientists resort to collisions of nucleons, which at high energy may be usefully considered as essentially 2-body interactions of the quarks and gluons of which they are composed. Thus elementary particle physicists tend to use machines creating beams of electrons, positrons, protons, and anti-protons, interacting with each other or with the simplest nuclei (e.g., hydrogen or deuterium) at the highest possible energies, generally hundreds of GeV or more. Nuclear physicists and cosmologists may use beams of bare atomic nuclei, stripped of electrons, to investigate the structure, interactions, and properties of the nuclei themselves, and of condensed matter at extremely high temperatures and densities, such as might have occurred in the first moments of the Big Bang. These investigations often involve collisions of heavy nuclei – of atoms like iron or gold – at energies of several GeV per nucleon.

[Image Details: A typical Cyclotron]

Currently, we accelerate particles to high energy levels by increasing their kinetic energy with very strong electromagnetic fields; the particles are deflected and accelerated according to the Lorentz force. However, such particle accelerators have limitations: we cannot push particles to arbitrarily high energy levels, and a great deal of distance must be covered before a particle acquires the desired speed.
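As a concrete sketch of that mechanism, the force a conventional accelerator's fields exert on a charge, and the resulting cyclotron frequency, can be computed directly (the field and velocity values below are illustrative, not those of any particular machine):

```python
import numpy as np

q = 1.602176634e-19   # proton charge [C]
m = 1.67262192e-27    # proton mass [kg]

def lorentz_force(q, E, v, B):
    """F = q (E + v x B): the force an accelerator's fields exert on a charge."""
    return q * (np.asarray(E, dtype=float) + np.cross(v, B))

# A proton moving at 1e6 m/s along x through a 1 T magnetic field along z
# feels a force along -y, which bends it onto a circular orbit:
F = lorentz_force(q, E=[0.0, 0.0, 0.0], v=[1e6, 0.0, 0.0], B=[0.0, 0.0, 1.0])

# Non-relativistic cyclotron frequency f = qB / (2 pi m), the rate at which
# a cyclotron must alternate its accelerating voltage:
f_cyc = q * 1.0 / (2.0 * np.pi * m)
print(F)       # ~ [0, -1.6e-13, 0] N
print(f_cyc)   # ~ 1.5e7 Hz at 1 T
```

At relativistic speeds the orbital frequency drifts with the particle's energy, which is one reason machines like the LHC must ramp their fields in step with the beam.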

This can be appreciated from the astounding specifications of the LHC. The precise circumference of the LHC accelerator is 26,659 m, with a total of 9,300 magnets inside. Not only is the LHC the world’s largest particle accelerator, just one-eighth of its cryogenic distribution system would qualify as the world’s largest fridge. It can accelerate particles up to a collision energy of 14.0 TeV.

It is pretty obvious that accelerating particles much beyond this energy level would be almost impossible with machines of this kind.

An advanced civilization at the development level of Type III or Type IV would more likely choose to exploit black holes rather than engineer an LHC or Tevatron at astrophysical scale. Kaluza-Klein black holes are excellent for this purpose: they are very similar to Kerr black holes except that they are charged.

Kerr Black Holes

Kerr spacetime is the unique explicitly defined model of the gravitational field of a rotating star. The spacetime is fully revealed only when the star collapses, leaving a black hole — otherwise the bulk of the star blocks exploration. The qualitative character of Kerr spacetime depends on its mass and its rate of rotation, the most interesting case being when the rotation is slow. (If the rotation stops completely, Kerr spacetime reduces to Schwarzschild spacetime.)

The existence of black holes in our universe is generally accepted — by now it would be hard for astronomers to run the universe without them. Everyone knows that no light can escape from a black hole, but convincing evidence for their existence is provided by their effect on their visible neighbors, as when an observable star behaves like one of a binary pair but no companion is visible.

Suppose that, travelling in our spacecraft, we approach an isolated, slowly rotating black hole. It can then be observed as a black disk against the stars of the background sky. Explorers familiar with Schwarzschild black holes will refuse to cross its boundary, the horizon. First of all, return trips through a horizon are never possible, and in the Schwarzschild case there is a more immediate objection: after the passage, any material object will, in a fraction of a second, be devoured by a singularity in spacetime.

If we dare to penetrate the horizon of this Kerr black hole we will find … another horizon. Behind this, the singularity in spacetime now appears, not as a central focus, but as a ring — a circle of infinite gravitational forces. Fortunately, this ring singularity is not quite as dangerous as the Schwarzschild one — it is possible to avoid it and enter a new region of spacetime, by passing through either of two “throats” bounded by the ring (see The Big Picture).

In the new region, escape from the ring singularity is easy because the gravitational effect of the black hole is reversed — it now repels rather than attracts. As distance increases, this negative gravity weakens, just as on the positive side, until its effect becomes negligible.

A quick departure may be prudent, but will prevent discovery of something strange: the ring singularity is the outer equator of a spatial solid torus that is, quite simply, a time machine. Travelling within it, one can reach arbitrarily far back into the past of any entity inside the double horizons. In principle you can arrange a bridge game, with all four players being you yourself, at different ages. But there is no way to meet Julius Caesar or your (predeparture) childhood self since these lie on the other side of two impassable horizons.

This rough description is reasonably accurate within its limits, but its apparent completeness is deceptive. Kerr spacetime is vaster — and more symmetrical. Outside the horizons, it turns out that the model described above lacks a distant past, and, on the negative gravity side, a distant future. Harder to imagine are the deficiencies of the spacetime region between the two horizons. This region definitely does not resemble the Newtonian 3-space between two bounding spheres, furnished with a clock to tell time. In it, space and time are turbulently mixed. Pebbles dropped experimentally there can simply vanish in finite time — and new objects can magically appear.

Recently, an interesting observation was made: black holes can accelerate particles up to unlimited energies Ecm in the centre of mass frame. These results were obtained for the Kerr metric (and were also extended to the extremal Kerr-Newman one). It was demonstrated that the effect in question exists in a generic black hole background (so the black hole can be surrounded by matter) provided the black hole is rotating. Thus, rotation seemed to be an essential part of the effect. It is also necessary that one of the colliding particles have the critical angular momentum L1 = E1/ωH, where E1 is its energy and ωH is the angular velocity of a generic rotating black hole. If ωH → 0, then L1 → ∞, so for any particle with finite L the effect becomes impossible. For example, in the Schwarzschild space-time, the ratio Ecm/m (m is the mass of the particles) is finite and cannot exceed 2√5 for particles coming from infinity.
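That Schwarzschild bound can be checked numerically. The sketch below (geometric units G = c = 1, black hole mass M = 1) uses the standard relation E_cm² = 2m²(1 − g_μν u₁^μ u₂^ν) for two equal-mass particles on ingoing equatorial geodesics; the marginally bound case (energy per unit mass e = 1) with the extreme allowed angular momenta |l| = 4M reproduces the 2√5 ceiling:

```python
import math

M = 1.0  # black hole mass (geometric units, G = c = 1)

def ecm_over_m(r, l1, l2, e1=1.0, e2=1.0):
    """Centre-of-mass energy per particle mass for two equal-mass particles
    colliding at radius r in Schwarzschild spacetime. e and l are energy and
    angular momentum per unit mass; both particles are taken to be ingoing."""
    f = 1.0 - 2.0 * M / r
    # Radial 4-velocity magnitudes from the geodesic equations:
    ur1 = math.sqrt(e1**2 - f * (1.0 + l1**2 / r**2))
    ur2 = math.sqrt(e2**2 - f * (1.0 + l2**2 / r**2))
    # g_uv u1^u u2^v; the product of two ingoing radial velocities is positive:
    u1_dot_u2 = -(e1 * e2) / f + (ur1 * ur2) / f + l1 * l2 / r**2
    return math.sqrt(2.0 * (1.0 - u1_dot_u2))

# A collision just outside the horizon (r = 2M) with opposite extreme
# angular momenta approaches, but never exceeds, 2*sqrt(5) ≈ 4.472:
print(ecm_over_m(2.0 + 1e-6, 4.0, -4.0))
```

Far from the horizon the same pair of particles collides at a much lower centre-of-mass energy, which is what makes the near-horizon region the interesting place for such an accelerator.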

Meanwhile, the role played by angular momentum and rotation is sometimes effectively modeled by electric charge and potential in spherically-symmetric space-times. So one may ask: can we achieve infinite acceleration without rotation, simply due to the presence of electric charge? Apart from its intrinsic interest, a positive answer would also be important because spherically-symmetric space-times are usually much simpler and admit much more detailed investigation, while mimicking the relevant features of rotating space-times. In a research paper, Oleg B. Zaslavskii showed that the centre of mass energy can indeed reach very high levels, and may become almost infinite before collision. Following his analysis and energy equations, the answer is ‘Yes!’.
A similar conclusion was also reached by Pu Zion Mao in the research paper ‘Kaluza-Klein Black Holes Serving as Particle Accelerators’.
Consider two massive particles, carrying angular momenta L1 and L2, falling into a black hole.

Plotting the centre of mass energy against r near the horizon of a Kaluza-Klein black hole (Fig. 1 and Fig. 2) shows that there exists a critical angular momentum Lc = 2μ/√(1-ν²) for the geodesic of a particle to reach the horizon. If L > Lc, the geodesic never reaches the horizon. On the other hand, if the angular momentum is too small, the particle falls into the black hole and the CM energy of the collision is limited. However, when L1 or L2 takes the critical value L = 2μ/√(1-ν²), the CM energy is unlimited, with no restriction on the angular momentum per unit mass J/M of the black hole.
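As a small illustration of that critical value (a sketch only: μ and ν are parameters of the Kaluza-Klein metric used in the cited paper, and the numbers below are arbitrary):

```python
import math

def critical_angular_momentum(mu, nu):
    """L_c = 2*mu / sqrt(1 - nu^2): the critical angular momentum quoted
    above, at which a particle's geodesic just reaches the horizon."""
    return 2.0 * mu / math.sqrt(1.0 - nu**2)

# For nu = 0 the critical value reduces to 2*mu; as nu -> 1 it diverges,
# so the fine-tuning required of the infalling particle changes with nu:
print(critical_angular_momentum(1.0, 0.0))   # 2.0
print(critical_angular_momentum(1.0, 0.9))   # ~4.59
```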
It now seems quite plausible that an advanced alien civilization would prefer to use black holes as particle accelerators. However, any such implementation would face constraints of its own.

Negative Energy And Interstellar Travel

Can a region of space contain less than nothing? Common sense would say no; the most one could do is remove all matter and radiation and be left with vacuum. But quantum physics has a proven ability to confound intuition, and this case is no exception. A region of space, it turns out, can contain less than nothing. Its energy per unit volume–the energy density–can be less than zero.

Needless to say, the implications are bizarre. According to Einstein’s theory of gravity, general relativity, the presence of matter and energy warps the geometric fabric of space and time. What we perceive as gravity is the space-time distortion produced by normal, positive energy or mass. But when negative energy or mass–so-called exotic matter–bends space-time, all sorts of amazing phenomena might become possible: traversable wormholes, which could act as tunnels to otherwise distant parts of the universe; warp drive, which would allow for faster-than-light travel; and time machines, which might permit journeys into the past. Negative energy could even be used to make perpetual-motion machines or to destroy black holes. A Star Trek episode could not ask for more.

For physicists, these ramifications set off alarm bells. The potential paradoxes of backward time travel–such as killing your grandfather before your father is conceived–have long been explored in science fiction, and the other consequences of exotic matter are also problematic. They raise a question of fundamental importance: Do the laws of physics that permit negative energy place any limits on its behavior?

We and others have discovered that nature imposes stringent constraints on the magnitude and duration of negative energy, which (unfortunately, some would say) appear to render the construction of wormholes and warp drives very unlikely.

Double Negative

Before proceeding further, we should draw the reader’s attention to what negative energy is not.

It should not be confused with antimatter, which has positive energy. When an electron and its antiparticle, a positron, collide, they annihilate. The end products are gamma rays, which carry positive energy. If antiparticles were composed of negative energy, such an interaction would result in a final energy of zero.

One should also not confuse negative energy with the energy associated with the cosmological constant, postulated in inflationary models of the universe. Such a constant represents negative pressure but positive energy.

The concept of negative energy is not pure fantasy; some of its effects have even been produced in the laboratory. They arise from Heisenberg’s uncertainty principle, which requires that the energy density of any electric, magnetic or other field fluctuate randomly. Even when the energy density is zero on average, as in a vacuum, it fluctuates. Thus, the quantum vacuum can never remain empty in the classical sense of the term; it is a roiling sea of “virtual” particles spontaneously popping in and out of existence [see “Exploiting Zero-Point Energy,” by Philip Yam; SCIENTIFIC AMERICAN, December 1997]. In quantum theory, the usual notion of zero energy corresponds to the vacuum with all these fluctuations.

So if one can somehow contrive to dampen the undulations, the vacuum will have less energy than it normally does – that is, less than zero energy. [See: Casimir Starcraft: Zero Point Energy]

Negative Energy

Space-time distortion is a commonly proposed method for superluminal travel. Such space-time contortions would enable another staple of science fiction as well: faster-than-light travel. Warp drive might appear to violate Einstein’s special theory of relativity. But special relativity says only that you cannot outrun a light signal in a fair race in which you and the signal follow the same route. When space-time is warped, it might be possible to beat a light signal by taking a different route, a shortcut. The contraction of space-time in front of the bubble and the expansion behind it create such a shortcut.

One problem with Alcubierre’s original model is that the interior of the warp bubble is causally disconnected from its forward edge. A starship captain on the inside cannot steer the bubble or turn it on or off; some external agency must set it up ahead of time. To get around this problem, Krasnikov proposed a “superluminal subway,” a tube of modified space-time (not the same as a wormhole) connecting Earth and a distant star. Within the tube, superluminal travel in one direction is possible. During the outbound journey at sublight speed, a spaceship crew would create such a tube. On the return journey, they could travel through it at warp speed. Like warp bubbles, the subway involves negative energy.

Negative energy is so strange that one might think it must violate some law of physics.

Before and after the creation of equal amounts of negative and positive energy in previously empty space, the total energy is zero, so the law of conservation of energy is obeyed. But there are many phenomena that conserve energy yet never occur in the real world. A broken glass does not reassemble itself, and heat does not spontaneously flow from a colder to a hotter body. Such effects are forbidden by the second law of thermodynamics.

This general principle states that the degree of disorder of a system–its entropy–cannot decrease on its own without an input of energy. Thus, a refrigerator, which pumps heat from its cold interior to the warmer outside room, requires an external power source. Similarly, the second law also forbids the complete conversion of heat into work.

Negative energy potentially conflicts with the second law. Imagine an exotic laser, which creates a steady outgoing beam of negative energy. Conservation of energy requires that a byproduct be a steady stream of positive energy. One could direct the negative energy beam off to some distant corner of the universe, while employing the positive energy to perform useful work. This seemingly inexhaustible energy supply could be used to make a perpetual-motion machine and thereby violate the second law. If the beam were directed at a glass of water, it could cool the water while using the extracted positive energy to power a small motor–providing a refrigerator with no need for external power. These problems arise not from the existence of negative energy per se but from the unrestricted separation of negative and positive energy.

Unfettered negative energy would also have profound consequences for black holes. When a black hole forms by the collapse of a dying star, general relativity predicts the formation of a singularity, a region where the gravitational field becomes infinitely strong. At this point, general relativity–and indeed all known laws of physics–are unable to say what happens next. This inability is a profound failure of the current mathematical description of nature. So long as the singularity is hidden within an event horizon, however, the damage is limited. The description of nature everywhere outside of the horizon is unaffected. For this reason, Roger Penrose of Oxford proposed the cosmic censorship hypothesis: there can be no naked singularities, which are unshielded by event horizons.

For special types of charged or rotating black holes– known as extreme black holes–even a small increase in charge or spin, or a decrease in mass, could in principle destroy the horizon and convert the hole into a naked singularity. Attempts to charge up or spin up these black holes using ordinary matter seem to fail for a variety of reasons. One might instead envision producing a decrease in mass by shining a beam of negative energy down the hole, without altering its charge or spin, thus subverting cosmic censorship. One might create such a beam, for example, using a moving mirror. In principle, it would require only a tiny amount of negative energy to produce a dramatic change in the state of an extreme black hole.

[Image Details: Pulses of negative energy are permitted by quantum theory but only under three conditions. First, the longer the pulse lasts, the weaker it must be (a, b). Second, a pulse of positive energy must follow. The magnitude of the positive pulse must exceed that of the initial negative one. Third, the longer the time interval between the two pulses, the larger the positive one must be – an effect known as quantum interest (c).]

Therefore, this might be the scenario in which negative energy is the most likely to produce macroscopic effects.

Fortunately (or not, depending on your point of view), although quantum theory allows the existence of negative energy, it also appears to place strong restrictions – known as quantum inequalities – on its magnitude and duration. The inequalities bear some resemblance to the uncertainty principle. They say that a beam of negative energy cannot be arbitrarily intense for an arbitrarily long time. The permissible magnitude of the negative energy is inversely related to its temporal or spatial extent. An intense pulse of negative energy can last for a short time; a weak pulse can last longer. Furthermore, an initial negative energy pulse must be followed by a larger pulse of positive energy. The larger the magnitude of the negative energy, the nearer must be its positive energy counterpart. These restrictions are independent of the details of how the negative energy is produced. One can think of negative energy as an energy loan. Just as a debt is negative money that has to be repaid, negative energy is an energy deficit.

In the Casimir effect, the negative energy density between the plates can persist indefinitely, but large negative energy densities require a very small plate separation. The magnitude of the negative energy density is inversely proportional to the fourth power of the plate separation. Just as a pulse with a very negative energy density is limited in time, very negative Casimir energy density must be confined between closely spaced plates. According to the quantum inequalities, the energy density in the gap can be made more negative than the Casimir value, but only temporarily. In effect, the more one tries to depress the energy density below the Casimir value, the shorter the time over which this situation can be maintained.
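The inverse-fourth-power dependence quoted above is easy to see numerically. A minimal sketch for ideal parallel plates, using the standard expression ρ = −π²ħc/(720 d⁴):

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant [J s]
c = 2.99792458e8        # speed of light [m/s]

def casimir_energy_density(d):
    """Vacuum energy density between ideal parallel plates a distance d
    apart: rho = -pi^2 * hbar * c / (720 * d^4). Always negative."""
    return -math.pi**2 * hbar * c / (720.0 * d**4)

rho = casimir_energy_density(1e-6)       # plates 1 micrometre apart
rho_half = casimir_energy_density(5e-7)  # halve the gap...
print(rho)               # a small negative energy density [J/m^3]
print(rho_half / rho)    # ...and the magnitude grows 16-fold
```

The steep d⁻⁴ growth is exactly why large negative Casimir energy densities demand such tiny plate separations.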

When applied to wormholes and warp drives, the quantum inequalities typically imply that such structures must either be limited to submicroscopic sizes, or if they are macroscopic the negative energy must be confined to incredibly thin bands. In 1996 we showed that a submicroscopic wormhole would have a throat radius of no more than about 10⁻³² meter. This is only slightly larger than the Planck length, 10⁻³⁵ meter, the smallest distance that has definite meaning. We found that it is possible to have models of wormholes of macroscopic size but only at the price of confining the negative energy to an extremely thin band around the throat. For example, in one model a throat radius of 1 meter requires the negative energy to be confined to a band no thicker than 10⁻²¹ meter, a millionth the size of a proton.

It is estimated that the negative energy required for this size of wormhole has a magnitude equivalent to the total energy generated by 10 billion stars in one year. The situation does not improve much for larger wormholes. For the same model, the maximum allowed thickness of the negative energy band is proportional to the cube root of the throat radius. Even if the throat radius is increased to a size of one light-year, the negative energy must still be confined to a region smaller than a proton radius, and the total amount required increases linearly with the throat size.
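Those scalings can be checked with a few lines of arithmetic (the 10⁻²¹-metre band for a 1-metre throat is taken from the text; the cube-root law then fixes the rest):

```python
# Band thickness grows as the cube root of the throat radius in the
# model quoted above; the anchor value is the 1 m throat from the text.
BAND_AT_1M = 1e-21       # negative-energy band for a 1 m throat [m]
LIGHT_YEAR = 9.4607e15   # metres per light-year
PROTON_RADIUS = 8.4e-16  # approximate proton charge radius [m]

def band_thickness(throat_radius_m):
    return BAND_AT_1M * throat_radius_m ** (1.0 / 3.0)

t = band_thickness(LIGHT_YEAR)
print(t)                  # ~2e-16 m
print(t < PROTON_RADIUS)  # True: still thinner than a proton
```

Even a light-year-wide throat buys only a factor of about 200,000 in band thickness, which is the "does not improve much" of the text made quantitative.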

It seems that wormhole engineers face daunting problems. They must find a mechanism for confining large amounts of negative energy to extremely thin volumes. So-called cosmic strings, hypothesized in some cosmological theories, involve very large energy densities in long, narrow lines. But all known physically reasonable cosmic-string models have positive energy densities.

Warp drives are even more tightly constrained, as work done with us later showed. In Alcubierre’s model, a warp bubble traveling at 10 times lightspeed (warp factor 2, in the parlance of Star Trek: The Next Generation) must have a wall thickness of no more than 10⁻³² meter. A bubble large enough to enclose a starship 200 meters across would require a total amount of negative energy equal to 10 billion times the mass of the observable universe. Similar constraints apply to Krasnikov’s superluminal subway.

A modification of Alcubierre’s model was recently constructed by Chris Van Den Broeck of the Catholic University of Louvain in Belgium. It requires much less negative energy but places the starship in a curved space-time bottle whose neck is about 10⁻³² meter across, a difficult feat. These results would seem to make it rather unlikely that one could construct wormholes and warp drives using negative energy generated by quantum effects.

The quantum inequalities prevent violations of the second law. If one tries to use a pulse of negative energy to cool a hot object, it will be quickly followed by a larger pulse of positive energy, which reheats the object. A weak pulse of negative energy could remain separated from its positive counterpart for a longer time, but its effects would be indistinguishable from normal thermal fluctuations. Attempts to capture or split off negative energy from positive energy also appear to fail. One might intercept an energy beam, say, by using a box with a shutter. By closing the shutter, one might hope to trap a pulse of negative energy before the offsetting positive energy arrives. But the very act of closing the shutter creates an energy flux that cancels out the negative energy it was designed to trap.

A pulse of negative energy injected into a charged black hole might momentarily destroy the horizon, exposing the singularity within. But the pulse must be followed by a pulse of positive energy, which would convert the naked singularity back into a black hole – a scenario we have dubbed cosmic flashing. The best chance to observe cosmic flashing would be to maximize the time separation between the negative and positive energy, allowing the naked singularity to last as long as possible. But then the magnitude of the negative energy pulse would have to be very small, according to the quantum inequalities. The change in the mass of the black hole caused by the negative energy pulse will get washed out by the normal quantum fluctuations in the hole’s mass, which are a natural consequence of the uncertainty principle. The view of the naked singularity would thus be blurred, so a distant observer could not unambiguously verify that cosmic censorship had been violated.

Recently it was shown that the quantum inequalities lead to even stronger bounds on negative energy. The positive pulse that necessarily follows an initial negative pulse must do more than compensate for the negative pulse; it must overcompensate. The amount of overcompensation increases with the time interval between the pulses. Therefore, the negative and positive pulses can never be made to exactly cancel each other. The positive energy must always dominate–an effect known as quantum interest. If negative energy is thought of as an energy loan, the loan must be repaid with interest. The longer the loan period or the larger the loan amount, the greater is the interest. Furthermore, the larger the loan, the smaller is the maximum allowed loan period. Nature is a shrewd banker and always calls in its debts.

The concept of negative energy touches on many areas of physics: gravitation, quantum theory, thermodynamics. The interweaving of so many different parts of physics illustrates the tight logical structure of the laws of nature. On the one hand, negative energy seems to be required to reconcile black holes with thermodynamics. On the other, quantum physics prevents unrestricted production of negative energy, which would violate the second law of thermodynamics. Whether these restrictions are also features of some deeper underlying theory, such as quantum gravity, remains to be seen. Nature no doubt has more surprises in store.

Wormhole Engineering

By John Gribbin

There is still one problem with wormholes for any hyperspace engineers to take careful account of. The simplest calculations suggest that whatever may be going on in the universe outside, the attempted passage of a spaceship through the hole ought to make the star gate slam shut. The problem is that an accelerating object, according to the general theory of relativity, generates those ripples in the fabric of spacetime itself known as gravitational waves. Gravitational radiation itself, travelling ahead of the spaceship and into the black hole at the speed of light, could be amplified to infinite energy as it approaches the singularity inside the black hole, warping spacetime around itself and shutting the door on the advancing spaceship. Even if a natural traversable wormhole exists, it seems to be unstable to the slightest perturbation, including the disturbance caused by any attempt to pass through it.

But Thorne’s team found an answer to that for Sagan. After all, the wormholes in Contact are definitely not natural, they are engineered. One of his characters explains:

There is an interior tunnel in the exact Kerr solution of the Einstein Field Equations, but it’s unstable. The slightest perturbation would seal it off and convert the tunnel into a physical singularity through which nothing can pass. I have tried to imagine a superior civilization that would control the internal structure of a collapsing star to keep the interior tunnel stable. This is very difficult. The civilization would have to monitor and stabilize the tunnel forever.

But the point is that the trick, although it may be very difficult, is not impossible. It could operate by a process known as negative feedback, in which any disturbance in the spacetime structure of the wormhole creates another disturbance which cancels out the first disturbance. This is the opposite of the familiar positive feedback effect, which leads to a howl from loudspeakers if a microphone that is plugged in to those speakers through an amplifier is placed in front of them. In that case, the noise from the speakers goes into the microphone, gets amplified, comes out of the speakers louder than it was before, gets amplified . . . and so on. Imagine, instead, that the noise coming out of the speakers and into the microphone is analysed by a computer that then produces a sound wave with exactly the opposite characteristics from a second speaker. The two waves would cancel out, producing total silence.
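The cancellation principle is easy to demonstrate for ordinary waves. A toy sketch: a signal plus its exact phase-inverted copy sums to silence (doing the same to gravitational waves in a wormhole throat would, of course, be incomparably harder):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000)
# A "noise" made of two tones, standing in for the disturbance:
noise = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 17 * t)
antinoise = -noise            # same amplitudes, exactly opposite phase
residual = noise + antinoise
print(np.max(np.abs(residual)))   # 0.0: complete destructive cancellation
```

Real active noise control works on exactly this principle; the hard part, there as in Sagan’s wormhole, is measuring the disturbance and producing the inverse quickly and accurately enough.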

For simple sound waves, this trick can actually be carried out, here on Earth, in the 1990s. Cancelling out more complex noise, like the roar of a football crowd, is not yet possible, but might very well be in a few years time. So it may not be completely farfetched to imagine Sagan’s “superior civilization” building a gravitational wave receiver/transmitter system that sits in the throat of a wormhole and can record the disturbances caused by the passage of the spaceship through the wormhole, “playing back” a set of gravitational waves that will exactly cancel out the disturbance, before it can destroy the tunnel.

But where do the wormholes come from in the first place? The way Morris, Yurtsever and Thorne set about the problem posed by Sagan was the opposite of the way everyone before them had thought about black holes. Instead of considering some sort of known object in the Universe, like a dead massive star, or a quasar, and trying to work out what would happen to it, they started out by constructing the mathematical description of a geometry that described a traversable wormhole, and then used the equations of the general theory of relativity to work out what kinds of matter and energy would be associated with such a spacetime. What they found is almost (with hindsight) common sense. Gravity, an attractive force pulling matter together, tends to create singularities and to pinch off the throat of a wormhole. The equations said that in order for an artificial wormhole to be held open, its throat must be threaded by some form of matter, or some form of field, that exerts negative pressure, and has antigravity associated with it.

Now, you might think, remembering your school physics, that this completely rules out the possibility of constructing traversable wormholes. Negative pressure is not something we encounter in everyday life (imagine blowing negative pressure stuff in to a balloon and seeing the balloon deflate as a result). Surely exotic matter cannot exist in the real Universe? But you may be wrong.

Making  Antigravity

The key to antigravity was found by a Dutch physicist, Hendrik Casimir, as long ago as 1948. Casimir, who was born in The Hague in 1909, worked from 1942 onwards in the research laboratories of the electrical giant Philips, and it was while working there that he suggested what became known as the Casimir effect.

The simplest way to understand the Casimir effect is in terms of two parallel metal plates, placed very close together with nothing in between them (Figure 6). The quantum vacuum is not like the kind of “nothing” physicists imagined the vacuum to be before the quantum era. It seethes with activity, with particle-antiparticle pairs constantly being produced and annihilating one another. Among the particles popping in and out of existence in the quantum vacuum there will be many photons, the particles which carry the electromagnetic force, some of which are the particles of light. Indeed, it is particularly easy for the vacuum to produce virtual photons, partly because a photon is its own antiparticle, and partly because photons have no “rest mass” to worry about, so all the energy that has to be borrowed from quantum uncertainty is the energy of the wave associated with the particular photon. Photons with different energies are associated with electromagnetic waves of different wavelengths, with shorter wavelengths corresponding to greater energy; so another way to think of this electromagnetic aspect of the quantum vacuum is that empty space is filled with an ephemeral sea of electromagnetic waves, with all wavelengths represented.

This irreducible vacuum activity gives the vacuum an energy, but this energy is the same everywhere, and so it cannot be detected or used. Energy can only be used to do work, and thereby make its presence known, if there is a difference in energy from one place to another.

Between two electrically conducting plates, Casimir pointed out, electromagnetic waves would only be able to form certain stable patterns. Waves bouncing around between the two plates would behave like the waves on a plucked guitar string. Such a string can only vibrate in certain ways, to make certain notes — ones for which the vibrations of the string fit the length of the string in such a way that there are no vibrations at the fixed ends of the string. The allowed vibrations are the fundamental note for a particular length of string, and its harmonics, or overtones. In the same way, only certain wavelengths of radiation can fit into the gap between the two plates of a Casimir experiment. In particular, no photon corresponding to a wavelength greater than the separation between the plates can fit into the gap. This means that some of the activity of the vacuum is suppressed in the gap between the plates, while the usual activity goes on outside. The result is that in each cubic centimetre of space there are fewer virtual photons bouncing around between the plates than there are outside, and so the plates feel a force pushing them together. It may sound bizarre, but it is real. Several experiments have been carried out to measure the strength of the Casimir force between two plates, using both flat and curved plates made of various kinds of material. The force has been measured for a range of plate gaps from 1.4 nanometers to 15 nanometers (one nanometer is one billionth of a metre) and exactly matches Casimir’s prediction.
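
The attraction Casimir predicted can be put into numbers. For ideal, perfectly conducting parallel plates of area A separated by a gap a, the standard textbook result is F = -π²ħcA/(240a⁴), the minus sign marking attraction. The sketch below is my own illustration of that formula, not part of the original text:

```python
import math

HBAR = 1.054_571_817e-34  # reduced Planck constant, J*s
C = 2.997_924_58e8        # speed of light, m/s

def casimir_force(area_m2: float, gap_m: float) -> float:
    """Attractive Casimir force between ideal parallel plates, in newtons.

    F = -pi^2 * hbar * c * A / (240 * a^4); negative sign = attraction.
    """
    return -(math.pi ** 2) * HBAR * C * area_m2 / (240 * gap_m ** 4)

# A 1 cm^2 plate pair separated by 1 micrometre:
f = casimir_force(1e-4, 1e-6)
print(f"force: {f:.3e} N")  # tiny, and attractive
```

Because the force scales as 1/a⁴, halving the gap multiplies it sixteenfold, which is why the effect only becomes measurable at very small separations.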

In a paper they published in 1987, Morris and Thorne drew attention to such possibilities, and also pointed out that even a straightforward electric or magnetic field threading the wormhole “is right on the borderline of being exotic; if its tension were infinitesimally larger . . . it would satisfy our wormhole-building needs.” In the same paper, they concluded that “one should not blithely assume the impossibility of the exotic material that is required for the throat of a traversable wormhole.” The two CalTech researchers make the important point that most physicists suffer a failure of imagination when it comes to considering the equations that describe matter and energy under conditions far more extreme than those we encounter here on Earth. They highlight this by the example of a course for beginners in general relativity, taught at CalTech in the autumn of 1985, after the first phase of work stimulated by Sagan’s enquiry, but before any of this was common knowledge, even among relativists. The students involved were not taught anything specific about wormholes, but they were taught to explore the physical meaning of spacetime metrics. In their exam, they were set a question which led them, step by step, through the mathematical description of the metric corresponding to a wormhole. “It was startling,” said Morris and Thorne, “to see how hidebound were the students’ imaginations. Most could decipher detailed properties of the metric, but very few actually recognised that it represents a traversable wormhole connecting two different universes.”

For those with less hidebound imaginations, there are two remaining problems — to find a way to make a wormhole large enough for people (and spaceships) to travel through, and to keep the exotic matter out of contact with any such spacefarers. Any prospect of building such a device is far beyond our present capabilities. But, as Morris and Thorne stress, it is not impossible and “we correspondingly cannot now rule out traversable wormholes.” It seems to me that there’s an analogy here that sets the work of such dreamers as Thorne and Visser in a context that is both helpful and intriguing. Almost exactly 500 years ago, Leonardo da Vinci speculated about the possibility of flying machines. He designed both helicopters and aircraft with wings, and modern aeronautical engineers say that aircraft built to his designs probably could have flown if Leonardo had had modern engines with which to power them — even though there was no way in which any engineer of his time could have constructed a powered flying machine capable of carrying a human up into the air. Leonardo could not even dream about the possibilities of jet engines and routine passenger flights at supersonic speeds. Yet Concorde and the jumbo jets operate on the same basic physical principles as the flying machines he designed. In just half a millennium, all his wildest dreams have not only come true, but been surpassed. It might take even more than half a millennium for designs for a traversable wormhole to leave the drawing board; but the laws of physics say that it is possible — and as Sagan speculates, something like it may already have been done by a civilization more advanced than our own.

Hyperluminal Spaceship, Tachyons and Time Travel

Einstein's theory of relativity suggests that nothing can travel at hyperluminal speed. Tachyons, hypothetical particles with negative mass, are the exception: they always travel at superluminal speed, and in a naive reading time runs backward for them. What if a spaceship were travelling faster than light? Would time run backward for that spaceship? These are the questions addressed here by Earnst L Wall.

By Earnst L Wall

To depart somewhat from a pure state machine argument for a moment, we will consider a more general discussion of the argument that an object that moves faster than the speed of light would experience time reversal.  For example, the space ship Enterprise, in moving away from Earth at hyperluminal velocities, would overtake the light that was emitted by events that occurred while it was still on the earth.  It would then see the events unfold in reverse time order as it progressed on its path.  This phenomenon would be, in effect, a review of the record of a portion of the Earth's history in the same manner that one views a sequence of events on a VCR as the tape is run backwards.  But this does not mean that the hyperluminal spacecraft or the universe is actually going backwards in time any more than a viewer watching the VCR running in reverse is moving backwards in time.
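
The "VCR running backwards" picture is just classical catch-up kinematics, and it can be checked with a few lines of arithmetic. The sketch below uses naive Galilean bookkeeping (deliberately, since that is the spirit of the argument; the speeds and event times are made-up illustrations):

```python
C = 1.0  # work in units where the speed of light is 1

def catch_up_time(v: float, t_event: float) -> float:
    """Time at which a ship leaving the origin at t=0 with speed v > C
    overtakes the light front emitted at the origin at t_event (< 0).

    Solve v*T = C*(T - t_event)  =>  T = -C*t_event / (v - C).
    """
    assert v > C and t_event < 0
    return -C * t_event / (v - C)

v = 2.0                       # twice light speed
recent, older = -10.0, -20.0  # two past events back on Earth

print(catch_up_time(v, recent))  # 10.0 -> the *later* event is seen first
print(catch_up_time(v, older))   # 20.0 -> the *earlier* event is seen second
```

The ship meets the light from recent events before the light from older ones, so history replays in reverse order, exactly the tape-rewind effect described above.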

Further, it must be asked what would happen to the universe itself under these circumstances.  To illustrate this, suppose a colony were established on Neptune.  Knowing the distance to Neptune, it would be trivial, even with today’s technology, to synchronize the clocks on Earth and Neptune so that they kept the same absolute time to within microseconds or better.  Next, suppose that the Enterprise left Earth at a hyperluminal velocity for a trip to Neptune.  When the crew and passengers of the Enterprise arrive at Neptune, say 3 minutes later in Earth time, it is unlikely that the clocks on Neptune would be particularly awed or even impressed by the arrival of the travelers.  When the Enterprise arrives at Neptune, it would get there 3 minutes later in terms of the time as measured on both Neptune and Earth, regardless of how long its internal clocks indicated the trip took.  Neither the Enterprise nor its passengers would have moved backwards in time as measured on Earth or Neptune.
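
The implied speed in this thought experiment is easy to work out. The Earth-Neptune distance below, roughly 4.3 billion km, is an assumed round figure (it varies considerably with the planets' positions):

```python
C = 299_792_458.0        # speed of light, m/s
EARTH_NEPTUNE_M = 4.3e12  # rough Earth-Neptune distance, ~4.3 billion km (assumed)

trip_seconds = 3 * 60                # the 3-minute trip from the text
light_seconds = EARTH_NEPTUNE_M / C  # how long light itself needs: ~4 hours

speed_multiple = light_seconds / trip_seconds
print(f"light needs {light_seconds / 3600:.1f} h; "
      f"the Enterprise is doing roughly {speed_multiple:.0f}c")
```

So the 3-minute trip corresponds to somewhere around eighty times the speed of light, yet the synchronized clocks at both ends simply record a 3-minute interval.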

The hands of a clock inside the Enterprise, as simulated by a state machine, would not be compelled to reverse themselves just because it is moving at a hyperluminal velocity.  This is because the universal state machine is still increasing its time count, not reversing it.  Nor would any molecule that is not in, or near the trajectory of the space ship, be affected insofar as time is concerned, provided it does not actually collide with the space ship.

In the scheme above, reverse time travel will not occur merely because an object is traveling at hyperluminal velocities.  Depending on the details of the simulation, hyperluminal travel may cause the local time sequencing to slow down, but a simulated, aging movie queen who is traveling in a hyperluminal spacecraft will not regain her lost youth.  Simulated infants will not reenter their mother’s wombs.  Simulated dinosaurs will not be made to reappear.  A simulated hyperluminal spacecraft cannot go back in time, retrieve objects, and bring them back to the present.  Nor would any of the objects in the real universe go backward in time as a result of the passage of the hyperluminal spacecraft.

Neither the hyperluminal transmission of information or signals from point to point, nor objects traveling at hyperluminal velocities from point to point, would cause a change in the direction of the time count at the point of departure, at the point of arrival, or at any point in between.

Based on concepts derived from modern computer science, we have developed a new method of studying the flow of time.  It is different from the classical statistical mechanical method of viewing continuous time flow in that we have described a hypothetical simulation of the universe by means of a gigantic digital state machine implemented in a gigantic computer.  This machine has the capability of mirroring the general non-deterministic, microscopic behavior of the real universe.

Based on these concepts, we have developed a new definition of absolute time as a measure of the count of discrete states of the universe that occurred from the beginning of the universe to some later time that might be under consideration.   In the real universe, we would use a high energy gamma ray as a clock to time the states, these states being determined by regular measurements of an object’s parameters by analog-to-digital samples taken at the clock frequency.

And based on this definition of time, it is clear that, without the physical universe to regularly change state, time has no meaning whatsoever.  That is, matter in the physical universe is necessary for time to exist.  In empty space, or an eternal void, time would have utterly no meaning.

This definition of time and its use in the simulation has permitted us to explore the nature of time flow in a statistical, non-determinate universe.  This exploration included a consideration of the possibility of reverse time travel.  But by using the concept of a digital state machine as the basis of a thought experiment, we show clearly that to move backward in time, you would have to reverse the state count on the universal clock, which would have the effect of reversing the velocity of the objects.  But this velocity includes not only the velocity of the individual objects, but the composite velocities of all objects composing a macroscopic body.  As a result, this macroscopic body would also reverse its velocity, provided the state was specified with sufficient precision.
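
The rewind argument can be made concrete with a toy state machine (a deliberately simplified sketch, not a model of real physics): step a few particles forward under a deterministic rule, flip every velocity, and count the same number of ticks again.

```python
def step(state, dt=1):
    # one tick of the toy universal state machine: x += v * dt
    return [(x + v * dt, v) for (x, v) in state]

def reverse_velocities(state):
    return [(x, -v) for (x, v) in state]

initial = [(0, 3), (10, -2), (5, 7)]  # (position, velocity) per particle
s = initial
for _ in range(1000):                 # count the universal clock forward
    s = step(s)
s = reverse_velocities(s)
for _ in range(1000):                 # "counting backward": same rule, flipped v
    s = step(s)
recovered = reverse_velocities(s)
print(recovered == initial)           # True: a deterministic universe rewinds exactly
```

In a fully deterministic universe, reversing every velocity really does retrace the trajectory back to the starting state.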

But if you merely counted backward and obtained a reversal of motion, at best you could only move back to some probable past because of the indeterminate nature of the process.  You could not go back to some exact point in the past that is exactly the way it was.  In fact, after a short time, the process would become so random that there would be no real visit to the past.  A traveler would be unable to determine if he was going back in time, or forward in time.  Entropy would continue to increase.
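
Adding the indeterminacy the argument insists on changes the outcome completely. In the sketch below (the same toy universe as a thought experiment, but with a random kick applied at each tick), flipping all velocities and counting onward no longer recovers the initial state:

```python
import random

def step(state, rng, noise=1.0, dt=1.0):
    # indeterminate tick: each particle's motion gets a fresh random
    # velocity kick for this tick only (v itself is carried unchanged)
    return [(x + (v + rng.gauss(0, noise)) * dt, v) for (x, v) in state]

rng = random.Random(42)
initial = [(0.0, 3.0), (10.0, -2.0), (5.0, 7.0)]

s = initial
for _ in range(1000):
    s = step(s, rng)
s = [(x, -v) for (x, v) in s]  # attempt the rewind: flip all velocities
for _ in range(1000):
    s = step(s, rng)           # ...but every tick draws *new* randomness
recovered = [(x, -v) for (x, v) in s]

drift = max(abs(xr - x0) for (xr, _), (x0, _) in zip(recovered, initial))
print(f"max position error after 'rewinding': {drift:.1f}")  # far from zero
```

The rewind only reaches a "probable past": the accumulated random kicks never cancel, so the recovered state drifts away from the true initial one, just as the paragraph above claims.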

But doing even this in the real universe, of course, would present a problem because you would need naturally occurring, synchronized, discrete states (outside of quantized states, which are random and not universally synchronized).  You would need to be able to control a universal clock that counts these transitions, and further, cause it to go back to previous states simultaneously over the entire universe.   Modern physics has not found evidence of naturally occurring universal synchronized states, nor such an object as a naturally occurring clock that controls them.  And even if the clock were found, causing the clock to reverse the state transition sequence would be rather difficult.

Without these capabilities, it would seem impossible to envision time reversal by means of rewinding the universe.  This would not seem to be a possibility even in a microscopic portion of the universe, let alone time reversal over the entire universe.

But aside from those difficulties, if you wished to go back to an exact point in the past, the randomness of time travel by rewind requires an alternative to rewinding the universe.  This is true for the simulated universe, and for a hypothetical rewind of the real universe.  Therefore, the only way to visit an exact point in the past is to have a record of the entire past set of all states of the universe, from the point in the past that you wish to visit onward to the present.  This record must be stored somewhere, and a means of accessing this record, visiting it, becoming assimilated in it, and then allowing time to move forward from there must be available.  And, while all of this is happening in the past, the traveler’s departure point at the present state count, or time, must move forward in time while the traveler takes his journey.

Even jumping back in time because of a wormhole transit would require that a record of the past be stored somewhere.  And, of course, the wormhole would need the technology to access these records, to place the traveler into the record and then to allow him to be assimilated there.  This would seem to be a rather difficult problem.

This, then, is the problem with time travel to an exact point in the past in the real universe.  Where would the records be stored?  How would you access them in order just to read them?  And even more difficult, how would you be able to enter this record of the universe, become assimilated into this time period, and then have your body begin to move forward in time?  At a very minimum, our time traveler would have to have answers to these questions.

Still another conundrum is how the copy of the past universe would merge with the real universe at the traveler’s point of departure.  And then, if he had caused any changes that affected his departure point, they would have to be incorporated into that part of the universal record that is the future from his point of departure, and these changes would then have to be propagated forward to the real universe itself and incorporated into it.  This is assuming that the record is separate from the universe itself.

But if this hypothetical record of the universe were part of the universe itself,  or even the universe itself, then that would imply that all states of the entire universe, past, present, and future, exist in that record. This would further imply that we, as macroscopic objects in the universe, have no free will and are merely stepped along from state to state, and are condemned to carry out actions that we have no control over whatsoever.

In such a universe, if our traveler had access to the record, he might be able to travel in time.  But if he were able to alter the record and affect the subsequent flow of time, he would have to have free will, which would seem to contradict the condition described above.  We obviously would be presented with endless recursive sequences that defy rationality in all of the above.

This is all interesting philosophy, but it seems to be improbable physics.

Therefore, in a real universe, and based on our present knowledge of physics, it would seem that time travel is highly unlikely, if not downright impossible.

We do not deny the usefulness of time reversal as a mathematical artifact in the calculation of subatomic particle phenomena.  However,  it does not seem possible even for particles to actually go backwards in time and influence the past and cause consequential changes to the present.

Further, there is no reason to believe that exceeding the speed of light would cause time reversal in either an individual particle or in a macroscopic body.  Therefore, any objections to tachyon models that are based merely on causality considerations have little merit.

For the sake of completeness, it should be commented that the construction of a computer that would accomplish the above feats exactly would require that the computer itself be part of the state machine.  This could add some rather interesting problems in recursion that should be of interest to computer scientists.  And, it is obvious that the construction of such a machine would be a rather substantial boon to the semiconductor industry.

We already know from classical statistical mechanics that increasing entropy dictates that the arrow of time can only move in the forward direction.  We have not only reaffirmed this principle here, but have gone considerably beyond it.  These concepts would be extremely difficult, if not impossible, to develop with an analog, or continuous statistical mechanical model of the universe.

We have defined time on the basis of a state count based on the fastest changing object in the universe.  But it is interesting to note that modern day time is based on photons from atomic transitions, and is no longer based on the motion of the earth.  Conceptually, however, it is still an extension of earth based time.

But finally, history is filled with instances of individuals who have stated that various phenomena are impossible, only later to be proven wrong, and even ridiculous. Most of the technology that we take for granted today would have been thought to be impossible several hundred years ago, and some of it would have been thought impossible only decades ago.  Therefore, it is emphasized here that we do not say that time travel is absolutely impossible.  We will merely take a rather weak stance on the matter and simply say that, based on physics as we know it today, there are some substantial difficulties that must be overcome before time travel becomes a reality.

Wormhole Warfare

By Robin Hanson

Technology changes the face of war. How would wormholes change war?

First, I’d expect defensive redundant booby-trapping of wormholes connecting potential enemy regions. Wormholes are the major transportation and communication channels; folks would invade along them if they could, so if limited in number they would be choke points – fortified against the most advanced invaders one could imagine.

Second, I’d expect military powers to try and control the entry of wormholes into their territory. If war breaks out, and the enemy has lots of wormholes behind your lines, close to targets and to raw materials, they can see what you’re doing and hit you fast. Bad news.

So I’d expect mainly bit streams to go through official wormholes; physical passage through wormholes would be tightly controlled, if they could manage it. And even bit streams can be dangerous; once aliens had connected up from across the universe, it might be most unwise to run unknown complex software from distant lands, as in Vinge’s “A Fire Upon the Deep”.

Regions with too many unknown wormholes in them might be dead zones, the sort of place no one could plausibly defend because attack could literally come from anywhere in great force. Neighboring regions might want to explode a quasar there or something to try and limit the threat of invasion from that direction.

Third, regions which, for the same “empire” or “universal” time, are at an earlier cosmological co-moving time would have strong military advantages. Say war breaks out at some empire time, and existing wormholes are sealed against attack. In this case the “earlier” region can send a cloud of wormholes toward their enemies the old-fashioned way, on rockets, to arrive rather soon in empire time. If any of the wormhole cloud gets through, a beachhead is formed for attack. Similar holes sent the other way would likely be quickly destroyed by threatening to form causal loops, and even if they didn’t they would take a *very* long time in empire years to get there.

If warring regions have empire times at similar cosmological times, as in the meeting-aliens example, and wormhole access is denied, and technological/economic growth is at all in force, then defenders have a huge advantage because they can just wait and grow, as Mike Price commented in his paper.

So the major links between and within civilizations might be under tight military control, new additions to the network subject to military veto, and regions at the geographic center of an empire having a strong military advantage. “Empire” doesn’t sound so far-fetched in this case.

Review On Some Most Exotic Propulsion Technologies

Speaking of alien technologies, why overlook the miraculous exotic propulsion schemes that our theories support but that, so far, only the aliens of sci-fi movies and novels have made practical? WeirdSciences is known for its theoretical approach to space, higher dimensions, and extraterrestrial life. Here are some exotic propulsion methodologies, some of which you have probably never heard of. Be ready for the tour!

Emergency Warp Power Cell

In desperate situations, it may become necessary for a starship to jettison its warp core to prevent it from being destroyed by a massive antimatter or zero-point explosion. Though at the time this means immediate survival for the ship and crew, it also means that without a power generator for the warp drive, a starship will be stranded years to centuries away from anything that can be considered a safe harbour. A rare yet possible situation can occur if the warp core has been declared irretrievable, a rescue ship cannot reach the ship in distress (SID) in time for some reason, and the ship is trapped in the massive void between stars. This situation will spell disaster for the ship and crew. But a new piece of equipment will replace the lost warp core in this emergency situation and allow the ship to reach a safe haven: the Emergency Warp Power Cell (EWPC).

The EWPC in simplest terms is a large-scale matter/antimatter fuel cell, similar to those used on photon torpedo propulsion systems. The fuel cell is designed to be placed on the existing warp power conduits where the missing warp core used to be. It travels up the warp core shaft and is anchored by the same tether beams that held the warp core in place. A typical fuel cell works by injecting antiprotons directly into the plasma power conduits. Energy is conducted in the power conduit by the high-speed motion of the plasma particles. As in billiards, the energy is conducted when a moving plasma particle hits a second plasma particle and transfers its kinetic energy to the second particle, the second to the third, third to the fourth, and so on. The antiprotons reacting with the plasma initiate the high kinetic energy transfer. But the EWPC uses a specialized plasma tank for the antimatter reaction to take place without risking unnecessary damage to the warp power conduits. Plasma for the EWPC is provided by ionizing deuterium from the ship's storage tanks.

Though the fuel cell is somewhat simpler in design, an antimatter reactor that uses dilithium is far more fuel efficient, because dilithium acts as a focusing lens for the annihilation reaction. The fuel cell uses a series of high-intensity magnetic containment fields to propel the plasma in the desired direction. These containment fields consume considerable amounts of power and can cause the fuel cell to generate considerable heat. Therefore 20% of the fuel cell is a cryogenic cooling system that keeps it within safe thermal limits.

The antiprotons for the fuel cell don't come from conventional antimatter, but from a stable heavy isotope, specifically element 115. Element 115 is used because when it's bombarded with high-energy protons, the element transmutes into element 116, which is unstable and decays, releasing antimatter particles, which are easily collected. Element 115 is recycled by the use of an atomic resequencer, a component found in industrial replicators. This antimatter fuel source is typically used by the Zeta Reticulans, the alien species humans used to call the Greys.

Since the fuel cell generates only so much power, the main energizers are disconnected from the warp power matrix, so all the fuel cell’s energy is transferred to the warp nacelles. Critical systems, such as life support, are switched to auxiliary power. Despite its low power output, the fuel cell can generate enough power to slowly accelerate and maintain Warp 4 for a starship and allow it to travel a distance of 10 light-years, depending on the starship. Even though 10 light-years is not a very far distance to travel by early 25th century standards, 10 light-years could allow a starship to reach a star system with a habitable or adaptable planet. This will allow the crew to survive until a rescue ship arrives.[Ref]

Negative Mass Warp Drive

By the use of a subspace displacement field, starships are capable of travelling faster than the speed of light by catastrophically collapsing the space in front of the ship and expanding the space behind it. But some particles, such as tachyons, are capable of travelling faster than light and exist within normal space because these particles are composed of negative mass.

Whereas normal mass, which is found in planets and starships, generates a depression or trough in the fabric of the space-time continuum, prohibiting faster-than-light travel, negative mass generates a crest in the fabric of the space-time continuum, allowing faster-than-light travel. By making use of this field, Starfleet develops a new mode of propulsion known as the Negative Mass Warp Drive, or Negative Mass Drive. The warp field coils within the nacelles generate a subspace field whose properties “reflect” gravitons in a similar fashion to a mirror reflecting light. The reflected gravitons cause a phase shift in the natural gravitational field of the ship, allowing the ship to attain negative mass properties, resulting in faster-than-light travel.

The negative mass drive has an advantage over conventional warp drive. Though initially slower than warp drive, with the use of the ship’s impulse engines the negative mass drive can constantly accelerate the ship until its fuel supply is exhausted, whereas conventional warp has a maximum warp limit, because warp drive can only collapse and expand space at a certain rate. The negative mass drive allows the ship to “exist” beyond the light barrier while still being subject to basic physics, such as acceleration.

By reason of power conservation, the engines reflect only 51% of the gravitons generated by the ship's mass, because reflecting 50% of the gravitons will cancel out the effects of the remaining 50% of the un-reflected gravitons, neutralizing the ship's space warp. The added 1% allows the ship to travel faster than light. It would make no sense to use 100% power where it is not needed. Unlike quantum mechanics, whose nature is very turbulent, relativistic physics is very laminar, or smooth, which substantially reduces the risk of a ship being sheared apart by the jump to warp speeds with the negative mass drive. The engines are powered by standard matter/antimatter reactions. Since the engines themselves only require a certain amount of power, more specifically equivalent to warp 4, the remaining antimatter power is then channelled to the impulse engines to allow the ship to accelerate at a much greater rate. In many cases, the ships that normally have the new engines installed are those whose warp power conduits and impulse engines are almost literally metres apart, or even ships whose warp and impulse drives share a common power source, such as an intermix chamber. This results in reduced refit time and reduced modification and redesign of the ships. Such ships include the Intrepid class, Defiant class, Akira class, as well as the retired Enterprise class.
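
Taken at face value, the in-universe rule is simple arithmetic. This is entirely fictional physics, of course; the function below merely illustrates the 50%-cancellation argument as stated:

```python
def effective_mass_factor(reflected_fraction: float) -> float:
    """Toy reading of the in-universe rule: each reflected graviton cancels
    one un-reflected graviton, so the net factor is 1 - 2 * r."""
    return 1.0 - 2.0 * reflected_fraction

print(effective_mass_factor(0.50))            # 0.0 -> space warp fully neutralized
print(round(effective_mass_factor(0.51), 4))  # -0.02 -> slightly negative: the FTL regime
```

On this reading, 50% reflection leaves the ship exactly neutral, and the extra 1% tips it just past zero into negative-mass territory, which is all the drive needs.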

With this new drive, there has been some heated debate as to what happens the instant the ship jumps to warp. For that instant, the ship attains exactly 50% space inversion, which results in a 100% neutral space warp. Most scientists and engineers agree that the ship travels at light speed, warp factor 1. But there are some maverick scientists who believe that for an infinitely small amount of time, the ship attains warp factor 10, infinite velocity. Because of the advantage of constant acceleration, some speculate that several months of acceleration with the negative mass drive could allow the ship to reach speeds equivalent to transwarp speeds or slipstream velocities.

Singularity Propulsion

  • Theory

A quantum singularity is usually a naturally occurring phenomenon. Also called a black hole, a quantum singularity is usually the result of the collapse of a massive star, such as a red supergiant. The quantum singularity emits a large amount of gravitons, rendering escape from deep within the event horizon impossible at or below c. Space around the singularity is folded so that imaginary gravity lines (as presented in some diagrams) follow the equation f(r)=1/r (a two-quadrant cutaway diagram of the black hole would have f(r)=-|1/r|) up to a point, though I’m not going to get into very specific details.

There is no evidence of what is actually inside a black hole, only that most black holes have “donut singularities.” These singularities are ring-shaped and it is possible that they might connect to another black hole, forming a bridge between two places, two time frames, two universes, or a combination of the three. When the two join, one of them has to become a white hole (the exact opposite of a black hole). Using an analogy, instead of being “the universe’s vacuum cleaner,” it would be like a blow dryer. A white hole expels everything and nothing can get in, only get out. When a black hole and another black hole or a white hole connect, they form a wormhole, although it is only one way, toward the direction of the white hole. The “wall” of the bridge, however, is too narrow for an object bigger than 10 atomic masses (that is, including molecules and atoms). So, if there is some kind of an antigravity field (it is probably an antigraviton particle emitter), that would keep the bridge from disconnecting and would keep it wide enough for a vessel to get through. A wormhole, unlike a black hole-to-white hole bridge, serves as a two-way gateway to a different time, place, or universe. A wormhole can be destabilized with antiverterons and antitachyon pulses.

There are three ways to get to a destination (from point A to point B). The “usual” way, in which line AB is distance d. The way of the “worm,” in which line AB is distance d-x (where x is the length of the wormhole and d-x<d). And there is also the “0” way, where space is bent so that the two points, A and B merge to form a new point, C, in which line AB has a distance of zero. Singularity propulsion takes the third road.
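
The three routes can be compared with trivial arithmetic. This is just a toy restatement of the enumeration above; the numbers for d and x are arbitrary illustrations:

```python
def route_distances(d: float, wormhole_length: float) -> dict:
    """Effective distances for the three ways from point A to point B."""
    return {
        "usual": d,                   # the straight-line trip
        "worm": d - wormhole_length,  # shortcut of length x, with d - x < d
        "zero": 0.0,                  # space folded so A and B merge into C
    }

print(route_distances(d=100.0, wormhole_length=60.0))
# {'usual': 100.0, 'worm': 40.0, 'zero': 0.0}
```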

  • Application

Singularity propulsion is one of the first FTL (faster-than-light) propulsion methods that a typical civilization would attempt, since this theory is usually the first that a civilization comes up with (think of Stephen Hawking on Earth). The attempt would usually fail, as the civilization does not have the technology to detect very small particles as yet unthinkable to it. Such a civilization also thinks that a matter-antimatter meeting can tear up the fabric of space, but that is not the way to do it.

In order to commence the singularity propulsion, a ship needs to be in outer space, preferably outside a solar system, in case of an accident. One would also need a graviton, tachyon, verteron, and chroniton generator and their appropriate counterparts to shut the rift down. First, one needs to launch a graviton generator outside of the ship, about 1000km away. Once the generator starts up, one would need to monitor the gravitons carefully, as the emissions tend to increase over time. Once the generator emits a graviton field of 100S (1S = 1 solar mass = Sol), a short burst of antigravitons would make the emissions slow down significantly. A moment later one would need to emit tachyons and verterons simultaneously. Together, tachyons and verterons would open a stable rift. They will connect the point of destination to the point of origin, so that the distance between them is 0. If only verterons are used, the rift will evaporate, since the rift itself would emit verterons. If only tachyons are used, the rift will be stable, but only for a short while. In addition, the generator would burn out. Chronitons do not have to be used unless the travelers want to end up at some random time in our universe. Chronitons would tie the point of destination’s time to the point of origin’s. All particles, however, have to be of a certain frequency, and each sector and time in the universe has a different signature. One can even attempt time travel with this technology.

Some species do have this technology, such as the Q and the Bajoran wormhole prophets. As one gets closer to a singularity, one can obtain god-like powers, because one can then exist everywhere: in every universe, time, and place. Since the four dimensions are meaningless on (or in) the singularity, one may assume that these “godly” beings actually live in the wormhole. They might have lived there their whole lives (having been implanted there), or a ship might have been caught and somehow left survivors for whom linear time has no meaning. Some species, such as the Q, are experienced enough with linear time to understand it. Admiral Janeway (“Endgame”) attempted, and succeeded in, opening a “0” rift linking the year 2404 in the Alpha Quadrant to the year 2378 in the Delta Quadrant. Since the rift was created using only tachyons and chronitons, her device burned out.

Tachyon Drive

Tachyons have been talked about, used, and abused in many ways, from the conventional, such as communications, to the absurd, such as creating an anti-time anomaly. In 2504, this common faster-than-light particle was used for propulsion for the first time. The idea was first truly theorized in 2371, when Commander Benjamin Sisko and his son Jake recreated a Bajoran solar sailing ship which, legend says, was able to reach Cardassian space. This ship was caught in tachyon eddies in the Denorios belt, which pushed it to warp speeds thanks to the unique design of the sailing ship.

In theory, tachyons can travel faster than light and still exist in normal space because they have negative mass. Conventional mass, such as the matter that makes up starships, cannot travel at light speed, let alone faster. Light itself has no mass and therefore travels at exactly light speed. Negative mass, on the other side of the light barrier, can travel faster than light.

Some have speculated that by collecting and storing enough tachyons, a ship could neutralize its natural mass and travel faster than the speed of light. However, to neutralize enough mass for warp 1 (light-speed) travel, a 4.5-million-metric-tonne Galaxy-class starship would need to store −4.5 million metric tonnes of tachyon particles, and even more to travel beyond warp 1. This is all the more difficult because tachyons cannot simply be stored the way deuterium can.
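The bookkeeping above can be sketched in a few lines. This is purely illustrative arithmetic on the article’s fictional premise that stored tachyon mass counts as negative; the function names are my own.

```python
# Sketch of the mass-cancellation arithmetic described above.
# Fictional bookkeeping only: tachyon "mass" is treated as negative,
# and the 4.5-million-tonne figure is the one quoted in the text.

def net_mass(ship_mass_t: float, stored_tachyon_mass_t: float) -> float:
    """Effective mass (tonnes): ship mass plus stored (negative) tachyon mass."""
    return ship_mass_t + stored_tachyon_mass_t

def tachyons_needed_for_warp_1(ship_mass_t: float) -> float:
    """Tachyon mass (tonnes) needed to cancel the ship's mass entirely."""
    return -ship_mass_t

GALAXY_CLASS_MASS_T = 4.5e6  # metric tonnes, as quoted in the text

needed = tachyons_needed_for_warp_1(GALAXY_CLASS_MASS_T)
print(f"Tachyon store for warp 1: {needed:.1e} tonnes")
print(f"Net mass at warp 1: {net_mass(GALAXY_CLASS_MASS_T, needed)} tonnes")
```

Anything beyond warp 1 would require the net mass to go below zero, which is why the text says even more tachyons would be needed.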

The dream of a tachyon drive, however, has not died. The old M2P2 drive of the 21st century, like those used on the old DY starships, used a solar-powered electromagnetic field to gather ionized gases from the sun and let the solar wind push the newly formed plasma field, along with the ship, like an electromagnetic sail. A spatial distortion field would be used in similar fashion to collect enough free tachyon particles, concentrate them, and use their negative mass field to cancel out the ship’s mass and allow it to travel faster than the speed of light. The tachyon drive would be simpler than a conventional warp drive because it would not have to maintain a proper balance between the two warp nacelles, nor cause a deliberate imbalance in the warp field to steer the ship at warp; steering with a tachyon drive can be done by the impulse engines. In terms of size, it would be more conceivable for small or medium-sized ships, from shuttle pods up to the Phoenix-class starships, to use tachyon drives. But it is possible for the heaviest Federation ships, such as the Galaxy-class and Pelagic-class ships, to use this drive.

Travel through Hyperspace

Hyperspace, what is that anyway? Generally, hyperspace is a space of higher dimensions, meaning dimensions beyond the three known dimensions of space and the one of time. These “higher” dimensions lie outside the limits of our perception and, for the most part, of our understanding too. Nevertheless, the principle may be explained with a simple example. Let us assume that we are small, two-dimensional worms crawling on an infinitely large sheet of paper. We live in peace, and everything is as usual. But one day, a smart physicist conceives the idea that there might be a third dimension beyond the two well-known dimensions of our sheet of paper. If he were able to climb into a rocket and leave for the third dimension, he would disappear in the eyes of his fellow worms as soon as he left the paper surface. He could then, without any reasonable explanation, reappear at any other place on the surface. It is quite similar with our hyperspace. But more about that later.

For a long time, hyperspace travel was of little interest to the spacecraft engineers of Starfleet. That changed in 2381, just one year ago, when Federation archaeologists discovered the remains of a long-extinct civilization in the Ventana system at the edge of the Beta Quadrant. After deciphering the millennia-old databanks, scientists faced an incredible amount of data about hyperspace and ways of taking advantage of it. Apparently, they had discovered a civilization that had unveiled one of the last secrets of our universe: the physics of higher dimensions.


The following paragraph illustrates the course of a journey through hyperspace, as it could be done on Federation starships. To get into hyperspace in the first place, a “window” needs to be opened to the higher dimensions, virtually lifting the ship into hyperspace. The creation of such a gate would be very complex and energy-consuming. In fact, more energy is required than any available or known source can provide to date. But to circumvent this problem, engineers may apply a little trick: they “borrow” energy from the vacuum. This is possible thanks to a variant of Heisenberg’s uncertainty principle. The uncertainty principle doesn’t only apply to the location and velocity of a particle, but also to its energy and the time over which it has this energy. The formula is h ≈ E × t (the Planck constant is approximately the energy times the time). Thus, an amount of energy E must be “paid back” (transferred into the vacuum) after a time t. This means that the more energy is borrowed, the sooner it must be given back in order not to violate energy conservation. But this circumstance alone would not suffice to enable hyperspace travel, as the window would collapse as soon as the energy was given back, so the net result would be zero.

The second lucky circumstance is that the creation of a gate would require far more energy than its maintenance, for which a normal matter/antimatter reaction may be sufficient. Therefore, the sequence of events may be as follows: we borrow an amount of energy, Ev. With the energy from the reactor, Er, plus Ev, we obtain the total energy Et that is necessary to build the gate: Ev + Er = Et. Then we return the energy Ev within a time t = h/Ev, and what remains is the energy Er from the reactor, just sufficient to maintain the window. This way, we achieve a maximum effect with a minimum amount of “true” energy.
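The energy bookkeeping above can be sketched numerically. The payback relation t = h/Ev is the article’s reading of the uncertainty principle; the energy figures in the example run are made up purely for illustration.

```python
# Sketch of the vacuum-energy "loan" described above: a borrowed
# energy E_v must be repaid within t = h / E_v, and the reactor only
# supplies the remainder E_r = E_t - E_v. Illustrative numbers only.

H = 6.626e-34  # Planck constant, J*s

def payback_time(borrowed_energy_j: float) -> float:
    """Time (s) within which borrowed vacuum energy must be returned."""
    return H / borrowed_energy_j

def reactor_energy_needed(total_gate_energy_j: float, borrowed_j: float) -> float:
    """E_r = E_t - E_v: the 'true' energy the reactor must supply."""
    return total_gate_energy_j - borrowed_j

# Example: borrow 1e30 J toward a (fictional) gate costing 1e30 + 1e9 J.
E_v = 1e30
E_t = E_v + 1e9
print(f"Payback window: {payback_time(E_v):.3e} s")
print(f"Reactor share:  {reactor_energy_needed(E_t, E_v):.3e} J")
```

Note the trade-off the text describes: the larger the loan E_v, the shorter the payback window t, so the gate must be built almost instantaneously.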


So much for the theory; now for the practice. The starship or an outpost launches, with a conventional photon torpedo launcher, a beacon carrying an M/AM reactor and a so-called collector, capable of borrowing energy from the vacuum. This beacon builds up the window in the fashion explained above and keeps it open for about one minute. But the starship may not yet enter hyperspace. This is because the higher-dimensional space is almost completely evacuated as soon as the gate is opened; no one can yet explain why. In any case, this would force all starships already in hyperspace to leave it here and now, irrespective of their planned destination. To avoid this effect, a certain neutrino imprint is applied to the window during its creation, each ship having its own individual imprint. But why neutrinos, of all particles? The reason is that neutrinos interact in a special way with the gate and are able to leave something like a fingerprint. Now only starships with a matching print may pass the gate. Without this technology, only ships travelling from the same origin system to the same destination system could be in hyperspace at the same time. The imprint also plays an important role when leaving hyperspace. More about that later.

As soon as we are in hyperspace, the second step is taken. In every important system (e.g. Sol), gravitational wave emitters and receivers have been positioned, along with an array of the beacons described above. These emitters permanently generate gravitational waves of a certain frequency, as they are the only type of wave capable of propagating in hyperspace (light, for instance, is not). The starship in hyperspace receives these frequencies from the systems and replies with the specific frequency of the destination system. In addition, it transmits, encoded in gravitational waves, a data package that is received in all systems. When this is received by the receiver of the destination system, a window to hyperspace is created as described above, with exactly the neutrino imprint of the starship. The ship is hurled out of hyperspace and finds itself at the destination. One thing remains to be mentioned: the starship in hyperspace may open a gate itself, but the place of exit would be completely random. It could emerge anywhere in the universe.

Sublight Travel And Galactic Exploration Through Wormholes

Though it may seem impossible to colonise the galaxy at sub-light speed, even without FTL travel we can still colonise the universe at sub-light velocities [using self-replicating probes and bioprograms, which I’ve discussed recently], but the resulting colonies are separated from each other by the vastness of interstellar space. In the past, trading empires have coped with time delays on commerce routes of the order of a few years at most. This suggests that economic zones would find it difficult to encompass more than one star system. Travelling beyond this would require significant re-orientation upon return, catching up with cultural changes, and so on. It’s unlikely people would routinely travel much beyond this and return.

Nanotechnology only exacerbates the situation. We expect full nanotech, uploading, AIs, etc. to arrive before interstellar travel becomes practical. Assume we keep the same dimensions for our bodies and brains as at present. Once we are uploaded onto a decent nanotech platform, our mental speeds can be expected to exceed our present rates by the same factor by which electrical impulses exceed the speed of our neurochemical impulses – about a million. Subjective time would speed up by this factor. Taking a couple of subjective years as the limit beyond which people would be reluctant to travel routinely, this defines the size of a typical trade zone or culture as not exceeding a couple of light-minutes. Even single stellar systems would be unable to form a single culture/trade zone. The closest planet would then seem further away than the nearest star does today.

With full nanotech there will be little need to transfer matter. Trade in the distant future is likely to consist mostly of information: design plans for new products, assembled on receipt; patterns of uploaded consciousness of intrepid travellers; gossip and news. But with communication delays to Alpha Centauri of the order of millions of subjective years, two-way exchanges are difficult to imagine – even when we are enjoying unlimited life spans.
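A few lines of arithmetic check the scale claims of the last two paragraphs: the assumed ~10^6 subjective speed-up shrinks a two-subjective-year travel budget to roughly a minute of light travel, and stretches the ~4.37-light-year one-way delay to Alpha Centauri into millions of subjective years.

```python
# Scale check for the trade-zone and communication-delay claims.
# Assumptions (from the text): a ~1e6 subjective speed-up and
# ~2 subjective years as the limit of routine travel.

SPEEDUP = 1e6                 # neural-to-electronic speed-up factor
YEAR_S = 365.25 * 24 * 3600   # seconds per (Julian) year

def objective_seconds(subjective_years: float, speedup: float = SPEEDUP) -> float:
    """Wall-clock seconds corresponding to a subjective time span."""
    return subjective_years * YEAR_S / speedup

def subjective_years_for_delay(light_years: float, speedup: float = SPEEDUP) -> float:
    """Subjective years that pass while light crosses the given distance."""
    return light_years * speedup

range_s = objective_seconds(2.0)
print(f"2 subjective years = {range_s:.1f} s of light travel "
      f"(~{range_s / 60:.1f} light-minutes)")
print(f"One-way delay to Alpha Centauri: "
      f"{subjective_years_for_delay(4.37):.2e} subjective years")
```

The result, about a light-minute of comfortable range, is the order of magnitude behind the “couple of light-minutes” trade-zone figure above.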

[Image credit: TheMarginal]

Communication and exploration would be, essentially, a one-way process. If you had a yen to travel to Alpha Centauri, you could: squirt your encoded engrams down an interstellar modem and be decoded on arrival, assuming the receiving station hasn’t shut down during the intervening millions of years of subjective cultural change. You could leave a copy behind for redundancy, or if you wanted to explore both regions, but I suspect many of us would not find this completely satisfactory. The speed-of-light barrier would limit us and cramp our style much more than it does at present.

A wormhole could be constructed by confining exotic matter to narrow regions that form the edges of a cube of three-dimensional space. The faces of the cube would resemble mirrors, except that the image is the view from the other end of the wormhole. Although there is only one cube of material, it appears at two locations to the external observer. The cube links the two ‘ends’ of a wormhole together. A traveller, avoiding the edges and crossing through a face of one of the cubes, experiences no stresses and emerges from the corresponding face of the other cube. The cube has no interior but merely facilitates passage from ‘one’ cube to the ‘other’.

The exotic nature of the edge material requires negative energy density and tension/pressure, but the laws of physics do not forbid such materials. The energy density of the vacuum may be negative, as it is in the Casimir field between two closely spaced conductors. Negative-pressure fields, according to standard astrophysics, drove the expansion of the universe during its ‘inflationary’ phase. Cosmic string (another astrophysical speculation) has negative tension. The mass of negative energy the wormhole needs is just the amount that would form a black hole if it were positive, normal energy. A traversable wormhole can thus be thought of as the negative-energy counterpart to a black hole, justifying the appellation ‘white’ hole. The amount of negative energy required for a traversable wormhole scales with the linear dimensions of the wormhole mouth. A one-metre cube entrance requires a negative mass of roughly 10^27 kg.
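The 10^27 kg figure can be checked: it is roughly the mass whose Schwarzschild radius is one metre, matching the statement that the required negative mass equals the mass that would form a black hole of that size. A quick sketch:

```python
# Order-of-magnitude check: the mass whose Schwarzschild radius
# r_s = 2GM/c^2 equals ~1 m, i.e. M = r * c^2 / (2G).

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8     # speed of light, m/s

def schwarzschild_mass(radius_m: float) -> float:
    """Mass (kg) whose Schwarzschild radius equals the given radius."""
    return radius_m * C**2 / (2 * G)

m = schwarzschild_mass(1.0)
print(f"Mass scale for a ~1 m mouth: {m:.2e} kg")  # ~6.7e26 kg, i.e. roughly 10^27
```

Note the scaling is linear in the mouth size, as the text says: a mouth ten times larger needs ten times the (negative) mass, not a thousand times.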

Wormholes can be regarded as communication channels with enormous bandwidth. The wormhole will collapse when the amount of mass passing through it approaches the same order as the amount of negative mass confined to its edges. According to some scientists, information has a minimum energy of kT ln 2 associated with it (Landauer’s limit). For a 1-metre cube this implies a potential bandwidth of over 10^60 bits/sec. Even very small nano-scale wormholes have bandwidths of the order of 10^50 bits/sec. This suggests it will usually be more economic to squirt the design of an object down a channel rather than the object itself.
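As a rough plausibility check (my own framing, not a derivation given in the text): take the ~10^27 kg edge-mass scale as the total mass-energy the channel can pass before collapse, and divide by the Landauer minimum energy per bit at room temperature.

```python
import math

# Landauer-limit information budget for a ~1 m wormhole mouth.
# Framing assumption (mine): total transferable mass-energy before
# collapse is of order the 1e27 kg edge mass quoted in the text.

K_B = 1.380649e-23  # Boltzmann constant, J/K
C = 2.998e8         # speed of light, m/s

def landauer_energy(temp_k: float = 300.0) -> float:
    """Minimum energy per bit, k*T*ln(2) (Landauer's limit)."""
    return K_B * temp_k * math.log(2)

def info_budget_bits(mass_kg: float, temp_k: float = 300.0) -> float:
    """Bits encodable in the given mass-energy at the Landauer limit."""
    return mass_kg * C**2 / landauer_energy(temp_k)

print(f"Landauer limit at 300 K: {landauer_energy():.2e} J/bit")
print(f"Budget for ~1e27 kg of mass-energy: {info_budget_bits(1e27):.1e} bits")
```

The budget comes out around 10^64 bits, comfortably consistent with the article’s >10^60 bits/sec order-of-magnitude claim if that budget is spent over any modest time span.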

Construction of such cubes is, of course, far, far beyond our present-day abilities. With AIs and nanotech combined, we expect the limits on intelligences to be governed by physics, not biology. Our brains’ processing capacity lies somewhere between 10^15 and 10^18 bits/sec. A comparably sized nanoelectronic brain would have a capacity of 10^32 – 10^36 bits/sec. Assuming a factor of a million is lost to the speed-up still leaves 8 – 12 orders of magnitude of expansion in the complexity, or depth of thought, of our brains as we switch from biology to nanotechnology. So we should not assume that construction and manipulation of the required materials will long remain beyond the grasp of future civilisations populated by such super-intelligences. The remainder of the article will assume that mass production of wormholes is economically achievable. Wormholes enable travel from one mouth to the other. To travel to distant parts of the universe, one wormhole end stays at home and the other is carted away, at sublight velocities, to the destination.
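The 8–12 orders-of-magnitude claim follows from the quoted capacity ranges; a quick sanity check, taking the upper biological figure of 10^18 bits/sec as the baseline (the pairing that reproduces the article’s range):

```python
import math

def orders_gained(nano_bits: float, bio_bits: float, speedup: float = 1e6) -> float:
    """Orders of magnitude of extra 'depth of thought' left after
    paying the subjective speed-up factor out of the raw capacity ratio."""
    return math.log10(nano_bits / bio_bits / speedup)

# Using the ranges quoted above (bits/sec), against a 1e18 baseline:
lo = orders_gained(1e32, 1e18)
hi = orders_gained(1e36, 1e18)
print(f"Depth-of-thought gain: {lo:.0f} to {hi:.0f} orders of magnitude")
```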

Problems begin when the distant wormhole end turns about and returns home. According to the twin paradox, the traveller returns having aged less than the stay-at-home twin (their clocks are no longer in step). Travelling through the wormhole from the stay-at-home end to the go-away-and-come-back end transports you forward in time; travelling in the reverse direction transports you back in time. Wormholes allow time travel. This conclusion was reached soon after the first articles on traversable wormholes were published. Depending on your view of the plausibility of time travel, this is either very exciting (if you believe time travel possible) or proof that traversable wormholes can’t exist (if you scoff at time travel). No general consensus emerged in the pages of various physics journals as the subject was batted back and forth. Elaborate and very interesting papers reconciled time travel with quantum theory, whilst others (like Hawking) proposed a Chronology Protection Conjecture [CPC], which says the Universe Shalt Not Allow Time Travel.
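The clock desynchronisation driving all of this is ordinary special relativity; a minimal sketch of the twin-paradox offset between the two mouths (the 0.8c cruise speed and ten-year leg are my own illustrative numbers, not from the text):

```python
import math

def traveler_proper_time(home_time_yr: float, v_over_c: float) -> float:
    """Proper time (years) elapsed for the travelling wormhole mouth,
    given the elapsed home time and the cruise speed as a fraction of c."""
    return home_time_yr * math.sqrt(1 - v_over_c**2)

# A mouth carted out and back at 0.8c over 10 home-years:
home = 10.0
away = traveler_proper_time(home, 0.8)
print(f"Travelling mouth aged {away:.1f} yr; clock offset {home - away:.1f} yr")
```

Stepping through the wormhole then shifts you across that accumulated offset, which is exactly the forward/backward jump the paragraph describes.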

A space probe with a wormhole could be powered from base: the fuel is uploaded through the wormhole from base to the in-flight ship. There would be a very steep potential hill, energetically, for the fuel to climb to reach the ship. For a ship moving at relativistic speeds, most of the energy of the fuel would be lost in the climb. This suggests that the ship would be stripped to the bare minimum, just as modern rockets are. [ref: Time Travel Research Center]

The probe remains in contact with the home base throughout the trip. As a drop point approaches, another wormhole plus a deceleration rig would be loaded through, and this daughter craft would detach itself from the mother craft. Deceleration would likely be quicker and less expensive than acceleration, because the daughter craft could brake itself against interstellar/galactic gas, dust, and magnetic fields. For energy-cost reasons, the transfer of colonists would likely not begin until deceleration is complete.

The colonists transfer through this hole whilst the main probe continues its outward voyage. One of the first activities of the colonists would be to secure the connections with home by increasing wormhole capacity and numbers. Transport of manufacturing plants, more wormholes, and so on would continue until local nanotech factories become more competitive than the transport of finished products via wormholes. After this point, the wormholes would increasingly be used for communications rather than materials transport.

An analogy with the cloud chamber springs to mind here. Charged particles are tracked through cloud chambers: each particle is invisible, but its presence is deduced from the trail of growing droplets left behind. Similarly, the space probe is all but invisible, lost in the immensity of the dark of space; the burgeoning colonies left behind mark its passage. The colonies send out further wormhole probes. From a distance, the whole affair would resemble a growing 3-D snowflake.

Road, sea, and air routes let commerce draw on the whole Earth’s resources, and the telecommunications highways keep us in contact with each other. Wormhole connections laid down by space probes enable a space-faring civilisation to remain a single economic entity, with all the social and material benefits that follow. Wormhole connections enable the colonised region to stay interconnected as civilisation expands through the universe.

Wormholes do have one major trick up their sleeves. We have seen that wormholes need not permit travel into the past, but they do exhibit some very strange effects. Consider a colonist stepping through the home wormhole to transfer to the landing ship. Ship time and home time are running in synchronisation. If I wait 15 years at home after launch before stepping through, then I appear at the travelling end at the point when the probe passes Andromeda. In crossing 2,250,000 light years of conventional space, I travel 2,250,015 years into the future. So wormholes could help us colonise the galaxy and the universe, and it may even be possible to colonise parallel universes [assuming they exist].
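The arrival arithmetic in that example can be sketched directly. It assumes, as the text does, that the probe cruises at essentially the speed of light, so the home-time offset on arrival is simply the light-travel distance of the crossing plus the years waited before stepping through:

```python
def arrival_offset_years(distance_ly: float, wait_years: float) -> float:
    """Home-time years into the future on arrival: the light-travel
    time of the crossing plus the years waited before stepping through.
    Assumes a probe cruising at essentially c, as in the text's example."""
    return distance_ly + wait_years

# Andromeda at ~2,250,000 ly, after a 15-year wait at home:
print(f"{arrival_offset_years(2_250_000, 15):,.0f} years into the future")
```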
