Living in a Simulation

By Rob Bryanton

The above new video accompanies my blog entry from last August, on the subject of Simulism: the idea that we could be living within a gigantic virtual world, whether we are aware of it or not. Simulism.org is a wiki created by the Netherlands’ Ivo Jansch, and it brings together a number of interesting bits of information about (to quote from the wiki) “the possibility that our existence rests on an unimaginably complex n-dimensional k-state computer grid with rules governing the transition from one state to another”.

In my previous post on Simulism and the above video blog, I talk about several other works that have explored Simulism-related concepts, including The Matrix, Inception, Star Trek: TNG's holodeck, and a television show I had not come across before called Play. The Movies page at the Simulism wiki lists a number of other films, most of which are obviously science fiction. But I couldn't resist adding a few of my favorite films to the list because (in my opinion) they present related concepts:

A Christmas Carol – could this 1843 Charles Dickens novella be the grand-daddy of Simulism? The ghosts conjure virtual worlds of time travel and alternate timelines that to Ebenezer Scrooge are completely real.

It’s a Wonderful Life – when George Bailey is shown an alternate version of the world as it would have been without him, can’t this be thought of as a simulation?

Groundhog Day – being trapped at a certain instant of time (6 am on Groundhog Day), and then being given the freedom to explore all the possible timelines that extend from that instant: is Phil Connors trapped in a simulation? The movie offers no explanation, so we are left to imagine what could have been the cause of his predicament.

Brazil – since so much of this film is surreal, placed “somewhere in the twentieth century” according to the opening subtitle, it’s possible that the entire film is a virtual world, a simulation. Could Sam Lowry have woken up from the dream, Neo-style, at any moment in this movie? Since the world depicted in this film is unlike any version of the twentieth century you or I experienced, there are other “alternate history” discussions that could just as easily be related to this film, one of my all-time favorites.

The Wikipedia article on alternate history presents examples of stories exploring "what would the world have been like if this rather than that had happened" from as far back as two thousand years ago: ideas of parallel-universe versions of our own observed universe are not as new as you might suspect! And if Information Equals Reality, then all of these examples of simulism, alternate histories, and parallel universes may not just be flights of imagination, but examples of the possibilities inherent in the underlying structures of our reality.

Enjoy the journey!

Rob Bryanton.

Superluminal Speed in Gases..!!

Scientists have apparently broken the universe’s speed limit. For generations, physicists believed there is nothing faster than light moving through a vacuum – a speed of 186,000 miles per second. But in an experiment in Princeton, N.J., physicists sent a pulse of laser light through cesium vapor so quickly that it left the chamber before it had even finished entering. The pulse traveled 310 times the distance it would have covered if the chamber had contained a vacuum.

This seems to contradict not only common sense, but also a bedrock principle of Albert Einstein’s theory of relativity, which sets the speed of light in a vacuum, about 186,000 miles per second, as the fastest that anything can go. But the findings–the long-awaited first clear evidence of faster-than-light motion–are “not at odds with Einstein,” said Lijun Wang, who with colleagues at the NEC Research Institute in Princeton, N.J., report their results in today’s issue of the journal Nature.

“However,” Wang said, “our experiment does show that the generally held misconception that ‘nothing can move faster than the speed of light’ is wrong.” Nothing with mass can exceed the light-speed limit. But physicists now believe that a pulse of light–which is a group of massless individual waves–can.

To demonstrate that, the researchers created a carefully doctored vapor of laser-irradiated atoms that twist, squeeze and ultimately boost the speed of light waves in such abnormal ways that a pulse shoots through the vapor in about 1/300th the time it would take the pulse to go the same distance in a vacuum.

As a general rule, light travels more slowly in any medium more dense than a vacuum (which, by definition, has no density at all). For example, in water, light travels at about three-fourths its vacuum speed; in glass, it’s around two-thirds. The ratio between the speed of light in a vacuum and its speed in a material is called the refractive index. The index can be changed slightly by altering the chemical or physical structure of the medium. Ordinary glass has a refractive index around 1.5. But by adding a bit of lead, it rises to 1.6. The slower speed, and greater bending, of light waves accounts for the more sprightly sparkle of lead crystal glass.
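To make the index arithmetic concrete, here is a minimal Python sketch (the index values are just the approximate ones quoted above):

    # Speed of light in a medium: v = c / n, where n is the refractive index.
    C_VACUUM = 299_792_458  # m/s

    media = {
        "water": 1.33,           # light travels at about three-fourths of c
        "ordinary glass": 1.5,   # about two-thirds of c
        "lead crystal": 1.6,     # slower still, hence the stronger sparkle
    }

    for name, n in media.items():
        v = C_VACUUM / n
        print(f"{name:15s} n = {n:.2f}  v = {v:.3g} m/s  ({1/n:.0%} of c)")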

The NEC researchers achieved the opposite effect, creating a gaseous medium that, when manipulated with lasers, exhibits a sudden and precipitous drop in refractive index, Wang said, speeding up the passage of a pulse of light. The team used a 2.5-inch-long chamber filled with a vapor of cesium, a metallic element with a goldish color. They then trained several laser beams on the atoms, putting them in a stable but highly unnatural state.

In that condition, a pulse of light or “wave packet” (a cluster made up of many separate interconnected waves of different frequencies) is drastically reconfigured as it passes through the vapor. Some of the component waves are stretched out, others compressed. Yet at the end of the chamber, they recombine and reinforce one another to form exactly the same shape as the original pulse, Wang said. “It’s called re-phasing.”

The key finding is that the reconstituted pulse re-forms before the original intact pulse could have gotten there by simply traveling through empty space. That is, the peak of the pulse is, in effect, extended forward in time. As a result, detectors attached to the beginning and end of the vapor chamber show that the peak of the exiting pulse leaves the chamber about 62 billionths of a second before the peak of the initial pulse finishes going in.

That is not the way things usually work. Ordinarily, when sunlight – which, like the pulse in the experiment, is a combination of many different frequencies – passes through a glass prism, the prism disperses the white light's components.


This happens because each frequency moves at a different speed in glass, smearing out the original light beam. Blue is slowed the most, and thus deflected the farthest; red travels fastest and is bent the least. That phenomenon produces the familiar rainbow spectrum.

But the NEC team’s laser-zapped cesium vapor produces the opposite outcome. It bends red more than blue in a process called “anomalous dispersion,” causing an unusual reshuffling of the relationships among the various component light waves. That’s what causes the accelerated re-formation of the pulse, and hence the speed-up. In theory, the work might eventually lead to dramatic improvements in optical transmission rates. “There’s a lot of excitement in the field now,” said Steinberg. “People didn’t get into this area for the applications, but we all certainly hope that some applications can come out of it. It’s a gamble, and we just wait and see.”
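The quoted figures can be cross-checked with a little arithmetic: the 62-billionths-of-a-second advance and the factor of 310 are two views of the same number, namely the advance divided by the time light needs to cross the 2.5-inch cell in vacuum. A minimal Python sketch:

    # Cross-check of the NEC experiment's quoted figures.
    c = 299_792_458           # m/s
    cell_length = 0.0635      # m (2.5 inches)
    pulse_advance = 62e-9     # s; exit peak leads entry peak by ~62 ns

    t_vacuum = cell_length / c          # vacuum transit time, ~0.2 ns
    ratio = pulse_advance / t_vacuum    # "vacuum cell lengths" gained by the peak

    print(f"vacuum transit time: {t_vacuum * 1e9:.2f} ns")
    print(f"advance / transit:   {ratio:.0f}  (roughly the quoted factor of 310)")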

[Source: Time Travel Research Centre]

Can the Vacuum Be Engineered for Space Flight Applications? Overview of Theory and Experiments


By H. E. Puthoff

Quantum theory predicts, and experiments verify, that empty space (the vacuum) contains an enormous residual background energy known as zero-point energy (ZPE). Originally thought to be of significance only for such esoteric concerns as small perturbations to atomic emission processes, it is now known to play a role in large-scale phenomena of interest to technologists as well, such as the inhibition of spontaneous emission, the generation of short-range attractive forces (e.g., the Casimir force), and the possibility of accounting for sonoluminescence phenomena. ZPE topics of interest for spaceflight applications range from fundamental issues (where does inertia come from, can it be controlled?), through laboratory attempts to extract useful energy from vacuum fluctuations (can the ZPE be "mined" for practical use?), to scientifically grounded extrapolations concerning "engineering the vacuum" (is "warp-drive" space propulsion a scientific possibility?). Recent advances in research into the physics of the underlying ZPE indicate the possibility of potential application in all these areas of interest.

The concept "engineering the vacuum" was first introduced by Nobel Laureate T. D. Lee in his book Particle Physics and Introduction to Field Theory. As stated in Lee's book: "The experimental method to alter the properties of the vacuum may be called vacuum engineering…. If indeed we are able to alter the vacuum, then we may encounter some new phenomena, totally unexpected." Recent experiments have indeed shown this to be the case.

With regard to space propulsion, the question of engineering the vacuum can be put succinctly: "Can empty space itself provide the solution?" Surprisingly enough, there are hints that potential help may in fact emerge quite literally out of the vacuum of so-called "empty space." Quantum theory tells us that empty space is not truly empty, but rather is the seat of myriad energetic quantum processes that could have profound implications for future space travel. To understand these implications it will serve us to review briefly the historical development of the scientific view of what constitutes empty space.

At the time of the Greek philosophers, Democritus argued that empty space was truly a void, otherwise there would not be room for the motion of atoms. Aristotle, on the other hand, argued equally forcefully that what appeared to be empty space was in fact a plenum (a background filled with substance), for did not heat and light travel from place to place as if carried by some kind of medium? The argument went back and forth through the centuries until finally codified by Maxwell's theory of the luminiferous ether, a plenum that carried electromagnetic waves, including light, much as water carries waves across its surface. Attempts to measure the properties of this ether, or to measure the Earth's velocity through the ether (as in the Michelson-Morley experiment), however, met with failure. With the rise of special relativity, which did not require reference to such an underlying substrate, Einstein in 1905 effectively banished the ether in favor of the concept that empty space constitutes a true void. Ten years later, however, Einstein's own development of the general theory of relativity, with its concept of curved space and distorted geometry, forced him to reverse his stand and opt for a richly-endowed plenum, under the new label spacetime metric.

It was the advent of modern quantum theory, however, that established the quantum vacuum, so-called empty space, as a very active place, with particles arising and disappearing, a virtual plasma, and fields continuously fluctuating about their zero baseline values. The energy associated with such processes is called zero-point energy (ZPE), reflecting the fact that such activity remains even at absolute zero.

The Vacuum As A Potential Energy Source

At its most fundamental level, we now recognize that the quantum vacuum is an enormous reservoir of untapped energy, with energy densities conservatively estimated by Feynman and Hibbs to be on the order of nuclear energy densities or greater. Therefore, the question is, can the ZPE be “mined” for practical use? If so, it would constitute a virtually ubiquitous energy supply, a veritable “Holy Grail” energy source for space propulsion.

As utopian as such a possibility may seem, physicist Robert Forward at Hughes Research Laboratories demonstrated proof-of-principle in a paper, "Extracting Electrical Energy from the Vacuum by Cohesion of Charged Foliated Conductors." Forward's approach exploited a phenomenon called the Casimir Effect, an attractive quantum force between closely-spaced metal plates, named for its discoverer, H. B. G. Casimir of Philips Laboratories in the Netherlands. The Casimir force, recently measured with high accuracy by S. K. Lamoreaux at the University of Washington, derives from partial shielding of the interior region of the plates from the background zero-point fluctuations of the vacuum electromagnetic field. As shown by Los Alamos theorists Milonni et al., this shielding results in the plates being pushed together by the unbalanced ZPE radiation pressures. The result is a corollary conversion of vacuum energy to some other form such as heat. Proof that such a process violates neither energy nor thermodynamic constraints can be found in a paper by a colleague and myself (Cole & Puthoff) under the title "Extracting Energy and Heat from the Vacuum."
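For a sense of the magnitudes involved, the ideal-plate Casimir pressure has the standard textbook form P = π²ħc/(240d⁴). A minimal Python sketch (not tied to Forward's or Lamoreaux's specific geometries) shows how steeply it grows as the gap closes:

    import math

    HBAR = 1.054571817e-34  # reduced Planck constant, J*s
    C = 299_792_458         # speed of light, m/s

    def casimir_pressure(d):
        """Attractive pressure (Pa) between ideal parallel plates a distance d (m) apart."""
        return math.pi**2 * HBAR * C / (240 * d**4)

    # Negligible at everyday scales, but near one atmosphere at a 10 nm gap.
    for d in (1e-6, 100e-9, 10e-9):
        print(f"d = {d * 1e9:6.1f} nm -> P = {casimir_pressure(d):.3g} Pa")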

Attempts to harness the Casimir and related effects for vacuum energy conversion are ongoing in our laboratory and elsewhere. The fact that its potential application to space propulsion has not gone unnoticed by the Air Force can be seen in its request for proposals for the FY-1986 Defense SBIR Program. Under entry AF86-77, Air Force Rocket Propulsion Laboratory (AFRPL), Topic: Non-Conventional Propulsion Concepts, we find the statement: "Bold, new non-conventional propulsion concepts are solicited…. The specific areas in which AFRPL is interested include…. (6) Esoteric energy sources for propulsion including the zero point quantum dynamic energy of vacuum space."

Several experimental formats for tapping the ZPE for practical use are under investigation in our laboratory. An early one of interest is based on the idea of a Casimir pinch effect in non-neutral plasmas, basically a plasma equivalent of Forward's electromechanical charged-plate collapse. The underlying physics is described in a paper submitted for publication by myself and a colleague, and it is illustrative that the first of several patents issued to a consultant to our laboratory, K. R. Shoulders (1991), contains the descriptive phrase "…energy is provided… and the ultimate source of this energy appears to be the zero-point radiation of the vacuum continuum." Another intriguing possibility is provided by the phenomenon of sonoluminescence: bubble collapse in an ultrasonically-driven fluid accompanied by intense, sub-nanosecond light radiation. Although the jury is still out as to the mechanism of light generation, Nobelist Julian Schwinger (1993) has argued for a Casimir interpretation. Possibly related experimental evidence for excess heat generation in ultrasonically-driven cavitation in heavy water is claimed in an EPRI Report by E-Quest Sciences, although attributed to a nuclear micro-fusion process. Work is under way in our laboratory to see if this claim can be replicated.

Yet another proposal for ZPE extraction is described in a recent patent (Mead & Nachamkin, 1996). The approach proposes the use of resonant dielectric spheres, slightly detuned from each other, to provide a beat-frequency downshift of the more energetic high-frequency components of the ZPE to a more easily captured form. We are discussing the possibility of a collaborative effort between us to determine whether such an approach is feasible. Finally, an approach utilizing micro-cavity techniques to perturb the ground state stability of atomic hydrogen is under consideration in our lab. It is based on a paper of mine (Puthoff, 1987) in which I put forth the hypothesis that the nonradiative nature of the ground state is due to a dynamic equilibrium in which radiation emitted due to accelerated electron ground state motion is compensated by absorption from the ZPE. If this hypothesis is true, there exists the potential for energy generation by the application of the techniques of so-called cavity quantum electrodynamics (QED). In cavity QED, excited atoms are passed through Casimir-like cavities whose structure suppresses electromagnetic cavity modes at the transition frequency between the atom's excited and ground states. The result is that the so-called "spontaneous" emission time is lengthened considerably (for example, by factors of ten), simply because spontaneous emission is not so spontaneous after all, but rather is driven by vacuum fluctuations. Eliminate the modes, and you eliminate the zero-point fluctuations of the modes, hence suppressing decay of the excited state. As stated in a review article on cavity QED in Scientific American, "An excited atom that would ordinarily emit a low-frequency photon can not do so, because there are no vacuum fluctuations to stimulate its emission." In its application to energy generation, mode suppression would be used to perturb the hypothesized dynamic ground state absorption/emission balance to lead to energy release.

An example in which Nature herself may have taken advantage of energetic vacuum effects is discussed in a model published by ZPE colleagues A. Rueda of California State University at Long Beach, B. Haisch of Lockheed-Martin, and D. Cole of IBM (1995). In a paper published in the Astrophysical Journal, they propose that the vast reaches of outer space constitute an ideal environment for ZPE acceleration of nuclei and thus provide a mechanism for "powering up" cosmic rays. Details of the model would appear to account for other observed phenomena as well, such as the formation of cosmic voids. This raises the possibility of utilizing a "sub-cosmic-ray" approach to accelerate protons in a cryogenically-cooled, collision-free vacuum trap and thus extract energy from the vacuum fluctuations by this mechanism.

The Vacuum as the Source of Gravity and Inertia

What of the fundamental forces of gravity and inertia that we seek to overcome in space travel? We have phenomenological theories that describe their effects (Newton's Laws and their relativistic generalizations), but what of their origins?

The first hint that these phenomena might themselves be traceable to roots in the underlying fluctuations of the vacuum came in a study published by the well-known Russian physicist Andrei Sakharov. Searching to derive Einstein's phenomenological equations for general relativity from a more fundamental set of assumptions, Sakharov came to the conclusion that the entire panoply of general relativistic phenomena could be seen as induced effects brought about by changes in the quantum-fluctuation energy of the vacuum due to the presence of matter. In this view the attractive gravitational force is more akin to the induced Casimir force discussed above than to the fundamental inverse square law Coulomb force between charged particles with which it is often compared. Although speculative when first introduced by Sakharov, this hypothesis has led to a rich and ongoing literature, including contributions of my own on quantum-fluctuation-induced gravity, a literature that continues to yield deep insight into the role played by vacuum forces.

Given an apparent deep connection between gravity and the zero-point fluctuations of the vacuum, a similar connection must exist between these selfsame vacuum fluctuations and inertia. This is because it is an empirical fact that the gravitational and inertial masses have the same value, even though the underlying phenomena are quite disparate. Why, for example, should a measure of the resistance of a body to being accelerated, even if far from any gravitational field, have the same value that is associated with the gravitational attraction between bodies? Indeed, if one is determined by vacuum fluctuations, so must the other. To get to the heart of inertia, consider a specific example in which you are standing on a train in the station. As the train leaves the platform with a jolt, you could be thrown to the floor. What is this force that knocks you down, seemingly coming out of nowhere? This phenomenon, which we conveniently label inertia and go on about our physics, is a subtle feature of the universe that has perplexed generations of physicists from Newton to Einstein. Since in this example the sudden disquieting imbalance results from acceleration "relative to the fixed stars," in its most provocative form one could say that it was the "stars" that delivered the punch. This key feature was emphasized by the Austrian philosopher of science Ernst Mach, and is now known as Mach's Principle. Nonetheless, the mechanism by which the stars might do this deed has eluded convincing explication.

Addressing this issue in a paper entitled "Inertia as a Zero-Point Field Lorentz Force," my colleagues and I (Haisch, Rueda & Puthoff, 1994) were successful in tracing the problem of inertia and its connection to Mach's Principle to the ZPE properties of the vacuum. In a sentence, although a uniformly moving body does not experience a drag force from the (Lorentz-invariant) vacuum fluctuations, an accelerated body meets a resistance (force) proportional to the acceleration. By accelerated we mean, of course, accelerated relative to the fixed stars. It turns out that an argument can be made that the quantum fluctuations of distant matter structure the local vacuum-fluctuation frame of reference. Thus, in the example of the train the punch was delivered by the wall of vacuum fluctuations acting as a proxy for the fixed stars through which one attempted to accelerate.

The implication for space travel is this: Given the evidence generated in the field of cavity QED (discussed above), there is experimental evidence that vacuum fluctuations can be altered by technological means. This leads to the corollary that, in principle, gravitational and inertial masses can also be altered. The possibility of altering mass with a view to easing the energy burden of future spaceships has been seriously considered by the Advanced Concepts Office of the Propulsion Directorate of the Phillips Laboratory at Edwards Air Force Base. Gravity researcher Robert Forward accepted an assignment to review this concept. His deliverable product was to recommend a broad, multipronged effort involving laboratories from around the world to investigate the inertia model experimentally. The Abstract reads in part:

Many researchers see the vacuum as a central ingredient of 21st-Century physics…. Some even believe the vacuum may be harnessed to provide a limitless supply of energy. This report summarizes an attempt to find an experiment that would test the Haisch, Rueda and Puthoff (HRP) conjecture that the mass and inertia of a body are induced effects brought about by changes in the quantum-fluctuation energy of the vacuum…. It was possible to find an experiment that might be able to prove or disprove that the inertial mass of a body can be altered by making changes in the vacuum surrounding the body.

With regard to action items, Forward in fact recommends a ranked list of not one but four experiments to be carried out to address the ZPE-inertia concept and its broad implications. The recommendations included investigation of the proposed "sub-cosmic-ray energy device" mentioned earlier, and the investigation of a hypothesized "inertia-wind" effect proposed by our laboratory and possibly detected in early experimental work, though the latter possibility is highly speculative at this point.

Engineering the Vacuum For “Warp Drive”

Perhaps one of the most speculative, but nonetheless scientifically-grounded, proposals of all is the so-called Alcubierre Warp Drive. Taking on the challenge of determining whether Warp Drive à la Star Trek was a scientific possibility, general relativity theorist Miguel Alcubierre of the University of Wales set himself the task of determining whether faster-than-light travel was possible within the constraints of standard theory. Although such clearly could not be the case in the flat space of special relativity, general relativity permits consideration of altered spacetime metrics where such a possibility is not a priori ruled out. Alcubierre's further self-imposed constraints on an acceptable solution included the requirements that no net time distortion should occur (breakfast on Earth, lunch on Alpha Centauri, and home for dinner with your wife and children, not your great-great-great grandchildren), and that the occupants of the spaceship were not to be flattened against the bulkhead by unconscionable accelerations.

A solution meeting all of the above requirements was found and published by Alcubierre in Classical and Quantum Gravity in 1994. The solution discovered by Alcubierre involved the creation of a local distortion of spacetime such that spacetime is expanded behind the spaceship, contracted ahead of it, and yields a hypersurfer-like motion faster than the speed of light as seen by observers outside the disturbed region. In essence, on the outgoing leg of its journey the spaceship is pushed away from Earth and pulled towards its distant destination by the engineered local expansion of spacetime itself. For followup on the broader aspects of "metric engineering" concepts, one can refer to a paper published by myself in Physics Essays (Puthoff, 1996). Interestingly enough, the engineering requirements rely on the generation of macroscopic, negative-energy-density, Casimir-like states in the quantum vacuum of the type discussed earlier. Unfortunately, meeting such requirements is beyond technological reach without some unforeseen breakthrough.
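For reference, the metric Alcubierre published is usually written in the following form (a textbook transcription, with c = 1, v_s the bubble's velocity along x, and f(r_s) a smooth function equal to 1 inside the bubble and 0 far away):

    % Alcubierre (1994) warp-drive metric
    ds^2 = -dt^2 + \bigl(dx - v_s(t)\,f(r_s)\,dt\bigr)^2 + dy^2 + dz^2,
    \qquad v_s(t) = \frac{dx_s(t)}{dt},
    \qquad r_s = \sqrt{(x - x_s(t))^2 + y^2 + z^2}

The expansion behind the ship and the contraction ahead of it are encoded in the derivative of f: spacetime is flat wherever f is constant, so the ship rides inside a locally flat bubble.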

Related, of course, is the knowledge that general relativity permits the possibility of wormholes, topological tunnels which in principle could connect distant parts of the universe, a cosmic subway so to speak. Publishing in the American Journal of Physics, theorists Morris and Thorne initially outlined in some detail the requirements for traversable wormholes and found that, in principle, the possibility exists provided one has access to Casimir-like, negative-energy-density quantum vacuum states. This has led to a rich literature, summarized recently in a book by Matt Visser of Washington University. Again, the technological requirements appear out of reach for the foreseeable future, perhaps awaiting new techniques for cohering the ZPE vacuum fluctuations in order to meet the energy-density requirements.

Where does this leave us? As we peer into the heavens from the depth of our gravity well, hoping for some “magic” solution that will launch our spacefarers first to the planets and then to the stars, we are reminded of Arthur C. Clarke’s phrase that highly-advanced technology is essentially indistinguishable from magic. Fortunately, such magic appears to be waiting in the wings of our deepening understanding of the quantum vacuum in which we live.

[Source: Can the Vacuum Be Engineered for Space Flight Applications? Overview of Theory and Experiments by H. E. Puthoff (PDF)]

Negative Energy: From Theory to Lab


Spacetime distortion is a common method proposed for superluminal travel. Such spacetime contortions would enable another staple of science fiction as well: faster-than-light travel. Warp drive might appear to violate Einstein's special theory of relativity. But special relativity says that you cannot outrun a light signal in a fair race in which you and the signal follow the same route. When spacetime is warped, it might be possible to beat a light signal by taking a different route, a shortcut. The contraction of spacetime in front of the bubble and the expansion behind it create such a shortcut.
One problem with Alcubierre's original model is that the interior of the warp bubble is causally disconnected from its forward edge. A starship captain on the inside cannot steer the bubble or turn it on or off; some external agency must set it up ahead of time. To get around this problem, Krasnikov proposed a "superluminal subway," a tube of modified spacetime (not the same as a wormhole) connecting Earth and a distant star. Within the tube, superluminal travel in one direction is possible. During the outbound journey at sublight speed, a spaceship crew would create such a tube. On the return journey, they could travel through it at warp speed. Like warp bubbles, the subway involves negative energy.

Almost every faster-than-light scheme requires negative energy at very large densities. Today I came across a paper by E. W. Davis in which he describes various experimental conditions under which negative energy can be generated in the lab.

Examples of Exotic or “Negative” Energy

The exotic (energy condition-violating) mass-energy fields that are known to occur in nature are:

  • Radial electric or magnetic fields. These are borderline exotic: they would violate the energy conditions if their tension were infinitesimally larger for a given energy density.
  • Squeezed quantum states of the electromagnetic field and other squeezed quantum fields.
  • Gravitationally squeezed vacuum electromagnetic zero-point energy.
  • Other quantum fields/states/effects. In general, the local energy density in quantum field theory can be negative due to quantum coherence effects. Other examples that have been studied are Dirac field states: the superposition of two single-particle electron states and the superposition of two multi-electron-positron states. In the former (latter), the energy densities can be negative when two single (multi-) particle states have the same number of electrons (electrons and positrons) or when one state has one more electron (electron-positron pair) than the other.

Since the laws of quantum field theory place no strong restrictions on negative energies and fluxes, it might be possible to produce gross macroscopic effects such as warp drive, traversable wormholes, violation of the second law of thermodynamics, and time machines. The above examples are representative forms of mass-energy that possess negative energy density or are borderline exotic.

Generating Negative Energy in Lab

Davis and Puthoff describe various experiments to generate and detect negative energy in the lab. Some of them are as follows:

1. Squeezed Quantum States: Substantial theoretical and experimental work has shown that in many quantum systems the limits to measurement precision imposed by the quantum vacuum zero-point fluctuations (ZPF) can be breached by decreasing the noise in one observable (or measurable quantity) at the expense of increasing the noise in the conjugate observable; at the same time the variations in the first observable, say the energy, are reduced below the ZPF such that the energy becomes "negative." "Squeezing" is thus the control of quantum fluctuations and corresponding uncertainties, whereby one can squeeze the variance of one (physically important) observable quantity provided the variance in the (physically unimportant) conjugate variable is stretched/increased. The squeezed quantity possesses an unusually low variance, meaning less variance than would be expected on the basis of the equipartition theorem. One can in principle exploit quantum squeezing to extract energy from one place in the ordinary vacuum at the expense of accumulating excess energy elsewhere.
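In standard quantum-optics notation the trade-off reads as follows (a textbook-level sketch with squeeze parameter r and quadratures X_1, X_2; not specific to any experiment discussed here):

    % Quadrature variances of a squeezed vacuum state:
    \langle (\Delta X_1)^2 \rangle = \tfrac{1}{4} e^{-2r}, \qquad
    \langle (\Delta X_2)^2 \rangle = \tfrac{1}{4} e^{+2r}, \qquad
    \Delta X_1 \, \Delta X_2 = \tfrac{1}{4}

Increasing r drives the X_1 fluctuations below the vacuum level (the sub-vacuum, "negative" side of the cycle) at the cost of stretching X_2, while the uncertainty product stays at its minimum value.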

2. Gravitationally Squeezed Electromagnetic ZPF: A natural source of negative energy comes from the effect that gravitational fields (of astronomical bodies) in space have upon the surrounding vacuum. For example, the gravitational field of the Earth produces a zone of negative energy around it by dragging some of the virtual particle pairs (a.k.a. vacuum ZPF) downward. One can utilize the negative vacuum energy densities, which arise from distortion of the electromagnetic zero point fluctuations due to the interaction with a prescribed gravitational background, for providing a violation of the energy conditions. The squeezed quantum states of quantum optics provide a natural form of matter having negative energy density. The analysis, via quantum optics, shows that gravitation itself provides the mechanism for generating the squeezed vacuum states needed to support stable traversable wormholes. The production of negative energy densities via a squeezed vacuum is a necessary and unavoidable consequence of the interaction or coupling between ordinary matter and gravity, and this defines what is meant by gravitationally squeezed vacuum states.

The general result of the gravitational squeezing effect is that as the gravitational field strength increases, the negative energy zone (surrounding the mass) also increases in strength. The table shows when gravitational squeezing becomes important for example masses. It shows that in the case of the Earth, Jupiter and the Sun, this squeeze effect is extremely feeble because only ZPF mode wavelengths above 0.2 m – 78 km are affected. For a solar mass black hole (radius of 2.95 km), the effect is still feeble because only ZPF mode wavelengths above 78 km are affected. But also note that Planck mass objects will have an enormously strong negative energy zone surrounding them because all ZPF mode wavelengths above 8.50×10^-34 meters will be squeezed, in other words, all wavelengths of interest for vacuum fluctuations. Protons will have the strongest negative energy zone in comparison because the squeezing effect includes all ZPF mode wavelengths above 6.50×10^-53 meters. Furthermore, a body smaller than a nuclear diameter (≈ 10^-16 m) and containing the mass of a mountain (≈ 10^11 kg) has a fairly strong negative energy zone because all ZPF mode wavelengths above 10^-15 meters will be squeezed.
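The table's thresholds are consistent with a cutoff wavelength of roughly eight pi Schwarzschild radii, λ ≈ 16πGM/c². That scaling is inferred here from the quoted values rather than taken from the paper, so treat this Python sketch as an assumption:

    import math

    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    C = 299_792_458    # speed of light, m/s

    def squeeze_wavelength(mass_kg):
        """ZPF-mode wavelength above which gravitational squeezing matters,
        assuming the lambda ~ 16*pi*G*M/c^2 scaling inferred from the table."""
        return 16 * math.pi * G * mass_kg / C**2

    for label, m in [("Sun", 1.989e30), ("Earth", 5.972e24),
                     ("mountain, ~1e11 kg", 1e11), ("Planck mass", 2.176e-8),
                     ("proton", 1.673e-27)]:
        print(f"{label:20s} lambda > {squeeze_wavelength(m):.2e} m")

Running it reproduces the quoted numbers to within rounding: about 7×10^4 m for the Sun, 0.2 m for the Earth, 4×10^-15 m for the mountain-mass body, 8×10^-34 m for a Planck mass and 6×10^-53 m for a proton.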

We are presently unaware of any way to artificially generate gravitational squeezing of the vacuum in the laboratory. This will be left for future investigation. Aliens may help us!!

3. A Moving Mirror: Negative energy can be created by a single moving reflecting surface (a moving mirror). A mirror moving with increasing acceleration generates a flux of negative energy that emanates from its surface and flows out into the space ahead of the mirror. However, this effect is known to be exceedingly small, and it is not the most effective way to generate negative energy for our purposes.

4. Radial Electric/Magnetic Fields: It is beyond the scope here to include all the technical configurations by which one can generate radial electric or magnetic fields. Suffice it to say that ultrahigh-intensity tabletop lasers have been used to generate extreme electric and magnetic field strengths in the lab. Ultrahigh-intensity lasers use the chirped-pulse amplification (CPA) technique to boost the total output beam power. All laser systems simply repackage energy as a coherent package of optical power, but CPA lasers repackage the laser pulse itself during the amplification process. In typical high-power short-pulse laser systems, it is the peak intensity, not the energy or the fluence, which causes pulse distortion or laser damage. However, the CPA laser dissects a laser pulse according to its frequency components, and reorders it into a time-stretched lower-peak-intensity pulse of the same energy. This benign pulse can then be amplified safely to high energy, and then only afterwards reconstituted as a very short pulse of enormous peak power – a pulse which could never itself have passed safely through the laser system (see Figure 2). Made more tractable in this way, the pulse can be amplified to substantial energies (with orders of magnitude greater peak power) without encountering intensity-related problems.

The extreme output beam power, fields and physical conditions that have been achieved by ultrahigh-intensity tabletop lasers are:

  • Power intensity: 10^15 – 10^26 W/cm² (10^30 W/cm² using SLAC as a booster)
  • Peak-power pulse duration: ≤ 10^3 femtoseconds
  • E-fields: ≈ 10^14 – 10^18 V/m [note: the critical quantum-electrodynamical (vacuum breakdown) field strength is E_c = m_e²c³/(eħ) ≈ 10^18 V/m; m_e and e are the electron mass and charge; both critical fields are checked numerically in the sketch after this list]
  • B-fields: ≈ several × 10^6 Tesla [note: the critical quantum-electrodynamical (vacuum breakdown) field strength is B_c = E_c/c ≈ 10^10 Tesla]
  • Ponderomotive acceleration of electrons: ≈ 10^17 – 10^30 g's (1 g = 9.81 m/sec²)
  • Light pressure: ≈ 10^9 – 10^15 bars
  • Plasma temperatures: > 10^10 K
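The two bracketed "vacuum breakdown" notes can be checked from fundamental constants alone; a minimal Python sketch:

    # Schwinger critical fields, at which the QED vacuum itself breaks down.
    M_E = 9.1093837e-31     # electron mass, kg
    Q_E = 1.6021766e-19     # elementary charge, C
    HBAR = 1.0545718e-34    # reduced Planck constant, J*s
    C = 299_792_458         # speed of light, m/s

    E_crit = M_E**2 * C**3 / (Q_E * HBAR)   # V/m
    B_crit = E_crit / C                     # Tesla

    print(f"E_c = {E_crit:.2e} V/m  (quoted above as ~10^18 V/m)")
    print(f"B_c = {B_crit:.2e} T    (quoted above as ~10^10 Tesla)")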

Ultrahigh-intensity lasers can generate an electric field energy density of ~ 10^16 – 10^28 J/m^3 and a magnetic field energy density of ~ 10^19 J/m^3. These energy densities are about the right order of magnitude to explore generating kilometer- to AU-sized wormholes. But that would be difficult to engineer on Earth. However, these energy densities are well above what would be required to explore the generation of micro-wormholes in the lab.

5. Negative Energy from Squeezed Light: Negative energy can be generated by an array of ultrahigh-intensity (femtosecond) lasers using an ultrafast rotating mirror system. In this scheme a laser beam is passed through an optical cavity resonator made of lithium niobate (LiNbO3) crystal that is shaped like a cylinder with rounded silvered ends to reflect light. The resonator will act to produce a secondary lower frequency light beam in which the pattern of photons is rearranged into pairs. This is the quantum optical squeezing of light effect that we described previously. Therefore, the squeezed light beam emerging from the resonator will contain pulses of negative energy interspersed with pulses of positive energy in accordance with the quantum squeezing model.

In this example both the negative and positive energy pulses are of ≈ 10^-15 second duration. We could arrange a set of rapidly rotating mirrors to separate the positive and negative energy pulses from each other. The light beam is to strike each mirror surface at a very shallow angle while the rotation ensures that the negative energy pulses are reflected at a slightly different angle from the positive energy pulses. A small spatial separation of the two different energy pulses will occur at some distance from the rotating mirror. Another system of mirrors will be needed to redirect the negative energy pulses to an isolated location and concentrate them there. The rotating mirror system can actually be implemented via non-mechanical means.

Negative Energy Pulses Generated from Quantum Optical Squeezing. Another way to squeeze light would be to manufacture extremely reliable light pulses containing precisely one, two or more photons each. In one proposed arrangement, a gas of sodium atoms is placed within the squeezing cavity, and a laser beam is directed through the gas. The beam is reflected back on itself by a mirror to form a standing wave within the sodium chamber. This wave causes rapid variations in the optical properties of the sodium, thus causing rapid variations in the squeezed light, so that we can induce rapid reflections of pulses by careful design.

On another note, when a quantum state is close to a squeezed vacuum state, there will almost always be some negative energy densities present, and the fluctuations in energy density start to become nearly as large as the expectation value itself.

Observing Negative Energy in Lab:

Negative energy should be observable in lab experiments. Negative energy regions in space are predicted to produce a unique signature corresponding to lensing, chromaticity and intensity effects in micro- and macro-lensing events on galactic and extragalactic/cosmological scales. It has been shown that these effects provide a specific signature that allows for discrimination between ordinary (positive mass-energy) and negative mass-energy lenses via the spectral analysis of astronomical lensing events. Theoretical modeling of negative energy lensing effects has led to intense astronomical searches for naturally occurring traversable wormholes in the universe. Computer model simulations and comparison of their results with recent satellite observations of gamma ray bursts (GRBs) have shown that putative negative energy (i.e., traversable wormhole) lensing events very closely resemble the main features of some GRBs. Current observational data suggest that large amounts of naturally occurring "exotic mass-energy" must have existed sometime between the epoch of galaxy formation and the present in order to (properly) quantitatively account for the "age-of-the-oldest-stars-in-the-galactic-halo" problem and the cosmological evolution parameters.

When background light rays strike a negative energy lensing region, they are swept out of the central region, thus creating an umbra region of zero intensity. At the edges of the umbra the rays accumulate and create a rainbow-like caustic with enhanced light intensity. The lensing of a negative mass-energy region is not analogous to a diverging lens because in certain circumstances it can produce more light enhancement than does the lensing of an equivalent positive mass-energy region. Real background sources in lensing events can have non-uniform brightness distributions on their surfaces and a dependency of their emission with the observing frequency. These complications can result in chromaticity effects, i.e. in spectral changes induced by differential lensing during the event. The modeling of such effects is quite lengthy, somewhat model dependent, and with recent application only to astronomical lensing events. Suffice it to say that future work is necessary to scale down the predicted lensing parameters and characterize their effects for lab experiments in which the negative energy will not be of astronomical magnitude. Present ultrahigh-speed optics and optical cavities, lasers, photonic crystal (and switching) technology, sensitive nano-sensor technology, and other techniques are very likely capable of detecting the very small magnitude lensing effects expected in lab experiments.

Thus, it can be concluded that negative energy can be generated in the lab and need no longer be dismissed as mere 'fiction'. In a recent work it was suggested that naturally occurring wormholes can be detected, since they attract charged particles which, in turn, create a magnetic field. I'll review that work and present a model for a realistic warp drive.

[REF: Experimental Concepts for Generating Negative Energy in the Laboratory by E. W. Davis and H. E. Puthoff]

Wormhole Induction Propulsion and Interstellar Travel: A Brief Review

Though it seems impossible to colonize the galaxy at sub-light speed, we can still colonize the universe at sub-light velocities without FTL travel [using self-replicating probes and bioprograms, which I've discussed earlier], but the resulting colonies are separated from each other by the vastness of interstellar space. In the past, trading empires have coped with time delays on commerce routes of the order of a few years at most. This suggests that economic zones would find it difficult to encompass more than one star system. Travelling beyond this would require significant re-orientation upon return, catching up with cultural changes, etc. It's unlikely people would routinely travel much beyond this and return.

A wormhole could be constructed by confining exotic matter to narrow regions to form the edges of a three-dimensional cube. The faces of the cube would resemble mirrors, except that the image is of the view from the other end of the wormhole. Although there is only one cube of material, it appears at two locations to the external observer. The cube links two 'ends' of a wormhole together. A traveller, avoiding the edges and crossing through a face of one of the cubes, experiences no stresses and emerges from the corresponding face of the other cube. The cube has no interior but merely facilitates passage from 'one' cube to the 'other'.

The exotic nature of the edge material requires negative energy density and tension/pressure. But the laws of physics do not forbid such materials. The energy density of the vacuum may be negative, as is the Casimir field between two closely spaced conductors. Negative pressure fields, according to standard astrophysics, drove the expansion of the universe during its 'inflationary' phase. Cosmic string (another astrophysical speculation) has negative tension. The mass of negative energy the wormhole needs is just the amount that would form a black hole if it were positive, normal energy. A traversable wormhole can be thought of as the negative energy counterpart to a black hole, and so justifies the appellation 'white' hole. The amount of negative energy required for a traversable wormhole scales with the linear dimensions of the wormhole mouth. A one meter cube entrance requires a negative mass of roughly 10^27 kg.
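That rough 10^27 kg figure follows from the black-hole criterion just stated: take the (negative) mass whose Schwarzschild radius matches the mouth size, |M| ≈ rc²/2G. A minimal Python sketch:

    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    C = 299_792_458    # speed of light, m/s

    def wormhole_mass(mouth_radius_m):
        """Mass whose Schwarzschild radius equals the mouth size: M = r c^2 / (2 G).
        A traversable wormhole needs roughly this much *negative* mass-energy."""
        return mouth_radius_m * C**2 / (2 * G)

    # ~7e26 kg for a 1 m mouth, about a third of Jupiter's mass
    print(f"|M| ~ {wormhole_mass(1.0):.1e} kg for a 1 m mouth")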

The problem with employing negative energy as a propulsion material is that it is very difficult to manage at the high energy densities required. Rapid interplanetary and interstellar space flight by means of spacetime wormholes is possible, in principle, whereby the traditional rocket propulsion approach can be abandoned in favor of a new paradigm involving the use of spacetime manipulation. In this scheme, the light speed barrier becomes irrelevant and spacecraft no longer need to carry large mass fractions of traditional chemical or nuclear propellants and related infrastructure over distances larger than several astronomical units (AU). Travel time over very large distances will be reduced by orders of magnitude.

In previous work by Maccone, it was proposed that an ultra-high magnetic field could generate significant curvature in the spacetime fabric, enough for a spacecraft to pass through. More specifically, Maccone claims that static homogeneous magnetic/electric fields with cylindrical symmetry can create spacetime curvature which manifests itself as a traversable wormhole. Although the claim of inducing spacetime curvature is correct, Levi-Civita's metric solution is not a wormhole.[ref]

It is speculated that future WHIP spacecraft could deploy ultrahigh magnetic fields along with exotic matter-energy fields (e.g. radial electric or magnetic fields, Casimir energy field, etc.) in space to create a wormhole and then apply conventional space propulsion to move through the throat to reach the other side in a matter of minutes or days, whence the spacecraft emerges several AUs or light-years away from its starting point. The requirement for conventional propulsion in WHIP spacecraft would be strictly limited by the need for short travel through the wormhole throat as well as for orbital maneuvering near distant worlds. The integrated system comprising the magnetic induction/exotic field wormhole and conventional propulsion units could be called WHIPIT or "Wormhole Induction Propulsion Integrated Technology."

It is based on the concept that a magnetic field can distort the spacetime fabric, governed by the following equation:

a = K/B

where a is the radius of curvature of spacetime and B is the magnetic field.

Here K = 3.4840×10^18 T·m is known as the radius-of-curvature constant. A companion relation determines the B required for a given ν0, the gravitationally induced variation of light speed within the curvature region (roughly, the speed of the spacecraft), and L, the length of the solenoid.
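Taking a = K/B at face value, the numbers quoted later in this post follow directly; a minimal Python sketch:

    K = 3.4840e18           # T*m, the radius-of-curvature constant quoted above
    LIGHT_YEAR = 9.4607e15  # m

    def curvature_radius(b_tesla):
        """Radius of spacetime curvature induced by a magnetic field B."""
        return K / b_tesla

    for b in (1e3, 1e9):    # 1000 T (explosive flux compression) and a 10^9 T goal
        a = curvature_radius(b)
        print(f"B = {b:.0e} T -> a = {a:.3e} m = {a / LIGHT_YEAR:.3g} light-years")

At 1000 Tesla this gives a ≈ 3.48×10^15 m, the 0.368 light-years cited in the laboratory discussion below; at 10^9 Tesla the radius of curvature shrinks to a few million kilometers.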

[Technical Issues: Quoted]

Traversable wormholes are creatures of classical GTR and represent non-trivial topology change in the spacetime manifold. This makes mathematicians cringe because it raises the question of whether topology can change or fluctuate to accommodate wormhole creation. Black holes and naked singularities are also creatures of GTR representing non-trivial topology change in spacetime, yet they are accepted by the astrophysics and mathematical communities — the former by Hubble Space Telescope discoveries and the latter by theoretical arguments due to Kip Thorne, Stephen Hawking, Roger Penrose and others. The Bohm-Aharonov effect is another example which owes its existence to non-trivial topology change in the manifold. The topology change (censorship) theorems discussed in Visser (1995) make precise mathematical statements about the “mathematician’s topology” (topology of spacetime is fixed!), however, Visser correctly points out that this is a mathematical abstraction. In fact, Visser (1990) proved that the existence of an everywhere Lorentzian metric in spacetime is not a sufficient condition to prevent topology change. Furthermore, Visser (1990, 1995) elaborates that physical probes are not sensitive to this mathematical abstraction, but instead they typically couple to the geometrical features of space. Visser (1990) also showed that it is possible for geometrical effects to mimic the effects of topology change. Topology is too limited a tool to accurately characterize a generic traversable wormhole; in general one needs geometric information to detect the presence of a wormhole, or more precisely to locate the wormhole throat (Visser, private communication, 1997).

Levi-Civita’s spacetime metric is simply a hypercylinder with a position dependent gravitational potential: no asymptotically flat region, no flared-out wormhole mouth and no wormhole throat. Maccone’s equations for the radial (hyperbolic) pressure, stress and energy density of the “magnetic wormhole” configuration are thus incorrect.

What a wormhole mouth might look like to space travelers.

In addition, directing attention on the behavior of wormhole geometry at asymptotic infinity is not too profitable. Visser (private communication, 1997; Hochberg and Visser, 1997) demonstrates that it is only the behavior near the wormhole throat that is critical to understanding what is going on, and that a generic throat can be defined without having to make all the symmetry assumptions and without assuming the existence of an asymptotically flat spacetime to embed the wormhole in. One only needs to know the generic features of the geometry near the throat in order to guarantee violations of the null energy condition (NEC; see Hawking and Ellis, 1973) for certain open regions near the throat (Visser, private communication, 1997). There are general theorems of differential geometry that guarantee that there must be NEC violations (meaning exotic matter-energy is present) at a wormhole throat. In view of this, however, it is known that static radial electric or magnetic fields are borderline exotic when threading a wormhole if their tension were infinitesimally larger, for a given energy density (Herrmann, 1989; Hawking and Ellis, 1973). Other exotic (energy condition violating) matter-energy fields are known to be squeezed states of the electromagnetic field, Casimir (electromagnetic zero-point) energy and other quantum fields/states/effects. With respect to creating wormholes, these have the unfortunate reputation of alarming physicists. This is unfounded since all the energy condition hypotheses have been experimentally tested in the laboratory and experimentally shown to be false — 25 years before their formulation (Visser, 1990 and references cited therein). Violating the energy conditions commits no offense against nature.

Interstellar Travel and WHIP [Wormhole Induction Propulsion]

WHIP spacecraft will have multifunction integrated technology for propulsion. The Wormhole Induction Propulsion Integrated Technology (WHIPIT) would entail two modes. The first mode is an advanced conventional system (chemical, nuclear fission/fusion, ion/plasma, antimatter, etc.) which would provide propulsion through the wormhole throat, orbital maneuvering capability near stellar or planetary bodies, and spacecraft attitude control and orbit corrections. An important system driver affecting mission performance and cost is the overall propellant mass-fraction required for this mode. A desirable constraint limiting this to acceptable (low) levels would be that an advanced conventional system regenerate its onboard fuel supply internally or obtain and process its fuel supply from the in-situ space environment. Other important constraints and/or performance requirements to consider for this propulsion mode would include specific impulse, thrust, energy conversion schemes, etc.

Hypothetical view of two wormhole mouths patched to a hypercylinder curvature envelope. The small (large) configuration results from the radius of curvature induced by a larger (smaller) ultrahigh magnetic field.

The second WHIPIT mode is the stardrive component. This would provide the necessary propulsion to rapidly move the spacecraft over interplanetary or interstellar distances through a traversable wormhole. The system would generate a static, cylindrically symmetric ultrahigh magnetic field to create a hypercylinder curvature envelope (gravity well) near the spacecraft to pre-stress space into a pseudo-wormhole configuration. The radius of the hypercylinder envelope should be no smaller than the largest linear dimension of the spacecraft. As the spacecraft is gravitated into the envelope, the field-generator system then changes the cylindrical magnetic field into a radial configuration while giving it a tension that is greater than its energy density. A traversable wormhole throat is then induced near the spacecraft where the hypercylinder and throat geometries are patched together. The conventional propulsion mode then kicks on to nudge the spacecraft through the throat and send its occupants on their way to adventure. This scenario would apply if ultrahigh electric fields were employed instead. If optimization of wormhole throat (geometry) creation and hyperspace tunneling distance requires a fully exotic energy field to thread the throat, then the propulsion system would need to be capable of generating and deploying a Casimir (or other exotic) energy field. Ultrahigh magnetic/electric and exotic field generation schemes are speculative, however, and will be left for future work.

Practical Approach

The equations suggest a way to perform a laboratory experiment whereby one could apply a powerful static homogeneous (cylindrically symmetric) magnetic field in a vacuum, thereby creating spacetime curvature in principle, and measure the speed of a light beam through it. A measurable slowing of c in this arrangement would demonstrate that a curvature effect has been created in the experiment.

From Table I, it is apparent that laboratory magnetic field strengths would need to be > 10^9 – 10^10 Tesla so that a significant radius of curvature and slowing of c can be measured. Experiments employing chemical explosive/implosive magnetic technologies would be an ideal arrangement for this. The limit of magnetic field generation for chemical explosives/implosives is ~ 10^3 Tesla, and the quantum limit for ordinary metals is ~ 50,000 Tesla. Explosion/implosion work done by Russian (MC-1 generator, ISTC grant), Los Alamos National Lab (ATLAS), National High Magnetic Field Lab and Sandia National Lab (SATURN) investigators has employed magnetic solenoids of good homogeneity with lengths of ~ 10 cm, having a peak rate-of-rise of field of ~ 10^9 Tesla/sec, where a few nanoseconds is spent at 1000 Tesla, which is long enough for a good measurement of c. Further, with picosecond pulses, c could be measured to a part in 10^2 or 10^3. At 1000 Tesla, c^2 – v^2(0) ≈ 0 m^2/sec^2 and the radius of curvature is 0.368 light-years. If the peak rate-of-rise of field (~ 10^9 Tesla/sec) can be used, then a radius of curvature ≤ several × 10^6 km can be generated along with c^2 – v^2(0) ≥ several × 10^4 m^2/sec^2.

It will be necessary to consider advancing the state of the art of magnetic induction technologies in order to reach static field strengths that are > 10^9 – 10^10 Tesla. Extremely sensitive measurements of c at the one part in 10^6 or 10^7 level may be necessary for laboratory experiments involving field strengths of ~ 10^9 Tesla. Magnetic induction technologies based on nuclear explosives/implosives may need to be seriously considered in order to achieve large magnitude results. An order of magnitude calculation indicates that magnetic fields generated by nuclear pulsed energy methods could be magnified to (brief) static values of ≥ 10^9 Tesla by factors of the nuclear-to-chemical binding energy ratio (≥ 10^6). Other experimental methods employing CW lasers, repetitive-pulse free electron lasers, neutron beam-pumped UO2 lasers, pulsed laser-plasma interactions or pulsed hot (theta pinch) plasmas either generate insufficient magnetic field strengths for our purposes or cannot generate them at all within their operating modes.[Ref]

That's why I find this so interesting: if and when we become capable of generating such high magnetic fields, I think this would be a prevailing propulsion technology. The effect can be used to create a wormhole by patching the hypercylinder envelope to a throat that is induced by either radially stressing the ultrahigh field or employing additional exotic energy.

[Ref: Davis, E. W., "Wormhole Induction Propulsion (WHIP)"; Maccone, C. (1995), "Interstellar Travel Through Magnetic Wormholes", JBIS, Vol. 48, No. 11, pp. 453–458]

Promising Applications of Carbon Nanotubes (CNTs)

Scientific research on carbon nanotubes has witnessed a large expansion. The fact that CNTs can be produced by relatively simple synthesis processes, combined with their record-breaking properties, has led to demonstrations of many different types of applications, ranging from fast field-effect transistors, flat screens, transparent electrodes, electrodes for rechargeable batteries and conducting polymer composites to bulletproof textiles and transparent loudspeakers.

As a result, we have seen enormous progress in controlling the growth of CNTs. CNTs with controlled diameter can be grown along a given direction, either parallel or perpendicular to the surface. In recent years, narrow-diameter CNTs with two or more walls have been grown at high yield.

Behind this progress, one might forget the tantalizing challenges that remain. Samples of CNTs still contain a large amount of disordered forms of carbon, and catalytic metal particles or parts of the growth support still make up a large fraction of the carbon nanotube mass. As-produced CNTs continue to show a relatively wide dispersion of diameters and lengths. Dispersing CNTs and controlling their distribution in a matrix or on a given surface is still a challenge. There has been enormous progress in size-selecting CNTs; however, the commonly applied techniques either limit applications due to the presence of surfactant molecules or cannot be applied to larger volumes.

Analytical techniques play an important role in developing new synthesis, purification and separation processes. Screening of carbon nanotubes is essential for any real-world application, but it is also essential for their fundamental understanding, such as understanding the effects of tube bundling and doping and the role of defects.

At the Centre for Materials Elaboration and Structural Studies (CEMES-CNRS), Professor Wolfgang Bacsa and Pascal Puech have focused on screening CNTs with optical methods and on developing physical processes for carbon nanotubes, working closely with materials chemists at different local institutions. We have concentrated our attention on double-wall carbon nanotubes grown by the catalytic chemical vapour deposition technique.

Their small diameter, high electrical conductivity and large length, as well as the fact that the inner wall is protected from the environment by the outer wall, are all good attributes for incorporating them into polymer composites. Depending on the synthesis process used, we find the two walls are at times strongly or weakly coupled.

By studying their Raman spectra at high pressure, in acids, under strong photo-excitation or on individual tubes, we can observe the effects on the internal and external walls. A good knowledge of the Raman spectra of double-wall CNTs gives us the opportunity to map the Raman signal of ultra-thin slices of composites and determine the distribution, agglomeration state and interaction with the matrix.


[Figure: TEM images (V. Tishkova, CEMES-CNRS) of double-wall (a) and industrial multiwall (b) carbon nanotubes, and the Raman G band of double-wall CNTs at high pressure in different pressure media, revealing molecular nanoscale pressure effects.]
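To illustrate the mapping idea in the simplest possible terms (this is not the group's actual analysis code), here is a minimal sketch that builds a two-dimensional map of G-band intensity from synthetic spectra over a scan grid; the grid size, band position, linewidth and agglomerate location are all invented for the example.

```python
# Illustrative sketch of Raman G-band mapping over a thin composite slice.
# All numbers (scan grid, ~1580 cm^-1 G-band position, linewidth, agglomerate
# position) are synthetic stand-ins, not measured values.
import numpy as np

wavenumbers = np.linspace(1400.0, 1700.0, 301)   # cm^-1 axis of each spectrum

def synthetic_spectrum(cnt_density):
    """Fake spectrum: a Lorentzian G band scaled by the local CNT density."""
    gamma, center = 15.0, 1580.0
    g_band = cnt_density * gamma**2 / ((wavenumbers - center)**2 + gamma**2)
    return g_band + 0.01 * np.random.rand(wavenumbers.size)  # noise floor

# Pretend 20x20 pixel scan with a CNT agglomerate near the centre of the slice.
nx = ny = 20
yy, xx = np.mgrid[0:ny, 0:nx]
density = np.exp(-((xx - 10)**2 + (yy - 10)**2) / 18.0)

# Integrate the G-band window at each pixel to build the map.
band = (wavenumbers > 1550) & (wavenumbers < 1610)
g_map = np.array([[synthetic_spectrum(density[j, i])[band].sum()
                   for i in range(nx)] for j in range(ny)])

print("G-band map, max/min intensity ratio:", g_map.max() / g_map.min())
```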

Working on individual CNTs in collaboration with Anna Swan of Boston University gave us the opportunity to work on precisely positioned, individually suspended CNTs. An individual CNT is an ideal point source, and this can be used to map out the focal spot and to learn about the fundamental limitations of high-resolution grating spectrometers.

The field of carbon nanotube research has grown enormously during the last decade, making it difficult to follow all the new results. It is quite clear that for applications where macroscopic amounts of CNTs are needed, standardisation of measurement protocols and classification of CNT samples, combined with new processing techniques to deal with large CNT volumes, will be required. Applications where only minute quantities on a surface are used still suffer from the lack of parallel processing. This shows that further progress in growing CNTs on surfaces is needed, although there has been a recent breakthrough in growing CNTs in a parallel fashion and with preferentially semiconducting or metallic tubes.

[SOURCE: azonano]

Time Travel Paradoxes

By David Lewis

TIME travel, I maintain, is possible. The paradoxes of time travel are oddities, not impossibilities. They prove only this much, which few would have doubted: that a possible world where time travel took place would be a most strange world, different in fundamental ways from the world we think is ours. I shall be concerned here with the sort of time travel that is recounted in science fiction. Not all science fiction writers are clear-headed, to be sure, and inconsistent time travel stories have often been written. But some writers have thought the problems through with great care, and their stories are perfectly consistent.

If I can defend the consistency of some science fiction stories of time travel, then I suppose parallel defenses might be given of some controversial physical hypotheses, such as the hypothesis that time is circular or the hypothesis that there are particles that travel faster than light. But I shall not explore these parallels here. What is time travel? Inevitably, it involves a discrepancy between time and time. Any traveler departs and then arrives at his destination; the time elapsed from departure to arrival (positive, or perhaps zero) is the duration of the journey. But if he is a time traveler, the separation in time between departure and arrival does not equal the duration of his journey. He departs; he travels for an hour, let us say; then he arrives. The time he reaches is not the time one hour after his departure. It is later, if he has traveled toward the future; earlier, if he has traveled toward the past. If he has traveled far toward the past, it is earlier even than his departure. How can it be that the same two events, his departure and his arrival, are separated by two unequal amounts of time? It is tempting to reply that there must be two independent time dimensions; that for time travel to be possible, time must be not a line but a plane. Then a pair of events may have two unequal separations if they are separated more in one of the time dimensions than in the other. The lives of common people occupy straight diagonal lines across the plane of time, sloping at a rate of exactly one hour of time1 per hour of time2. The life of the time traveler occupies a bent path, of varying slope.

On closer inspection, however, this account seems not to give us time travel as we know it from the stories. When the traveler revisits the days of his childhood, will his playmates be there to meet him? No; he has not reached the part of the plane of time where they are. He is no longer separated from them along one of the two dimensions of time, but he is still separated from them along the other. I do not say that two-dimensional time is impossible, or that there is no way to square it with the usual conception of what time travel would be like. Nevertheless I shall say no more about two-dimensional time. Let us set it aside, and see how time travel is possible even in one-dimensional time.

The world—the time traveler’s world, or ours—is a four-dimensional manifold of events. Time is one dimension of the four, like the spatial dimensions except that the prevailing laws of nature discriminate between time and the others—or rather, perhaps, between various timelike dimensions and various spacelike dimensions. (Time remains one-dimensional, since no two timelike dimensions are orthogonal.) Enduring things are timelike streaks: wholes composed of temporal parts, or stages, located at various times and places. Change is qualitative difference between different stages—different temporal parts—of some enduring thing, just as a “change” in scenery from east to west is a qualitative difference between the eastern and western spatial parts of the landscape. If this paper should change your mind about the possibility of time travel, there will be a difference of opinion between two different temporal parts of you, the stage that started reading and the subsequent stage that finishes. If change is qualitative difference between temporal parts of something, then what doesn’t have temporal parts can’t change. For instance, numbers can’t change; nor can the events of any moment of time, since they cannot be subdivided into dissimilar temporal parts. (We have set aside the case of two-dimensional time, and hence the possibility that an event might be momentary along one time dimension but divisible along the other.) It is essential to distinguish change from “Cambridge change,” which can befall anything. Even a number can “change” from being to not being the rate of exchange between pounds and dollars. Even a momentary event can “change” from being a year ago to being a year and a day ago, or from being forgotten to being remembered. But these are not genuine changes. Not just any old reversal in truth value of a time-sensitive sentence about something makes a change in the thing itself.

A time traveler, like anyone else, is a streak through the manifold of space-time, a whole composed of stages located at various times and places. But he is not a streak like other streaks. If he travels toward the past he is a zig-zag streak, doubling back on himself. If he travels toward the future, he is a stretched-out streak. And if he travels either way instantaneously, so that there are no intermediate stages between the stage that departs and the stage that arrives and his journey has zero duration, then he is a broken streak. I asked how it could be that the same two events were separated by two unequal amounts of time, and I set aside the reply that time might have two independent dimensions. Instead I reply by distinguishing time itself, external time as I shall also call it, from the personal time of a particular time traveler: roughly, that which is measured by his wristwatch. His journey takes an hour of his personal time, let us say; his wristwatch reads an hour later at arrival than at departure. But the arrival is more than an hour after the departure in external time, if he travels toward the future; or the arrival is before the departure in external time (or less than an hour after), if he travels toward the past. That is only rough. I do not wish to define personal time operationally, making wristwatches infallible by definition. That which is measured by my own wristwatch often disagrees with external time, yet I am no time traveler; what my misregulated wristwatch measures is neither time itself nor my personal time. Instead of an operational definition, we need a functional definition of personal time; it is that which occupies a certain role in the pattern of events that comprise the time traveler’s life. If you take the stages of a common person, they manifest certain regularities with respect to external time. Properties change continuously as you go along, for the most part, and in familiar ways. First come infantile stages. Last come senile ones. Memories accumulate. Food digests. Hair grows. Wristwatch hands move.

If you take the stages of a time traveler instead, they do not manifest the common regularities with respect to external time. But there is one way to assign coordinates to the time traveler’s stages, and one way only (apart from the arbitrary choice of a zero point), so that the regularities that hold with respect to this assignment match those that commonly hold with respect to external time. With respect to the correct assignment properties change continuously as you go along, for the most part, and in familiar ways. First come infantile stages. Last come senile ones. Memories accumulate. Food digests. Hair grows. Wristwatch hands move. The assignment of coordinates that yields this match is the time traveler’s personal time. It isn’t really time, but it plays the role in his life that time plays in the life of a common person. It’s enough like time so that we can—with due caution— transplant our temporal vocabulary to it in discussing his affairs. We can say without contradiction, as the time traveler prepares to set out, “Soon he will be in the past.”

 


 

We mean that a stage of him is slightly later in his personal time, but much earlier in external time, than the stage of him that is present as we say the sentence. We may assign locations in the time traveler’s personal time not only to his stages themselves but also to the events that go on around him. Soon Caesar will die, long ago; that is, a stage slightly later in the time traveler’s personal time than his present stage, but long ago in external time, is simultaneous with Caesar’s death. We could even extend the assignment of personal time to events that are not part of the time traveler’s life, and not simultaneous with any of his stages. If his funeral in ancient Egypt is separated from his death by three days of external time and his death is separated from his birth by three score years and ten of his personal time, then we may add the two intervals and say that his funeral follows his birth by three score years and ten and three days of extended personal time. Likewise a bystander might truly say, three years after the last departure of another famous time traveler, that “he may even now—if I may use the phrase—be wandering on some plesiosaurus-haunted oolitic coral reef, or beside the lonely saline seas of the Triassic Age.” If the time traveler does wander on an oolitic coral reef three years after his departure in his personal time, then it is no mistake to say with respect to his extended personal time that the wandering is taking place “even now”. We may liken intervals of external time to distances as the crow flies, and intervals of personal time to distances along a winding path. The time traveler’s life is like a mountain railway. The place two miles due east of here may also be nine miles down the line, in the westbound direction. Clearly we are not dealing here with two independent dimensions. Just as distance along the railway is not a fourth spatial dimension, so a time traveler’s personal time is not a second dimension of time. How far down the line some place is depends on its location in three-dimensional space, and likewise the locations of events in personal time depend on their locations in one-dimensional external time. Five miles down the line from here is a place where the line goes under a trestle; two miles further is a place where the line goes over a trestle; these places are one and the same. The trestle by which the line crosses over itself has two different locations along the line, five miles down from here and also seven. In the same way, an event in a time traveler’s life may have more than one location in his personal time. If he doubles back toward the past, but not too far, he may be able to talk to himself. The conversation involves two of his stages, separated in his personal time but simultaneous in external time. The location of the conversation in personal time should be the location of the stage involved in it. But there are two such stages; to share the locations of both, the conversation must be assigned two different locations in personal time.
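Lewis's functional definition of personal time can be made vivid with a toy model (mine, not his): represent the traveler as a list of stages carrying external-time coordinates and a cumulative property such as memories, and recover personal time as the one ordering under which the familiar regularities hold.

```python
# Toy model of Lewis's external vs. personal time (illustrative only).
# Each stage records its location in external time and how many memories the
# traveler has accumulated.  Memories grow monotonically in *personal* time,
# so sorting by memory count recovers the personal-time order even when the
# external-time order is scrambled by a trip to the past.
from dataclasses import dataclass

@dataclass
class Stage:
    external_time: float  # years, external calendar
    memories: int         # cumulative memory traces (grow along personal time)

# A zig-zag streak: departs 2010, hops back to 1921, then lives forward there.
streak = [
    Stage(external_time=2010.0, memories=100),
    Stage(external_time=1921.0, memories=101),   # arrival in the past
    Stage(external_time=1922.0, memories=102),
]

personal_order = sorted(streak, key=lambda s: s.memories)
external_order = sorted(streak, key=lambda s: s.external_time)

print([s.external_time for s in personal_order])  # [2010.0, 1921.0, 1922.0]
print([s.external_time for s in external_order])  # [1921.0, 1922.0, 2010.0]
```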

The more we extend the assignment of personal time outwards from the time traveler’s stages to the surrounding events, the more will such events acquire multiple locations. It may happen also, as we have already seen, that events that are not simultaneous in external time will be assigned the same location in personal time—or rather, that at least one of the locations of one will be the same as at least one of the locations of the other. So extension must not be carried too far, lest the location of events in extended personal time lose its utility as a means of keeping track of their roles in the time traveler’s history. A time traveler who talks to himself, on the telephone perhaps, looks for all the world like two different people talking to each other. It isn’t quite right to say that the whole of him is in two places at once, since neither of the two stages involved in the conversation is the whole of him, or even the whole of the part of him that is located at the (external) time of the conversation. What’s true is that he, unlike the rest of us, has two different complete stages located at the same time at different places. What reason have I, then, to regard him as one person and not two? What unites his stages, including the simultaneous ones, into a single person?

The problem of personal identity is especially acute if he is the sort of time traveler whose journeys are instantaneous, a broken streak consisting of several unconnected segments. Then the natural way to regard him as more than one person is to take each segment as a different person. No one of them is a time traveler, and the peculiarity of the situation comes to this: all but one of these several people vanish into thin air, all but another one appear out of thin air, and there are remarkable resemblances between one at his appearance and another at his vanishing. Why isn’t that at least as good a description as the one I gave, on which the several segments are all parts of one time traveler? I answer that what unites the stages (or segments) of a time traveler is the same sort of mental, or mostly mental, continuity and connectedness that unites anyone else. The only difference is that whereas a common person is connected and continuous with respect to external time, the time traveler is connected and continuous only with respect to his own personal time. Taking the stages in order, mental (and bodily) change is mostly gradual rather than sudden, and at no point is there sudden change in too many different respects all at once. (We can include position in external time among the respects we keep track of, if we like. It may change discontinuously with respect to personal time if not too much else changes discontinuously along with it.) Moreover, there is not too much change altogether. Plenty of traits and traces last a lifetime. Finally, the connectedness and the continuity are not accidental. They are explicable; and further, they are explained by the fact that the properties of each stage depend causally on those of the stages just before in personal time, the dependence being such as tends to keep things the same. To see the purpose of my final requirement of causal continuity, let us see how it excludes a case of counterfeit time travel. Fred was created out of thin air, as if in the midst of life; he lived a while, then died. He was created by a demon, and the demon had chosen at random what Fred was to be like at the moment of his creation. Much later someone else, Sam, came to resemble Fred as he was when first created. At the very moment when the resemblance became perfect, the demon destroyed Sam.

Fred and Sam together are very much like a single person: a time traveler whose personal time starts at Sam’s birth, goes on to Sam’s destruction and Fred’s creation, and goes on from there to Fred’s death. Taken in this order, the stages of Fred-cum Sam have the proper connectedness and continuity. But they lack causal continuity, so Fred-cum-Sam is not one person and not a time traveler. Perhaps it was pure coincidence that Fred at his creation and Sam at his destruction were exactly alike; then the connectedness and continuity of Fred-cum-Sam across the crucial point are accidental. Perhaps instead the demon remembered what Fred was like, guided Sam toward perfect resemblance, watched his progress, and destroyed him at the right moment. Then the connectedness and continuity of Fred-cum-Sam has a causal explanation, but of the wrong sort. Either way, Fred’s first stages do not depend causally for their properties on Sam’s last stages. So the case of Fred and Sam is rightly disqualified as a case of personal identity and as a case of time travel.

We might expect that when a time traveler visits the past there will be reversals of causation. You may punch his face before he leaves, causing his eye to blacken centuries ago. Indeed, travel into the past necessarily involves reversed causation. For time travel requires personal identity—he who arrives must be the same person who departed. That requires causal continuity, in which causation runs from earlier to later stages in the order of personal time. But the orders of personal and external time disagree at some point, and there we have causation that runs from later to earlier stages in the order of external time. Elsewhere I have given an analysis of causation in terms of chains of counterfactual dependence, and I took care that my analysis would not rule out causal reversal a priori. I think I can argue (but not here) that under my analysis the direction of counterfactual dependence and causation is governed by the direction of other de facto asymmetries of time. If so, then reversed causation and time travel are not excluded altogether, but can occur only where there are local exceptions to these asymmetries. As I said at the outset, the time traveler’s world would be a most strange one. Stranger still, if there are local—but only local—causal reversals, then there may also be causal loops: closed causal chains in which some of the causal links are normal in direction and others are reversed. (Perhaps there must be loops if there is reversal: I am not sure.) Each event on the loop has a causal explanation, being caused by events elsewhere on the loop. That is not to say that the loop as a whole is caused or explicable. It may not be. Its inexplicability is especially remarkable if it is made up of the sort of causal processes that transmit information. Recall the time traveler who talked to himself. He talked to himself about time travel, and in the course of the conversation his older self told his younger self how to build a time machine. That information was available in no other way. His older self knew how because his younger self had been told and the information had been preserved by the causal processes that constitute recording, storage, and retrieval of memory traces. His younger self knew, after the conversation, because his older self had known and the information had been preserved by the causal processes that constitute telling.

But where did the information come from in the first place? Why did the whole affair happen? There is simply no answer. The parts of the loop are explicable, the whole of it is not. Strange! But not impossible, and not too different from inexplicabilities we are already inured to. Almost everyone agrees that God, or the Big Bang, or the entire infinite past of the universe, or the decay of a tritium atom, is uncaused and inexplicable. Then if these are possible, why not also the inexplicable causal loops that arise in time travel? I have committed a circularity in order not to talk about too much at once, and this is a good place to set it right. In explaining personal time, I presupposed that we were entitled to regard certain stages as comprising a single person. Then in explaining what united the stages into a single person, I presupposed that we were given a personal time order for them. The proper way to proceed is to define personhood and personal time simultaneously, as follows. Suppose given a pair of an aggregate of person-stages, regarded as a candidate for personhood, and an assignment of coordinates to those stages, regarded as a candidate for his personal time. If the stages satisfy the conditions given in my circular explanation with respect to the assignment of coordinates, then both candidates succeed: the stages do comprise a person and the assignment is his personal time.

I have argued so far that what goes on in a time travel story may be a possible pattern of events in four dimensional space-time with no extra time dimension; that it may be correct to regard the scattered stages of the alleged time traveler as comprising a single person; and that we may legitimately assign to those stages and their surroundings a personal time order that disagrees sometimes with their order in external time. Some might concede all this, but protest that the impossibility of time travel is revealed after all when we ask not what the time traveler does, but what he could do. Could a time traveler change the past? It seems not: the events of a past moment could no more change than numbers could. Yet it seems that he would be as able as anyone to do things that would change the past if he did them. If a time traveler visiting the past both could and couldn’t do something that would change it, then there cannot possibly be such a time traveler. Consider Tim. He detests his grandfather, whose success in the munitions trade built the family fortune that paid for Tim’s time machine. Tim would like nothing so much as to kill Grandfather, but alas he is too late. Grandfather died in his bed in 1957, while Tim was a young boy. But when Tim has built his time machine and traveled to 1920, suddenly he realizes that he is not too late after all. He buys a rifle; he spends long hours in target practice; he shadows Grandfather to learn the route of his daily walk to the munitions works; he rents a room along the route; and there he lurks, one winter day in 1921, rifle loaded, hate in his heart, as Grandfather walks closer, closer . . .

Tim can kill Grandfather. He has what it takes. Conditions are perfect in every way: the best rifle money could buy, Grandfather an easy target only twenty yards away, not a breeze, door securely locked against intruders, Tim a good shot to begin with and now at the peak of training, and so on. What’s to stop him? The forces of logic will not stay his hand! No powerful chaperone stands by to defend the past from interference. (To imagine such a chaperone, as some authors do, is a boring evasion, not needed to make Tim’s story consistent.) In short, Tim is as much able to kill Grandfather as anyone ever is to kill anyone. Suppose that down the street another sniper, Tom, lurks waiting for another victim, Grandfather’s partner. Tom is not a time traveler, but otherwise he is just like Tim: same make of rifle, same murderous intent, same everything. We can even suppose that Tom, like Tim, believes himself to be a time traveler. Someone has gone to a lot of trouble to deceive Tom into thinking so. There’s no doubt that Tom can kill his victim; and Tim has everything going for him that Tom does. By any ordinary standards of ability, Tim can kill Grandfather.

Tim cannot kill Grandfather. Grandfather lived, so to kill him would be to change the past. But the events of a past moment are not subdivisible into temporal parts and therefore cannot change. Either the events of 1921 timelessly do include Tim’s killing of Grandfather, or else they timelessly don’t. We may be tempted to speak of the “original” 1921 that lies in Tim’s personal past, many years before his birth, in which Grandfather lived; and of the “new” 1921 in which Tim now finds himself waiting in ambush to kill Grandfather. But if we do speak so, we merely confer two names on one thing. The events of 1921 are doubly located in Tim’s (extended) personal time, like the trestle on the railway, but the “original” 1921 and the “new” 1921 are one and the same.

If Tim did not kill Grandfather in the “original” 1921, then if he does kill Grandfather in the “new” 1921, he must both kill and not kill Grandfather in 1921—in the one and only 1921, which is both the “new” and the “original” 1921. It is logically impossible that Tim should change the past by killing Grandfather in 1921. So Tim cannot kill Grandfather. Not that past moments are special; no more can anyone change the present or the future. Present and future momentary events no more have temporal parts than past ones do. You cannot change a present or future event from what it was originally to what it is after you change it. What you can do is to change the present or the future from the unactualized way they would have been without some action of yours to the way they actually are. But that is not an actual change: not a difference between two successive actualities. And Tim can certainly do as much; he changes the past from the unactualized way it would have been without him to the one and only way it actually is. To “change” the past in this way, Tim need not do anything momentous; it is enough just to be there, however unobtrusively. You know, of course, roughly how the story of Tim must go on if it is to be consistent: he somehow fails. Since Tim didn’t kill Grandfather in the “original” 1921, consistency demands that neither does he kill Grandfather in the “new” 1921. Why not? For some commonplace reason.

Perhaps some noise distracts him at the last moment, perhaps he misses despite all his target practice, perhaps his nerve fails, perhaps he even feels a pang of unaccustomed mercy. His failure by no means proves that he was not really able to kill Grandfather. We often try and fail to do what we are able to do. Success at some tasks requires not only ability but also luck, and lack of luck is not a temporary lack of ability. Suppose our other sniper, Tom, fails to kill Grandfather’s partner for the same reason, whatever it is, that Tim fails to kill Grandfather. It does not follow that Tom was unable to. No more does it follow in Tim’s case that he was unable to do what he did not succeed in doing. We have this seeming contradiction: “Tim doesn’t, but can, because he has what it takes” versus “Tim doesn’t, and can’t, because it’s logically impossible to change the past.” I reply that there is no contradiction. Both conclusions are true, and for the reasons given. They are compatible because “can” is equivocal. To say that something can happen means that its happening is compossible with certain facts. Which facts? That is determined, but sometimes not determined well enough, by context. An ape can’t speak a human language— say, Finnish—but I can. Facts about the anatomy and operation of the ape’s larynx and nervous system are not compossible with his speaking Finnish.

The corresponding facts about my larynx and nervous system are compossible with my speaking Finnish. But don’t take me along to Helsinki as your interpreter: I can’t speak Finnish. My speaking Finnish is compossible with the facts considered so far, but not with further facts about my lack of training. What I can do, relative to one set of facts, I cannot do, relative to another, more inclusive, set. Whenever the context leaves it open which facts are to count as relevant, it is possible to equivocate about whether I can speak Finnish. It is likewise possible to equivocate about whether it is possible for me to speak Finnish, or whether I am able to, or whether I have the ability or capacity or power or potentiality to. Our many words for much the same thing are little help since they do not seem to correspond to different fixed delineations of the relevant facts.

Tim’s killing Grandfather that day in 1921 is compossible with a fairly rich set of facts: the facts about his rifle, his skill and training, the unobstructed line of fire, the locked door and the absence of any chaperone to defend the past, and so on. Indeed it is compossible with all the facts of the sorts we would ordinarily count as relevant in saying what someone can do. It is compossible with all the facts corresponding to those we deem relevant in Tom’s case. Relative to these facts, Tim can kill Grandfather. But his killing Grandfather is not compossible with another, more inclusive set of facts. There is the simple fact that Grandfather was not killed. Also there are various other facts about Grandfather’s doings after 1921 and their effects: Grandfather begat Father in 1922 and Father begat Tim in 1949. Relative to these facts, Tim cannot kill Grandfather. He can and he can’t, but under different delineations of the relevant facts. You can reasonably choose the narrower delineation, and say that he can; or the wider delineation, and say that he can’t. But choose. What you mustn’t do is waver, say in the same breath that he both can and can’t, and then claim that this contradiction proves that time travel is impossible. Exactly the same goes for Tom’s parallel failure.
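Lewis's equivocation about "can" has a simple formal shape. Here is a toy rendering (my own, purely illustrative, with invented fact names) in which an act "can" be done relative to a delineation of facts exactly when it contradicts none of them:

```python
# Toy formalization of Lewis's compossibility analysis of "can" (illustrative).
# An act "can" be done relative to a fact set iff it contradicts no fact in it.

def can(act, facts, contradicts):
    """True iff `act` is compossible with every fact in `facts`."""
    return not any(contradicts(act, fact) for fact in facts)

narrow = {"good rifle", "clear shot", "Tim well trained"}
wide = narrow | {"Grandfather survived 1921", "Father begotten in 1922"}

def contradicts(act, fact):
    # The killing contradicts only the facts about Grandfather's later history.
    return act == "Tim kills Grandfather in 1921" and fact in {
        "Grandfather survived 1921", "Father begotten in 1922"}

act = "Tim kills Grandfather in 1921"
print(can(act, narrow, contradicts))  # True:  narrow delineation, he can
print(can(act, wide, contradicts))    # False: wide delineation, he can't
```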

For Tom to kill Grandfather’s partner also is compossible with all facts of the sorts we ordinarily count as relevant, but not compossible with a larger set including, for instance, the fact that the intended victim lived until 1934. In Tom’s case we are not puzzled. We say without hesitation that he can do it, because we see at once that the facts that are not compossible with his success are facts about the future of the time in question and therefore not the sort of facts we count as relevant in saying what Tom can do. In Tim’s case it is harder to keep track of which facts are relevant. We are accustomed to exclude facts about the future of the time in question, but to include some facts about its past. Our standards do not apply unequivocally to the crucial facts in this special case: Tim’s failure, Grandfather’s survival, and his subsequent doings. If we have foremost in mind that they lie in the external future of that moment in 1921 when Tim is almost ready to shoot, then we exclude them just as we exclude the parallel facts in Tom’s case. But if we have foremost in mind that they precede that moment in Tim’s extended personal time, then we tend to include them. To make the latter be foremost in your mind, I chose to tell Tim’s story in the order of his personal time, rather than in the order of external time. The fact of Grandfather’s survival until 1957 had already been told before I got to the part of the story about Tim lurking in ambush to kill him in 1921. We must decide, if we can, whether to treat these personally past and externally future facts as if they were straightforwardly past or as if they were straightforwardly future.

Fatalists—the best of them—are philosophers who take facts we count as irrelevant in saying what someone can do, disguise them somehow as facts of a different sort that we count as relevant, and thereby argue that we can do less than we think—indeed, that there is nothing at all that we don’t do but can. I am not going to vote Republican next fall. The fatalist argues that, strange to say, I not only won’t but can’t; for my voting Republican is not compossible with the fact that it was true already in the year 1548 that I was not going to vote Republican 428 years later. My rejoinder is that this is a fact, sure enough; however, it is an irrelevant fact about the future masquerading as a relevant fact about the past, and so should be left out of account in saying what, in any ordinary sense, I can do. We are unlikely to be fooled by the fatalist’s methods of disguise in this case, or other ordinary cases. But in cases of time travel, precognition, or the like, we’re on less familiar ground, so it may take less of a disguise to fool us. Also, new methods of disguise are available, thanks to the device of personal time.

Here’s another bit of fatalist trickery. Tim, as he lurks, already knows that he will fail. At least he has the wherewithal to know it if he thinks; he knows it implicitly. For he remembers that Grandfather was alive when he was a boy, he knows that those who are killed are thereafter not alive, he knows (let us suppose) that he is a time traveler who has reached the same 1921 that lies in his personal past, and he ought to understand—as we do—why a time traveler cannot change the past. What is known cannot be false. So his success is not only not compossible with facts that belong to the external future and his personal past, but also is not compossible with the present fact of his knowledge that he will fail. I reply that the fact of his foreknowledge, at the moment while he waits to shoot, is not a fact entirely about that moment. It may be divided into two parts. There is the fact that he then believes (perhaps only implicitly) that he will fail; and there is the further fact that his belief is correct, and correct not at all by accident, and hence qualifies as an item of knowledge. It is only the latter fact that is not compossible with his success, but it is only the former that is entirely about the moment in question. In calling Tim’s state at that moment knowledge, not just belief, facts about personally earlier but externally later moments were smuggled into consideration. I have argued that Tim’s case and Tom’s are alike, except that in Tim’s case we are more tempted than usual—and with reason—to opt for a semi-fatalist mode of speech. But perhaps they differ in another way. In Tom’s case, we can expect a perfectly consistent answer to the counterfactual question: what if Tom had killed Grandfather’s partner? Tim’s case is more difficult. If Tim had killed Grandfather, it seems offhand that contradictions would have been true. The killing both would and wouldn’t have occurred. No Grandfather, no Father; no Father, no Tim; no Tim, no killing. And for good measure: no Grandfather, no family fortune; no fortune, no time machine; no time machine, no killing. So the supposition that Tim killed Grandfather seems impossible in more than the semi-fatalistic sense already granted. If you suppose Tim to kill Grandfather and hold all the rest of his story fixed, of course you get a contradiction. But likewise if you suppose Tom to kill Grandfather’s partner and hold the rest of his story fixed—including the part that told of his failure—you get a contradiction. If you make any counterfactual supposition and hold all else fixed you get a contradiction. The thing to do is rather to make the counterfactual supposition and hold all else as close to fixed as you consistently can. That procedure will yield perfectly consistent answers to the question: what if Tim had killed Grandfather?

In that case, some of the story I told would not have been true. Perhaps Tim might have been the time-traveling grandson of someone else. Perhaps he might have been the grandson of a man killed in 1921 and miraculously resurrected. Perhaps he might have been not a time traveler at all, but rather someone created out of nothing in 1920 equipped with false memories of a personal past that never was. It is hard to say what is the least revision of Tim’s story to make it true that Tim kills Grandfather, but certainly the contradictory story in which the killing both does and doesn’t occur is not the least revision. Hence it is false (according to the unrevised story) that if Tim had killed Grandfather then contradictions would have been true. What difference would it make if Tim travels in branching time?

Suppose that at the possible world of Tim’s story the space-time manifold branches; the branches are separated not in time, and not in space, but in some other way. Tim travels not only in time but also from one branch to another. In one branch Tim is absent from the events of 1921; Grandfather lives; Tim is born, grows up, and vanishes in his time machine. The other branch diverges from the first when Tim turns up in 1920; there Tim kills Grandfather and Grandfather leaves no descendants and no fortune; the events of the two branches differ more and more from that time on. Certainly this is a consistent story; it is a story in which Grandfather both is and isn’t killed in 1921 (in the different branches); and it is a story in which Tim, by killing Grandfather, succeeds in preventing his own birth (in one of the branches). But it is not a story in which Tim’s killing of Grandfather both does occur and doesn’t: it simply does, though it is located in one branch and not the other. And it is not a story in which Tim changes the past. 1921 and later years contain the events of both branches, coexisting somehow without interaction. It remains true at all the personal times of Tim’s life, even after the killing, that Grandfather lives in one branch and dies in the other.

[Credit: David Lewis]
[PDF version of article: Time Travel Paradoxes by David Lewis]

Black Holes Serving as Particle Accelerator

[Image Details: A particle collision at the RHIC, via Wikipedia]

Black Hole Particle Accelerator!! Sounds so strange!! Well, it is not as strange as it may at first appear. Particle accelerators are devices generally used to raise particles to very high energy levels.

Beams of high-energy particles are useful for both fundamental and applied research in the sciences, and also in many technical and industrial fields unrelated to fundamental research. It has been estimated that there are approximately 26,000 accelerators worldwide. Of these, only about 1% are research machines with energies above 1 GeV (the kind of interest here); about 44% are for radiotherapy, about 41% for ion implantation, about 9% for industrial processing and research, and about 4% for biomedical and other low-energy research.

For the most basic inquiries into the dynamics and structure of matter, space, and time, physicists seek the simplest kinds of interactions at the highest possible energies. These typically entail particle energies of many GeV, and the interactions of the simplest kinds of particles: leptons (e.g. electrons and positrons) and quarks for the matter, or photons and gluons for the field quanta. Since isolated quarks are experimentally unavailable due to color confinement, the simplest available experiments involve the interactions of, first, leptons with each other, and second, of leptons with nucleons, which are composed of quarks and gluons. To study the collisions of quarks with each other, scientists resort to collisions of nucleons, which at high energy may be usefully considered as essentially 2-body interactions of the quarks and gluons of which they are composed. Thus elementary particle physicists tend to use machines creating beams of electrons, positrons, protons, and anti-protons, interacting with each other or with the simplest nuclei (e.g., hydrogen or deuterium) at the highest possible energies, generally hundreds of GeV or more. Nuclear physicists and cosmologists may use beams of bare atomic nuclei, stripped of electrons, to investigate the structure, interactions, and properties of the nuclei themselves, and of condensed matter at extremely high temperatures and densities, such as might have occurred in the first moments of the Big Bang. These investigations often involve collisions of heavy nuclei – of atoms like iron or gold – at energies of several GeV per nucleon.

[Image Details: A typical Cyclotron]

Currently, we accelerate particles to high energy levels by increasing their kinetic energy with very strong electromagnetic fields; the particles are deflected and accelerated according to the Lorentz force, F = q(E + v × B). However, such particle accelerators have limitations: we cannot push particles to arbitrarily high energy levels, and a great deal of distance has to be covered before the desired speed is acquired.
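To make the bending limitation concrete, here is a small sketch using the standard accelerator rule of thumb p[GeV/c] ≈ 0.3·B[T]·r[m] for a unit-charge particle; the 8 T field is chosen only because it is close to what modern superconducting dipoles deliver.

```python
# Bending radius of a relativistic unit-charge particle in a magnetic field,
# from the standard rule of thumb p [GeV/c] = 0.3 * B [T] * r [m]
# (a consequence of the Lorentz force F = q v x B for circular motion).

def bending_radius_m(p_gev, B_tesla):
    """Radius of curvature (m) for momentum p_gev at field B_tesla."""
    return p_gev / (0.3 * B_tesla)

for p in (1.0, 100.0, 7000.0):      # GeV/c: modest, Tevatron-ish, LHC-scale
    r = bending_radius_m(p, 8.0)
    print(f"p = {p:7.1f} GeV/c at B = 8 T  ->  r = {r:8.1f} m")
```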

This can be readily appreciated from the astounding specifications of the LHC. The precise circumference of the LHC accelerator is 26,659 m, with a total of 9,300 magnets inside. Not only is the LHC the world's largest particle accelerator, just one-eighth of its cryogenic distribution system would qualify as the world's largest fridge. It can accelerate particles up to a collision energy of 14.0 TeV.
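The same rule of thumb ties those LHC numbers together. The sketch below back-solves the average bending field needed to hold a 7 TeV proton on the 26,659 m ring, pretending (unrealistically) that the bending field fills the whole circumference; since the real dipoles occupy only part of the ring, their actual field, about 8.3 T, is correspondingly higher.

```python
# Estimate the average bending field for a 7 TeV proton on the LHC ring.
# Assumes p [GeV/c] = 0.3 * B [T] * r [m] and, unrealistically, that bending
# field fills the full circumference, so this underestimates the ~8.3 T field
# of the real dipoles, which cover only part of the ring.
import math

circumference_m = 26_659.0
p_gev = 7_000.0                           # one LHC beam, GeV/c

r = circumference_m / (2.0 * math.pi)     # effective bending radius, ~4243 m
B_avg = p_gev / (0.3 * r)

print(f"average bending field ~ {B_avg:.2f} T")   # ~5.5 T
```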

It is pretty obvious that accelerating particles much beyond this energy level with such machines would be almost impossible.

An advanced civilization at the Type III or Type IV development level would more likely choose to employ black holes rather than engineer an LHC or Tevatron on an astrophysical scale. Kaluza-Klein black holes are excellent for this purpose; they are very similar to Kerr black holes, except that they are charged.

Kerr Black Holes

Kerr spacetime is the unique explicitly defined model of the gravitational field of a rotating star. The spacetime is fully revealed only when the star collapses, leaving a black hole — otherwise the bulk of the star blocks exploration. The qualitative character of Kerr spacetime depends on its mass and its rate of rotation, the most interesting case being when the rotation is slow. (If the rotation stops completely, Kerr spacetime reduces to Schwarzschild spacetime.)

The existence of black holes in our universe is generally accepted — by now it would be hard for astronomers to run the universe without them. Everyone knows that no light can escape from a black hole, but convincing evidence for their existence is provided by their effect on their visible neighbors, as when an observable star behaves like one of a binary pair but no companion is visible.

Suppose that, travelling in our spacecraft, we approach an isolated, slowly rotating black hole. It can then be observed as a black disk against the stars of the background sky. Explorers familiar with Schwarzschild black holes will refuse to cross its boundary horizon. First of all, return trips through a horizon are never possible, and in the Schwarzschild case there is a more immediate objection: after the passage, any material object will, in a fraction of a second, be devoured by a singularity in spacetime.

If we dare to penetrate the horizon of this Kerr black hole we will find … another horizon. Behind this, the singularity in spacetime now appears, not as a central focus, but as a ring — a circle of infinite gravitational forces. Fortunately, this ring singularity is not quite as dangerous as the Schwarzschild one — it is possible to avoid it and enter a new region of spacetime, by passing through either of two “throats” bounded by the ring (see The Big Picture).

In the new region, escape from the ring singularity is easy because the gravitational effect of the black hole is reversed — it now repels rather than attracts. As distance increases, this negative gravity weakens, just as on the positive side, until its effect becomes negligible.

A quick departure may be prudent, but will prevent discovery of something strange: the ring singularity is the outer equator of a spatial solid torus that is, quite simply, a time machine. Travelling within it, one can reach arbitrarily far back into the past of any entity inside the double horizons. In principle you can arrange a bridge game, with all four players being you yourself, at different ages. But there is no way to meet Julius Caesar or your (predeparture) childhood self since these lie on the other side of two impassable horizons.

This rough description is reasonably accurate within its limits, but its apparent completeness is deceptive. Kerr spacetime is vaster — and more symmetrical. Outside the horizons, it turns out that the model described above lacks a distant past, and, on the negative gravity side, a distant future. Harder to imagine are the deficiencies of the spacetime region between the two horizons. This region definitely does not resemble the Newtonian 3-space between two bounding spheres, furnished with a clock to tell time. In it, space and time are turbulently mixed. Pebbles dropped experimentally there can simply vanish in finite time — and new objects can magically appear.

Recently, an interesting observation was made: black holes can accelerate particles up to unlimited energies Ecm in the centre-of-mass frame. These results were obtained for the Kerr metric (and were also extended to the extremal Kerr-Newman one). It was then demonstrated that the effect exists in a generic black hole background (so the black hole can be surrounded by matter) provided the black hole is rotating. Thus, rotation seemed to be an essential part of the effect. It is also necessary that one of the colliding particles have angular momentum L1 = E1/ωH, where E1 is its energy and ωH is the angular velocity of the rotating black hole. If ωH → 0, then L1 → ∞, so for any particle with finite L the effect becomes impossible. In the Schwarzschild space-time, for example, the ratio Ecm/m (m is the mass of the particles) is finite and cannot exceed 2√5 for particles coming from infinity.

Meanwhile, the role played by angular momentum and rotation is sometimes effectively modelled by electric charge and potential in spherically-symmetric space-times. So one may ask: can we achieve infinite acceleration without rotation, simply due to the presence of electric charge? Apart from its intrinsic interest, a positive answer would also be important because spherically-symmetric space-times are usually much simpler and admit much more detailed investigation, while mimicking relevant features of rotating space-times. In a research paper, Oleg B. Zaslavskii showed that the centre-of-mass energy can indeed reach very high values, with particles gaining almost unbounded centre-of-mass energy before collision. Following his analysis and energy equations, the answer is 'Yes!'
A similar conclusion was reached by Pu Zion Mao in a research paper, 'Kaluza-Klein Black Holes Serving as Particle Accelerators'.
Consider two massive particles falling into a black hole, with angular momenta L1 and L2.

Plotting r against the centre-of-mass energy near the horizon of a Kaluza-Klein black hole (Fig. 1 and Fig. 2), we can see that there exists a critical angular momentum Lc = 2μ/√(1−ν²) for the geodesic of a particle to reach the horizon. If L > Lc, the geodesic never reaches the horizon. On the other hand, if the angular momentum is too small, the particle falls into the black hole and the CM energy of the collision is limited. However, when L1 or L2 takes the critical value Lc = 2μ/√(1−ν²), the CM energy is unlimited, with no restrictions on the angular momentum per unit mass J/M of the black hole.
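This blow-up at the critical angular momentum is easy to see numerically. The sketch below uses the corresponding Bañados-Silk-West horizon-limit formula for the simpler extremal Kerr case, Ecm = √2·m0·√[(l1−2)/(l2−2) + (l2−2)/(l1−2)], where l = L/(m0·M) in G = c = 1 units and the critical value is l = 2; the prefactor is quoted from the literature rather than derived here, so treat this as an illustration of the divergence only.

```python
# Centre-of-mass collision energy near the horizon of an *extremal* Kerr black
# hole, in the Banados-Silk-West horizon limit (formula quoted from the
# literature, not derived here):
#   E_cm = sqrt(2) * m0 * sqrt((l1 - 2)/(l2 - 2) + (l2 - 2)/(l1 - 2)),
# with l = L/(m0*M) in G = c = 1 units; l = 2 is the critical value.
import math

def e_cm_over_m0(l1, l2):
    """E_cm / m0 for two equal-mass particles falling from rest at infinity."""
    ratio = (l1 - 2.0) / (l2 - 2.0)
    return math.sqrt(2.0 * (ratio + 1.0 / ratio))

# Let l1 creep toward the critical value 2 while l2 stays generic:
for l1 in (1.9, 1.99, 1.999, 1.9999):
    print(f"l1 = {l1:7.4f}  ->  E_cm/m0 = {e_cm_over_m0(l1, -2.0):10.1f}")
```
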
Now, it seems quite mesmerizing that an advanced alien civilization would more likely prefer to use black holes as particle accelerators. Such an implementation, however, would have to be carefully managed.

Dark Flow, Gravity and Love

By Rob Bryanton

The above video is tied to a previous blog entry from last January of the same name, Dark Flow.

Last time, in Placebos and Biocentrism, we returned to the idea that so much of what we talk about with this project is tied to visualizing our reality from “outside” of spacetime, a perspective that many of the great minds of the last hundred years have also tried to get us to embrace. Here’s a quote I just came across from Max Planck that I think is particularly powerful:

As a man who has devoted his whole life to the most clear headed science, to the study of matter, I can tell you as a result of my research about atoms this much: There is no matter as such. All matter originates and exists only by virtue of a force which brings the particle of an atom to vibration and holds this most minute solar system of the atom together. We must assume behind this force the existence of a conscious and intelligent mind. This mind is the matrix of all matter.

[Image Details: Golden ratio line, via Wikipedia]

It’s so easy to look at some of the phrases from this quote and imagine them on some new age site, where mainstream scientists would then smugly dismiss these ideas as hogwash from crackpots. Like me, folks like Dan Winter and Nassim Haramein also sometimes get painted with the crackpot brush, but they are both serious about the ideas they are exploring, and they are not far away from the ideas that Max Planck promoted above, or that I have been pursuing with my project.

On July 1st of this year, I published a well-received blog entry called Love and Gravity. It looked at some new age ideas about wellness and spirituality, and related them to some mainstream science ideas about extra dimensions, timelessness, and the fact that physicists tell us that gravity is the only force which exerts itself across the extra dimensions.

Last week Dan Winter forwarded me a link to a new web page of his which yet again seems to tie into the same viewpoint that I’m promoting: Dan is calling this new page “Gravity is Love“. As usual, this page is a sprawling collection of graphics, animations, and articles, most of which are found on a number of Dan’s other pages, but there’s important new information here as well. Here are a few paragraphs excerpted from the page which will give you the flavor of what Dan is saying about this concept:

Love really IS the nature of gravity!
First we discovered Golden Ratio identifies the change in pressure over time- of the TOUCH that says I LOVE YOU: goldenmean.info/touch
Then we discovered (with Korotkov’s help) … that the moment of peak perception- bliss – enlightenment- was defined by Golden Ratio in brainwaves :goldenmean.info/clinicalintro
Then medicine discovered: the healthy heart is a fractal heart. ( References/ pictures:goldenmean.info/holarchy, and also: goldenmean.info/heartmathmistake
Then – I pioneered the proof that Golden Ratio perfects fractality because being perfect wave interference it is therefore perfect compression. It is my view that all centripetal forces- like gravity, life, consciousness, and black holes, are CAUSED by Golden Ratio in waves of charge.
Nassim Haramein says that although he sees Golden Ratio emerge from his black hole equations repeatedly – he sees it as an effect of black holes/ gravity – not the cause… Clearly – from the logic of waves – I say the black hole / gravity is the effect of golden ratio and not the other way around!
– although some might say that this is a chicken and egg difference – may be just semantics… at least we agree on the profound importance of Golden RATIO…/ fractality…
AND love :

perfect embedding IS perfect fusion IS perfect compression… ah the romance.

Dan Winter is a fascinating fellow; I hope you can spend some time following the links in the above quote. Next time we’re going to look at another somewhat related approach to imagining the extra-dimensional patterns that link us all together, in an entry called Biosemiotics: Monkeys, Metallica, and Music.

Enjoy the journey!

Rob Bryanton

Hyperluminal Travel Without Exotic Matter

Listen, terrestrial intelligent species: WeirdSciences is going to delve into a new idea for making interstellar travel feasible, and this time no negative energy is to be used to propel the spacecraft. Implementing negative energy to build a warp drive is not that bad an idea, but you need to refresh your mind.

By Eric Baird

Alcubierre’s 1994 paper on hyperfast travel has generated fresh interest in the subject of warp drives but work on the subject of hyper-fast travel is often hampered by confusion over definitions — how do we define times and speeds over extended regions of spacetime where the geometry is not Euclidean? Faced with this problem it may seem natural to define a spaceship’s travel times according to round-trip observations made from the traveller’s point of origin, but this “round-trip” approach normally requires signals to travel in two opposing spatial directions through the same metric, and only gives us an unambiguous reading of apparent shortened journey-times if the signal speeds are enhanced in both directions along the signal path, a condition that seems to require a negative energy density in the region. Since hyper-fast travel only requires that the speed of light-signals be enhanced in the actual direction of travel, we argue that the precondition of bidirectionality (inherited from special relativity, and the apparent source of the negative energy requirement), is unnecessary, and perhaps misleading.

When considering warp-drive problems, it is useful to remind ourselves of what it is that we are trying to accomplish. To achieve hyper-fast package delivery between two physical markers, A (the point of origin) and B (the destination), we require that a package moved from A to B:

a) . . . leaves A at an agreed time according to clocks at A,
b) . . . arrives at B as early as possible according to clocks at B, and, ideally,
c) . . . measures its own journey time to be as short as possible.

From a purely practical standpoint as “Superluminal Couriers Inc.”, we do not care how long the arrival event takes to be seen back at A, nor do we care whether the clocks at A and B appear to be properly synchronised during the delivery process. Our only task is to take a payload from A at a specified “A time” and deliver it to B at the earliest possible “B time”, preferably without the package ageing excessively en route. If we can collect the necessary local time-stamps on our delivery docket at the various stages of the journey, we have achieved our objective and can expect payment from our customer.
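
In code form, the courier’s success criterion is nothing more than a comparison of docket timestamps against the flat-spacetime light time. Here is a minimal sketch (my own toy formalization, not anything from Baird’s paper), assuming the clocks at A and B share some agreed synchronization convention — the very thing the article goes on to say is slippery:

```python
# Toy check of the courier's delivery docket: did the package beat light?
# Hypothetical numbers; assumes a shared A/B clock-synchronization convention.

C = 299_792_458.0          # speed of light in vacuum, m/s

def is_hyperfast(t_depart_A, t_arrive_B, distance_m):
    """Return True if the docket timestamps imply a trip faster than a
    light signal crossing the same distance in flat spacetime."""
    light_time = distance_m / C
    return (t_arrive_B - t_depart_A) < light_time

# Example: one light-second of distance covered in 0.7 s of docket time.
print(is_hyperfast(0.0, 0.7, C * 1.0))   # True: hyperfast by this criterion
```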

Existing approaches tend to add a fourth condition:

d) . . . that the arrival-event at B is seen to occur as soon as possible by an observer back at A.

This last condition is much more difficult to meet, but is arguably more relevant to our ability to define distant time-intervals than to the actual physical delivery process itself. It does not dictate which events may be intersected by the worldline of the travelling object, but it can affect the coordinate labels that we choose to assign to those events using special relativity.

  • Who Asked Your Opinion?

If we introduce an appropriate anisotropy in the speed of light into the region occupied by our delivery path, a package can travel to its destination along the path faster than “nominal background lightspeed” without exceeding the local speed of light along the path. This allows us to meet conditions (a) and (b), but the same anisotropy causes an increased transit time for signals returning from B to A, so the “fast” outward journey can appear to take longer when viewed from A.
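
A quick toy calculation shows how conditions (a) and (b) can be met while the view from A actually gets worse. The reciprocal anisotropy used here (outward speed multiplied by k, return speed divided by k) is my own illustrative assumption, not a metric derived from general relativity:

```python
# Toy model of a one-way lightspeed anisotropy along the A->B path.
# Assumption (mine, for illustration): boosting the outward speed by a
# factor k suppresses the return speed by the same factor.

C = 1.0      # work in units where c = 1
D = 1.0      # A-to-B distance, in light-seconds
k = 2.0      # anisotropy factor, k > 1

t_outward    = D / (k * C)     # package transit time: 0.5, beating light's 1.0
t_return_sig = D / (C / k)     # confirmation signal B->A: 2.0, slower than 1.0
t_seen_at_A  = t_outward + t_return_sig

print(t_outward)     # 0.5 -- hyperfast according to clocks at B
print(t_seen_at_A)   # 2.5 -- later than the 2.0 a normal light round trip takes
```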

This behaviour can be illustrated by the extreme example of a package being delivered to the interior of a black hole from a “normal” region of spacetime. When an object falls through a gravitational event horizon, current theory allows its supposed inward velocity to exceed the nominal speed of light in the external environment, and to tend towards v_INWARDS = ∞ as the object approaches a black hole’s central singularity. But the exterior observer, A, could argue that the delivery is not only proceeding more slowly than usual, but that the apparent delivery time is actually infinite, since the package is never actually seen (by A) to pass through the horizon.
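
One standard way to make that divergent inward velocity concrete is the Gullstrand–Painlevé (“rain-frame”) picture of a Schwarzschild black hole, in which an object falling radially from rest at infinity moves inward at |dr/dt| = c·sqrt(r_s/r). The sketch below is my own illustration of that textbook result, not a calculation taken from the paper:

```python
import math

# Infall speed of an object dropped from rest at infinity, in
# Gullstrand-Painleve coordinates: |dr/dt| = c * sqrt(r_s / r).
# Other coordinate choices assign different (coordinate) speeds.

C = 1.0    # units where c = 1
R_S = 1.0  # Schwarzschild radius

def infall_speed(r):
    return C * math.sqrt(R_S / r)

for r in (4.0, 1.0, 0.25, 0.01):
    print(f"r = {r:5.2f} r_s: |dr/dt| = {infall_speed(r):6.2f} c")
# Outside the horizon the speed stays below c; inside it exceeds c and
# grows without bound as r -> 0, matching v_INWARDS -> infinity above.
```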

Should A’s low perception of the speed of the infalling object indicate that hyperfast travel has not been achieved? In the author’s opinion, it should not — if the package has successfully been collected from A and delivered to B with the appropriate local timestamps indicating hyperfast travel, then A’s subsequent opinion on how long the delivery is seen to take (an observation affected by the properties of light in a direction other than that of the travelling package) would seem to be of secondary importance. In our “black hole” example, exotic matter or negative energy densities are not required unless we demand that an external observer should be able to see the package proceeding superluminally, in which case a negative gravitational effect would allow signals to pass back outwards through the r=2M surface to the observer at the despatch-point (without this return path, special relativity will tend to define the time of the unseen delivery event as being more-than-infinitely far into A’s future).

Negative energy density is required here only for appearances’ sake (and to make it easier for us to define the range of possible arrival-events that would imply that hyperfast travel has occurred), not for physical package delivery.

  • Hyperfast Return Journeys and Verification

It is all very well to be able to reach our destination in a small local time period, and to claim that we have travelled there at hyperfast speeds, but how do we convince others that our own short transit-time measurements are not simply due to time-dilation effects or to an “incorrect” concept of simultaneity? To convince observers at our destination, we only have to ask that they study their records for the observed behaviour of our point of origin. If the warpfield is applied to the entire journey-path (the “Krasnikov tube” configuration), then the introduction and removal of the field will be associated with an increase and decrease in the rate at which signals from A arrive at B along the path (and will force special relativity to redefine the supposed simultaneity of events at B and A). If the warpfield only applies in the vicinity of the travelling package, other odd effects will be seen when the leading edge of the warpfield reaches the observer at B (the logical problems associated with the conflicting “lightspeeds” at the leading edge of a travelling warpfield wavefront have been highlighted by Low, and will be discussed in a further paper). Our initial definitions of the distances involved should of course be based on measurements taken outside the region of spacetime occupied by the warpfield.
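
Here is a rough simulation (my own toy construction, with invented numbers) of what B’s signal log might show when a path-wide warpfield is switched on: signals already in flight at the old speed are caught up by faster signals sent after the field comes on, so arrivals at B bunch together for a while:

```python
# Toy illustration of B's records when a warpfield is applied along the
# whole A->B path.  Crude simplification (mine): each signal travels at
# the speed set by the field at the moment it leaves A.

C, D, K, T_ON = 1.0, 10.0, 2.0, 5.0   # c, distance, boost factor, switch-on time

arrivals = sorted(
    t + D / (K * C if t >= T_ON else C)   # arrival time of each signal at B
    for t in range(20)                    # emission times 0, 1, ..., 19 at A
)
print(arrivals)
# Between t = 10 and t = 14 the arrivals come in at double the emission
# rate: signals sent after the field came on catch up with earlier ones,
# so B sees signals from A arriving more frequently during the transition.
```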

A more convincing way of demonstrating hyper-fast travel would be to send a package from A to B and back again in a shorter period of “A-time” than would normally be required for a round-trip light-beam. We must be careful here not to let our initial mathematical definitions get in the way of our task — although we have supposed that the speed of light back towards A was slower while our warpdrive was operating on the outward journey, this artificially-reduced return speed does not have to also apply during our subsequent return trip, since we have the option of simply switching the warpdrive off, or better still, reversing its polarity for the journey home.

Although a single path allowing enhanced signal speeds in both directions at the same time would seem to require a negative energy-density, this feature is not necessary for a hyper-fast round trip — the outward and return paths can be separated in time (with the region having different gravitational properties during the outward and return trips) or in space (with different routes being taken for the outward and return journeys).
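
Numerically, the time-separated version of the round trip is straightforward. In this sketch (the units and the factor k are my own toy choices), the field is applied in one polarity for the outward leg and reversed for the return leg, so at no single moment are the signal speeds enhanced in both directions along the path:

```python
# Sketch of the time-separated round trip: enhance the lightspeed toward
# B for the outward leg, then reverse the field's polarity and enhance
# it toward A for the return leg.

C, D, K = 1.0, 1.0, 2.0     # c = 1, one light-second, anisotropy factor

t_out  = D / (K * C)        # outward leg, field in "toward B" polarity
t_back = D / (K * C)        # return leg, after reversing the polarity
round_trip = t_out + t_back

print(round_trip)           # 1.0 -- beats the 2.0 a light beam would need
                            # for the same round trip through flat spacetime
```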

  • Caveats and Qualifications

Special relativity is designed around the assumption of Euclidean space and the stipulation that lightspeed is isotropic, and neither of these assumptions is reliable for regions of spacetime that contain gravitational fields.

If we have a genuine lightspeed anisotropy that allows an object to move hyper-quickly between A and B, special relativity can respond by using the round-trip characteristics of light along the transit path to redefine the simultaneity of events at both locations, so that the “early” arrival event at B is redefined far enough into A’s future to guarantee a description in which the object is moving at less than c_BACKGROUND.
This retrospective redefinition of times easily leads to definitional inconsistencies in warpdrive problems — if a package is sent from A to B and back to A again, and each journey is “ultrafast” thanks to a convenient gravitational gradient for each trip, one could invoke special relativity to declare that each individual trip has a speed less than c_BACKGROUND, and then take the ultrafast arrival time of the package back at A as evidence that some form of reverse time travel has occurred, when in fact the apparent negative time component is an artifact of our repeated redefinition of the simultaneity of worldlines at A and B. Since it has been known for some time that similar definitional breakdowns in distant simultaneity can occur when an observer simply alters speed (the “one gee times one lightyear” limit quoted in MTW), these breakdowns should not be taken too seriously when they reappear in more complex “warpdrive” problems.
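
For readers who have not met it, the MTW limit mentioned above corresponds to the Rindler horizon that sits a distance c²/a behind a uniformly accelerating observer, beyond which that observer’s simultaneity convention breaks down; for one gee of acceleration the distance works out to roughly one light-year, which is presumably the origin of the rule of thumb. A quick check of the arithmetic (my own, using SI values):

```python
# The "one gee times one lightyear" rule of thumb: a uniformly
# accelerating observer has a Rindler horizon a distance d = c**2 / a
# behind them, past which their simultaneity definitions fail.

C = 299_792_458.0                        # m/s
G_ACCEL = 9.80665                        # one gee, m/s^2
LIGHT_YEAR = C * 365.25 * 24 * 3600.0    # metres in one Julian light-year

d = C**2 / G_ACCEL
print(d / LIGHT_YEAR)                    # ~0.97 -- about one light-year per gee
```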

Olum’s suggested method for defining simultaneity and hyperfast travel (calibration via signals sent through neighbouring regions of effectively-flat spacetime) is not easily applied to our earlier black hole example, because of the lack of a reference-path that bypasses the gravitational gradient (unless we take a reference-path that predates the formation of the black hole). Warpdrive scenarios, however, tend to involve higher-order gravitational effects (e.g. gradients caused by so-called “non-Newtonian” forces), and in these situations the concept of “relative height” in a gravitational field is often route-dependent: “downhill” becomes a “local” rather than a “global” property, and gravitational rankings become intransitive. For this class of problem, Olum’s approach would seem to be the preferred method.

  • What’s the conclusion?

In order to be able to cross interstellar distances at enhanced speeds, we only require that the speed of light is greater in the direction in which we want to travel, in the region that we are travelling through, at the particular time that we are travelling through it. Although negative energy-densities would seem to be needed to increase the speed of light in both directions along the same path at the same time, this additional condition is only required for any hyperfast travel to be “obvious” to an observer at the origin point, which is a stronger condition than merely requiring that packages be delivered arbitrarily quickly. Hyperfast return journeys would also seem to be legal (along a pair of spatially separated or time-separated paths), as long as the associated energy-requirement is “paid for” somehow. Breakdowns in transitive logic and in the definitions used by special relativity already occur with some existing “legal” gravitational situations, and their reappearance in warpdrive problems is not in itself proof that these problems are paradoxical.

Arguments against negative energy-densities do not rule out paths that allow gravity-assisted travel at speeds greater than c_BACKGROUND, provided that we are careful not to apply the conventions of special relativity inappropriately. Such paths do occur readily under general relativity, although it has to be admitted that some of the more extreme examples have a tendency to lead to unpleasant regions (such as the interiors of black holes) that one would not normally want to visit.

[Ref: Miguel Alcubierre, “The warp drive: hyper-fast travel within general relativity,” Class. Quantum Grav. 11, L73-L77 (1994); Michael Szpir, “Spacetime hypersurfing,” American Scientist 82, 422-423 (Sept/Oct 1994); Robert L. Forward, “Guidelines to Antigravity,” American Journal of Physics 31 (3), 166-170 (1963).]