Trapping Antimatter!

Creating matter’s strange cousin, antimatter, is tricky, but holding onto it is even trickier. Now scientists are working on a new device that may be able to trap antimatter long enough to study it.
Antimatter is like a mirror image of matter. For every matter particle (an electron, for example), a matching antimatter particle is thought to exist (in this case, a positron) with the same mass but an opposite charge.

The problem is that whenever antimatter comes into contact with regular matter, the two annihilate. So any container or bottle made of matter that attempted to capture antimatter would be instantly destroyed, along with the precious antimatter sample it was meant to hold.

Physicist Clifford Surko of the University of California, San Diego is hard at work to overcome that issue. He and his colleagues are building what they call the world’s largest trap for low-energy positrons – a device they say will be able to store more than a trillion antimatter particles at once.

The key is using magnetic and electric fields, instead of matter, to construct the walls of an antimatter “bottle.”

“We are now working to accumulate trillions of positrons or more in a novel ‘multicell’ trap – an array of magnetic bottles akin to a hotel with many rooms, with each room containing tens of billions of antiparticles.”
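To see how electromagnetic “walls” can hold a charged particle, here is a minimal sketch of one idealized Penning-style trap cell, the standard configuration for storing low-energy positrons: a uniform magnetic field confines the particle radially while a quadrupole electric potential plugs the ends. All parameters below are illustrative placeholders, not values from Surko’s device.

```python
import numpy as np

# Idealized Penning-style trap cell: a positron is held without material
# walls by a uniform axial B field plus a quadrupole electrostatic potential.
# Illustrative parameters only -- not the Surko group's actual design.
q = 1.602e-19     # positron charge (C)
m = 9.109e-31     # positron mass (kg)
B = 1.0           # axial magnetic field (T)
V0 = 10.0         # trap depth (V)
d = 0.01          # characteristic trap dimension (m)

def fields(r):
    """E and B at position r, for V = V0*(2z^2 - x^2 - y^2)/(4 d^2)."""
    x, y, z = r
    E = np.array([x, y, -2.0 * z]) * V0 / (2.0 * d**2)   # E = -grad V
    return E, np.array([0.0, 0.0, B])

def boris_step(r, v, dt):
    """One step of the Boris integrator for Lorentz-force motion."""
    E, Bv = fields(r)
    h = q * dt / (2.0 * m)
    v_minus = v + h * E
    t = h * Bv
    s = 2.0 * t / (1.0 + t @ t)
    v_plus = v_minus + np.cross(v_minus + np.cross(v_minus, t), s)
    return r + (v_plus + h * E) * dt, v_plus + h * E

r = np.array([1e-3, 0.0, 1e-3])   # start 1 mm off-axis
v = np.array([0.0, 1e4, 0.0])     # modest thermal-scale speed (m/s)
dt = 1e-12
r_max = 0.0
for _ in range(50000):            # ~50 ns of motion
    r, v = boris_step(r, v, dt)
    r_max = max(r_max, np.hypot(r[0], r[1]))

print(f"maximum radial excursion: {r_max*1e3:.3f} mm")   # stays bounded
```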

Surko presented his work today (Feb. 18) here at the annual meeting of the American Association for the Advancement of Science.

The researchers are also developing methods to cool antiparticles to super-cold temperatures so that the particles’ movements are slowed and they can be studied. The scientists also want to compress large clouds of antiparticles into high-density clumps that can be tailored for practical applications.

“One can then carefully push them out of the bottle in a thin stream, a beam, much like squeezing a tube of toothpaste. These beams provide new ways to study how antiparticles interact or react with ordinary matter. They are very useful, for example, in understanding the properties of material surfaces.”

Surko said another project is to create a portable antimatter bottle that could be taken out of the lab and into various industrial and medical situations.
“If you could have a portable trap it would greatly amplify the uses and applications of antimatter in our world.”

Antimatter may sound exotic, but it’s already used in everyday technology, such as medical PET (Positron Emission Tomography) scanners. During a PET scan, the patient is injected with radioactive tracer molecules that emit positrons when they decay. These positrons then come into contact with electrons in the body, and the two annihilate, releasing two gamma-ray photons. The gamma-ray photons are then detected by the scanner, giving a 3-D image of what’s going on inside the body.
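As a quick back-of-envelope check on those gamma rays: each annihilation photon carries away one electron rest mass of energy, E = m·c², which is the 511 keV line PET detectors are tuned to look for.

```python
# Energy of each photon from electron-positron annihilation: E = m_e * c^2
m_e = 9.109e-31   # electron (= positron) mass, kg
c = 2.998e8       # speed of light, m/s
eV = 1.602e-19    # joules per electron-volt

E = m_e * c**2
print(f"{E/(1e3*eV):.0f} keV per photon")   # ~511 keV, emitted back-to-back
```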
[Via: LiveScience]

Multifunctional Carbon Nanotubes – Introduction and Applications

[Image: animation of a rotating carbon nanotube, via Wikipedia]

Over the past several decades there has been explosive growth in research and development related to nanomaterials. Among these, one material, carbon nanotubes, has led the way in terms of its fascinating structure as well as its ability to provide function-specific applications ranging from electronics to energy and biotechnology. Carbon nanotubes (CNTs) can be viewed as carbon whiskers: tubules of nanometer dimensions with properties close to those of an ideal graphite fiber. Due to their distinctive structure, they can be considered matter in one dimension (1D).

In other words, a carbon nanotube is a honeycomb lattice rolled onto itself, with a diameter of the order of nanometers and a length of up to several micrometers. Generally, two distinct types of CNTs exist, depending on whether the tubes are made of more than one graphene sheet (multi-walled carbon nanotube, MWNT) or only one graphene sheet (single-walled carbon nanotube, SWNT). For a detailed description of CNTs please refer to the article by Prof. M. Endo.
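Because a nanotube is just a rolled-up graphene sheet, its diameter follows directly from the roll-up (chiral) indices (n, m). A quick sketch of that standard geometric relation:

```python
import math

# Diameter of a carbon nanotube from its chiral indices (n, m):
#   d = a * sqrt(n^2 + n*m + m^2) / pi,  a = 0.246 nm (graphene lattice constant)
a = 0.246  # nm

def cnt_diameter_nm(n: int, m: int) -> float:
    return a * math.sqrt(n * n + n * m + m * m) / math.pi

print(f"(10,10) armchair SWNT: {cnt_diameter_nm(10, 10):.2f} nm")  # ~1.36 nm
print(f"(17, 0) zigzag  SWNT: {cnt_diameter_nm(17, 0):.2f} nm")    # ~1.33 nm
```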

A Truly Multifunctional Material

Irrespective of the number of walls, CNTs are envisioned as new engineering materials which possess unique physical properties suitable for a variety of applications. Such properties include large mechanical strength, exotic electrical characteristics and superb chemical and thermal stability. Specifically, the development of techniques for growing carbon nanotubes in a very controlled fashion (such as aligned CNT architectures on various substrates) as well as on a large scale presents investigators all over the world with enhanced possibilities for applying these controlled CNT architectures to fields such as vacuum microelectronics, cold-cathode flat panel displays, field emission devices, vertical interconnect assemblies, gas breakdown sensors, bio-filtration and on-chip thermal management.

Apart from their outstanding structural integrity and chemical stability, the property that makes carbon nanotubes truly multifunctional is that they have a lot to offer (literally) in terms of specific surface area. Depending on the type of CNT, specific surface areas may range from 50 m²/g to several hundred m²/g, and with appropriate purification processes they can be increased up to ~1000 m²/g.
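For a sense of scale, the graphene lattice itself sets a geometric ceiling on how much area a gram of carbon can expose; a rough back-of-envelope sketch (standard lattice constants, ideal sheet, no bundling or catalyst residue):

```python
import math

# Geometric upper bound on carbon specific surface area (SSA):
# an ideal graphene sheet, 2 atoms per hexagonal unit cell.
a = 0.246e-9     # graphene lattice constant (m)
N_A = 6.022e23   # Avogadro's number (1/mol)
M_C = 12.011     # molar mass of carbon (g/mol)

area_per_atom = (math.sqrt(3) / 2) * a**2 / 2      # m^2 per carbon atom
ssa_one_side = area_per_atom * N_A / M_C           # m^2/g, one face exposed

print(f"one face exposed : {ssa_one_side:6.0f} m^2/g")      # ~1315
print(f"both faces       : {2 * ssa_one_side:6.0f} m^2/g")  # ~2630
```

A closed-end nanotube exposes roughly one face of the sheet, so the ~1000 m²/g reached after purification is already approaching this geometric limit.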

Extensive theoretical and experimental studies have shown that the presence of large specific surface areas is accompanied by the availability of different adsorption sites on the nanotubes. For example, in CNTs produced using catalyst-assisted chemical vapor deposition, adsorption occurs only on the outer surface of the curved cylindrical wall of the CNTs. This is because the production process using metal catalysts usually leads to nanotubes with closed ends, restricting access to the hollow interior space of the tube.

However, there are simple procedures (mild chemical or thermal treatments) which can remove the end caps of MWNTs, thereby opening up another adsorption site (inside the tube), as schematically shown in Figure 1. Similarly, the large-scale production processes for SWNTs lead to bundling of the tubes. Due to this bundling effect, SWNT bundles provide various high-energy binding sites (for example, grooves; see Figure 1). What this means is that large surfaces are available in a small volume, and these surfaces can interact with other species or can be tailored and functionalized.

Figure 1: Possible binding sites available for adsorption on (left) MWNT and (right) SWNT surfaces.

Our group’s own research interests are directed toward utilizing these materials in different applications related to energy and the environment, where their high specific surface areas play a crucial role. Two such energy-related applications are discussed below:

  • CNT Based Electrochemical Double Layer Capacitors
  • CNT Based catalyst support

CNT Based Electrochemical Double Layer Capacitors

Electrochemical Double Layer Capacitors (EDLCs; also referred to as supercapacitors and ultracapacitors) are envisioned as devices that will have the capability of providing high energy density as well as high power density. With extremely long life-spans and charge-discharge cycle capabilities, EDLCs are finding versatile applications in the military, space, transportation, telecommunications and nanoelectronics industries.

An EDLC contains two non-reactive porous plates (electrodes or collectors with extremely high specific surface area), separated by a porous membrane and immersed in an electrolyte. Various studies have shown the suitability of CNTs as EDLC electrodes. However, proper integration of CNTs with the collector electrodes is needed to minimize the overall device resistance and enhance the performance of CNT-based supercapacitors. One strategy for achieving this is growing CNTs directly on metal surfaces and using them as EDLC electrodes (Figure 2). EDLC electrodes with very low equivalent series resistance (ESR) and high power densities can be obtained using such approaches.
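Why the ESR matters so much can be seen from two textbook relations: the stored energy E = ½CV² is fixed by the capacitance, but the maximum deliverable power, P_max = V²/(4·ESR), is limited entirely by the series resistance. A sketch with illustrative (not measured) numbers:

```python
# Stored energy vs. ESR-limited power for a supercapacitor cell.
# C, V and the ESR values below are illustrative placeholders.
C = 100.0   # capacitance (F)
V = 2.7     # rated voltage (V)

energy = 0.5 * C * V**2                      # joules, independent of ESR
print(f"stored energy: {energy:.0f} J")

for esr in (50e-3, 5e-3, 0.5e-3):            # ohms: poor -> well-integrated
    p_max = V**2 / (4.0 * esr)               # matched-load power limit (W)
    print(f"ESR = {esr*1e3:4.1f} mOhm  ->  P_max = {p_max:7.0f} W")
```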

Figure 2: (a) Artist’s rendition of an EDLC formed by aligned MWNTs grown directly on metal; (b) an electrochemical impedance spectroscopy plot showing the low ESR of such EDLC devices; and (c) very symmetric, near-rectangular cyclic voltammograms of such devices, indicating impressive capacitance behavior.

CNT Based Catalyst Support

Catalysts play an important role in our existence today. Catalysts are small particles (~10^-9 meter, or a nanometer) which, due to their unique surface properties, can enhance important chemical reactions leading to useful products. In any kind of catalytic process, the catalysts are dispersed on high-surface-area materials known as the catalyst support. The support provides mechanical strength to the catalysts, in addition to enhancing the specific catalytic surface and the reaction rates. CNTs, due to their high specific surface areas, outstanding mechanical and thermal properties, and chemical stability, can potentially become the material of choice for catalyst support in a variety of catalyzed chemical reactions.

We are presently exploring the idea of using CNTs as a catalyst support in the Fischer-Tropsch (FT) synthesis process. The FT reaction can convert a mixture of carbon monoxide and hydrogen into a wide range of straight-chained and branched olefins, paraffins and oxygenates (leading to the production of high-quality synthetic fuels). Our preliminary FT synthesis experiments on CNT-supported FT catalysts (generally cobalt and iron) show that the conversion of CO and H2 obtained with FT-catalyst-loaded CNTs is orders of magnitude higher than that obtained with conventional FT catalysts (Figure 3), indicating that CNTs offer a new breed of non-oxide catalyst supports with superior performance for FT synthesis.

Figure 3: CNT paper used as catalyst support for FT synthesis, and a comparison of conversion ratios of CO and H2.

So far, CNT research has provided substantial excitement and novel possibilities for developing applications based on interdisciplinary nanotechnology. The area of large-scale growth of CNTs is quite mature now, and hence it can be expected that several solid, large-volume applications will emerge in the near future.

[Source: Azonano]

Can the Vacuum Be Engineered for Space Flight Applications? Overview of Theory and Experiments

[Image: Feynman diagram, via Wikipedia]

By H. E. Puthoff

Quantum theory predicts, and experiments verify, that empty space (the vacuum) contains an enormous residual background energy known as zero-point energy (ZPE). Originally thought to be of significance only for such esoteric concerns as small perturbations to atomic emission processes, it is now known to play a role in large-scale phenomena of interest to technologists as well, such as the inhibition of spontaneous emission, the generation of short-range attractive forces (e.g., the Casimir force), and the possibility of accounting for sonoluminescence phenomena. ZPE topics of interest for spaceflight applications range from fundamental issues (where does inertia come from, and can it be controlled?), through laboratory attempts to extract useful energy from vacuum fluctuations (can the ZPE be “mined” for practical use?), to scientifically grounded extrapolations concerning “engineering the vacuum” (is “warp-drive” space propulsion a scientific possibility?). Recent advances in research into the physics of the underlying ZPE indicate the possibility of potential application in all these areas of interest.

The concept “engineering the vacuum” was first introduced by Nobel Laureate T. D. Lee in his book Particle Physics and Introduction to Field Theory. As stated in Lee’s book: “The experimental method to alter the properties of the vacuum may be called vacuum engineering…. If indeed we are able to alter the vacuum, then we may encounter some new phenomena, totally unexpected.” Recent experiments have indeed shown this to be the case.

With regard to space propulsion, the question of engineering the vacuum can be put succinctly: “Can empty space itself provide the solution?” Surprisingly enough, there are hints that potential help may in fact emerge quite literally out of the vacuum of so-called “empty space.” Quantum theory tells us that empty space is not truly empty, but rather is the seat of myriad energetic quantum processes that could have profound implications for future space travel. To understand these implications it will serve us to review briefly the historical development of the scientific view of what constitutes empty space.

At the time of the Greek philosophers, Democritus argued that empty space was truly a void, otherwise there would not be room for the motion of atoms. Aristotle, on the other hand, argued equally forcefully that what appeared to be empty space was in fact a plenum (a background filled with substance), for did not heat and light travel from place to place as if carried by some kind of medium? The argument went back and forth through the centuries until finally codified by Maxwell’s theory of the luminiferous ether, a plenum that carried electromagnetic waves, including light, much as water carries waves across its surface. Attempts to measure the properties of this ether, or to measure the Earth’s velocity through the ether (as in the Michelson-Morley experiment), however, met with failure. With the rise of special relativity, which did not require reference to such an underlying substrate, Einstein in 1905 effectively banished the ether in favor of the concept that empty space constitutes a true void. Ten years later, however, Einstein’s own development of the general theory of relativity, with its concept of curved space and distorted geometry, forced him to reverse his stand and opt for a richly-endowed plenum, under the new label spacetime metric.

It was the advent of modern quantum theory, however, that established the quantum vacuum, so-called empty space, as a very active place, with particles arising and disappearing, a virtual plasma, and fields continuously fluctuating about their zero baseline values. The energy associated with such processes is called zero-point energy (ZPE), reflecting the fact that such activity remains even at absolute zero.

The Vacuum As A Potential Energy Source

At its most fundamental level, we now recognize that the quantum vacuum is an enormous reservoir of untapped energy, with energy densities conservatively estimated by Feynman and Hibbs to be on the order of nuclear energy densities or greater. Therefore, the question is, can the ZPE be “mined” for practical use? If so, it would constitute a virtually ubiquitous energy supply, a veritable “Holy Grail” energy source for space propulsion.

As utopian as such a possibility may seem, physicist Robert Forward at Hughes Research Laboratories demonstrated proof-of-principle in a paper, “Extracting Electrical Energy from the Vacuum by Cohesion of Charged Foliated Conductors.” Forward’s approach exploited a phenomenon called the Casimir Effect, an attractive quantum force between closely-spaced metal plates, named for its discoverer, H. B. G. Casimir of Philips Laboratories in the Netherlands. The Casimir force, recently measured with high accuracy by S. K. Lamoreaux at the University of Washington, derives from partial shielding of the interior region of the plates from the background zero-point fluctuations of the vacuum electromagnetic field. As shown by Los Alamos theorists Milonni et al., this shielding results in the plates being pushed together by the unbalanced ZPE radiation pressures. The result is a corollary conversion of vacuum energy to some other form such as heat. Proof that such a process violates neither energy nor thermodynamic constraints can be found in a paper by a colleague and myself (Cole & Puthoff) under the title “Extracting Energy and Heat from the Vacuum.”
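For a sense of the magnitudes involved, the standard textbook expression for the Casimir pressure between ideal, perfectly conducting parallel plates is P = π²ħc/(240·d⁴); a quick sketch of what it predicts:

```python
import math

# Casimir pressure between ideal parallel plates: P = pi^2 * hbar * c / (240 d^4)
hbar = 1.0546e-34   # reduced Planck constant (J*s)
c = 2.998e8         # speed of light (m/s)

def casimir_pressure(d_m: float) -> float:
    """Attractive pressure in pascals at plate separation d_m (meters)."""
    return math.pi**2 * hbar * c / (240.0 * d_m**4)

for d in (1e-6, 100e-9, 10e-9):
    print(f"gap = {d*1e9:5.0f} nm  ->  P = {casimir_pressure(d):9.3e} Pa")
# ~1.3e-3 Pa at a 1-micron gap, rising as 1/d^4 as the plates close
```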

Attempts to harness the Casimir and related effects for vacuum energy conversion are ongoing in our laboratory and elsewhere. The fact that their potential application to space propulsion has not gone unnoticed by the Air Force can be seen in its request for proposals for the FY-1986 Defense SBIR Program. Under entry AF86-77, Air Force Rocket Propulsion Laboratory (AFRPL), Topic: Non-Conventional Propulsion Concepts, we find the statement: “Bold, new non-conventional propulsion concepts are solicited…. The specific areas in which AFRPL is interested include…. (6) Esoteric energy sources for propulsion including the zero point quantum dynamic energy of vacuum space.”

Several experimental formats for tapping the ZPE for practical use are under investigation in our laboratory. An early one of interest is based on the idea of a Casimir pinch effect in non-neutral plasmas, basically a plasma equivalent of Forward’s electromechanical charged-plate collapse. The underlying physics is described in a paper submitted for publication by myself and a colleague, and it is illustrative that the first of several patents issued to a consultant to our laboratory, K. R. Shoulders (1991), contains the descriptive phrase “…energy is provided… and the ultimate source of this energy appears to be the zero-point radiation of the vacuum continuum.” Another intriguing possibility is provided by the phenomenon of sonoluminescence, bubble collapse in an ultrasonically-driven fluid which is accompanied by intense, sub-nanosecond light radiation. Although the jury is still out as to the mechanism of light generation, Nobelist Julian Schwinger (1993) has argued for a Casimir interpretation. Possibly related experimental evidence for excess heat generation in ultrasonically-driven cavitation in heavy water is claimed in an EPRI Report by E-Quest Sciences, although attributed to a nuclear micro-fusion process. Work is under way in our laboratory to see if this claim can be replicated.

Yet another proposal for ZPE extraction is described in a recent patent (Mead & Nachamkin, 1996). The approach proposes the use of resonant dielectric spheres, slightly detuned from each other, to provide a beat-frequency downshift of the more energetic high-frequency components of the ZPE to a more easily captured form. We are discussing the possibility of a collaborative effort between us to determine whether such an approach is feasible. Finally, an approach utilizing micro-cavity techniques to perturb the ground state stability of atomic hydrogen is under consideration in our lab. It is based on a paper of mine (Puthoff, 1987) in which I put forth the hypothesis that the nonradiative nature of the ground state is due to a dynamic equilibrium in which radiation emitted due to accelerated electron ground-state motion is compensated by absorption from the ZPE. If this hypothesis is true, there exists the potential for energy generation by the application of the techniques of so-called cavity quantum electrodynamics (QED). In cavity QED, excited atoms are passed through Casimir-like cavities whose structure suppresses electromagnetic cavity modes at the transition frequency between the atom’s excited and ground states. The result is that the so-called “spontaneous” emission time is lengthened considerably (for example, by factors of ten), simply because spontaneous emission is not so spontaneous after all, but rather is driven by vacuum fluctuations. Eliminate the modes, and you eliminate the zero-point fluctuations of the modes, hence suppressing decay of the excited state. As stated in a review article on cavity QED in Scientific American, “An excited atom that would ordinarily emit a low-frequency photon can not do so, because there are no vacuum fluctuations to stimulate its emission….” In its application to energy generation, mode suppression would be used to perturb the hypothesized dynamic ground-state absorption/emission balance to lead to energy release.

An example in which Nature herself may have taken advantage of energetic vacuum effects is discussed in a model published by ZPE colleagues A. Rueda of California State University at Long Beach, B. Haisch of Lockheed-Martin, and D. Cole of IBM (1995). In a paper published in the Astrophysical Journal, they propose that the vast reaches of outer space constitute an ideal environment for ZPE acceleration of nuclei and thus provide a mechanism for “powering up” cosmic rays. Details of the model would appear to account for other observed phenomena as well, such as the formation of cosmic voids. This raises the possibility of utilizing a “sub-cosmic-ray” approach to accelerate protons in a cryogenically-cooled, collision-free vacuum trap and thus extract energy from the vacuum fluctuations by this mechanism.

The Vacuum as the Source of Gravity and Inertia

What of the fundamental forces of gravity and inertia that we seek to overcome in space travel? We have phenomenological theories that describe their effects (Newton’s Laws and their relativistic generalizations), but what of their origins?

The first hint that these phenomena might themselves be traceable to roots in the underlying fluctuations of the vacuum came in a study published by the well-known Russian physicist Andrei Sakharov. Searching to derive Einstein’s phenomenological equations for general relativity from a more fundamental set of assumptions, Sakharov came to the conclusion that the entire panoply of general relativistic phenomena could be seen as induced effects brought about by changes in the quantum-fluctuation energy of the vacuum due to the presence of matter. In this view the attractive gravitational force is more akin to the induced Casimir force discussed above than to the fundamental inverse-square-law Coulomb force between charged particles with which it is often compared. Although speculative when first introduced by Sakharov, this hypothesis has led to a rich and ongoing literature, including contributions of my own on quantum-fluctuation-induced gravity, a literature that continues to yield deep insight into the role played by vacuum forces.

Given an apparent deep connection between gravity and the zero-point fluctuations of the vacuum, a similar connection must exist between these selfsame vacuum fluctuations and inertia. This is because it is an empirical fact that the gravitational and inertial masses have the same value, even though the underlying phenomena are quite disparate. Why, for example, should a measure of the resistance of a body to being accelerated, even if far from any gravitational field, have the same value that is associated with the gravitational attraction between bodies? Indeed, if one is determined by vacuum fluctuations, so must the other. To get to the heart of inertia, consider a specific example in which you are standing on a train in the station. As the train leaves the platform with a jolt, you could be thrown to the floor. What is this force that knocks you down, seemingly coming out of nowhere? This phenomenon, which we conveniently label inertia and go on about our physics, is a subtle feature of the universe that has perplexed generations of physicists from Newton to Einstein. Since in this example the sudden disquieting imbalance results from acceleration “relative to the fixed stars,” in its most provocative form one could say that it was the “stars” that delivered the punch. This key feature was emphasized by the Austrian philosopher of science Ernst Mach, and is now known as Mach’s Principle. Nonetheless, the mechanism by which the stars might do this deed has eluded convincing explication.

Addressing this issue in a paper entitled “Inertia as a Zero-Point Field Lorentz Force,” my colleagues and I (Haisch, Rueda & Puthoff, 1994) were successful in tracing the problem of inertia and its connection to Mach’s Principle to the ZPE properties of the vacuum. In a sentence, although a uniformly moving body does not experience a drag force from the (Lorentz-invariant) vacuum fluctuations, an accelerated body meets a resistance (force) proportional to the acceleration. By accelerated we mean, of course, accelerated relative to the fixed stars. It turns out that an argument can be made that the quantum fluctuations of distant matter structure the local vacuum-fluctuation frame of reference. Thus, in the example of the train the punch was delivered by the wall of vacuum fluctuations acting as a proxy for the fixed stars through which one attempted to accelerate.

The implication for space travel is this: Given the evidence generated in the field of cavity QED (discussed above), there is experimental evidence that vacuum fluctuations can be altered by technological means. This leads to the corollary that, in principle, gravitational and inertial masses can also be altered. The possibility of altering mass with a view to easing the energy burden of future spaceships has been seriously considered by the Advanced Concepts Office of the Propulsion Directorate of the Phillips Laboratory at Edwards Air Force Base. Gravity researcher Robert Forward accepted an assignment to review this concept. His deliverable product was to recommend a broad, multipronged effort involving laboratories from around the world to investigate the inertia model experimentally. The Abstract reads in part:

Many researchers see the vacuum as a central ingredient of 21st-Century physics…. Some even believe the vacuum may be harnessed to provide a limitless supply of energy. This report summarizes an attempt to find an experiment that would test the Haisch, Rueda and Puthoff (HRP) conjecture that the mass and inertia of a body are induced effects brought about by changes in the quantum-fluctuation energy of the vacuum…. It was possible to find an experiment that might be able to prove or disprove that the inertial mass of a body can be altered by making changes in the vacuum surrounding the body.

With regard to action items, Forward in fact recommends a ranked list of not one but four experiments to be carried out to address the ZPE-inertia concept and its broad implications. The recommendations included investigation of the proposed “sub-cosmic-ray energy device” mentioned earlier, and the investigation of a hypothesized “inertia-wind” effect proposed by our laboratory and possibly detected in early experimental work, though the latter possibility is highly speculative at this point.

Engineering the Vacuum For “Warp Drive”

Perhaps one of the most speculative, but nonetheless scientifically-grounded, proposals of all is the so-called Alcubierre Warp Drive. Taking on the challenge of determining whether Warp Drive à la Star Trek was a scientific possibility, general relativity theorist Miguel Alcubierre of the University of Wales set himself the task of determining whether faster-than-light travel was possible within the constraints of standard theory. Although such clearly could not be the case in the flat space of special relativity, general relativity permits consideration of altered spacetime metrics where such a possibility is not a priori ruled out. Alcubierre’s further self-imposed constraints on an acceptable solution included the requirements that no net time distortion should occur (breakfast on Earth, lunch on Alpha Centauri, and home for dinner with your wife and children, not your great-great-great grandchildren), and that the occupants of the spaceship were not to be flattened against the bulkhead by unconscionable accelerations.

A solution meeting all of the above requirements was found and published by Alcubierre in Classical and Quantum Gravity in 1994. The solution discovered by Alcubierre involved the creation of a local distortion of spacetime such that spacetime is expanded behind the spaceship and contracted ahead of it, yielding a hypersurfer-like motion faster than the speed of light as seen by observers outside the disturbed region. In essence, on the outgoing leg of its journey the spaceship is pushed away from Earth and pulled towards its distant destination by the engineered local expansion of spacetime itself. For follow-up on the broader aspects of “metric engineering” concepts, one can refer to a paper published by myself in Physics Essays (Puthoff, 1996). Interestingly enough, the engineering requirements rely on the generation of macroscopic, negative-energy-density, Casimir-like states in the quantum vacuum of the type discussed earlier. Unfortunately, meeting such requirements is beyond technological reach without some unforeseen breakthrough.
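For reference, the published form of Alcubierre’s solution (in units where c = 1) makes that push-pull picture explicit; the line element below is the standard one from his 1994 Classical and Quantum Gravity paper:

```latex
% Alcubierre warp-drive line element (c = 1):
%   x_s(t)  : trajectory of the bubble center
%   v_s     : bubble speed, dx_s/dt
%   r_s     : distance from the bubble center
%   f(r_s)  : smooth top-hat function, 1 inside the bubble, 0 far outside
\[
  ds^2 = -\,dt^2 + \bigl(dx - v_s\, f(r_s)\, dt\bigr)^2 + dy^2 + dz^2
\]
```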

Related, of course, is the knowledge that general relativity permits the possibility of wormholes, topological tunnels which in principle could connect distant parts of the universe, a cosmic subway so to speak. Publishing in the American Journal of Physics, theorists Morris and Thorne initially outlined in some detail the requirements for traversable wormholes and found that, in principle, the possibility exists provided one has access to Casimir-like, negative-energy-density quantum vacuum states. This has led to a rich literature, summarized recently in a book by Matt Visser of Washington University. Again, the technological requirements appear out of reach for the foreseeable future, perhaps awaiting new techniques for cohering the ZPE vacuum fluctuations in order to meet the energy-density requirements.

Where does this leave us? As we peer into the heavens from the depth of our gravity well, hoping for some “magic” solution that will launch our spacefarers first to the planets and then to the stars, we are reminded of Arthur C. Clarke’s phrase that highly-advanced technology is essentially indistinguishable from magic. Fortunately, such magic appears to be waiting in the wings of our deepening understanding of the quantum vacuum in which we live.

[Source: Can the Vacuum Be Engineered for Space Flight Applications? Overview of Theory and Experiments by H. E. Puthoff (PDF)]

Coronal Heating Mystery Explained

Among the many constantly moving, appearing, disappearing and generally explosive events in the sun’s atmosphere, there exist giant plumes of gas — as wide as a state and as long as Earth — that zoom up from the sun’s surface at 150,000 miles per hour. Known as spicules, these are one of several phenomena known to transfer energy and heat throughout the sun’s magnetic atmosphere, or corona.

Thanks to NASA’s Solar Dynamics Observatory (SDO) and the Japanese satellite Hinode, these spicules have recently been imaged and measured better than ever before, showing them to contain hotter gas than previously observed. Thus, they may perhaps play a key role in helping to heat the sun’s corona to a staggering million degrees or more. (A number made more surprising since the sun’s surface itself is only about 10,000 degrees Fahrenheit.)

Just what makes the corona so hot is a poorly understood aspect of the sun’s complicated space weather system. That system can reach Earth, causing auroral lights and, if strong enough, disrupting Earth’s communications and power systems. Understanding such phenomena, therefore, is an important step towards better protecting our satellites and power grids. Solar physicist Dean Pesnell said:

The traditional view is that all heating happens higher up in the corona. The suggestion in this paper is that cool gas is ejected from the sun’s surface in spicules and gets heated on its way to the corona. This doesn’t mean the old view has been completely overturned, but this is a strong suggestion that part of the spicule material gets heated to very high temperatures and provides some coronal heating.

Spicules were first named in the 1940s, but were hard to study in detail until recently, says Bart De Pontieu of Lockheed Martin’s Solar and Astrophysics Laboratory in Palo Alto, Calif., whose work on this subject appears in the January 7, 2011 issue of Science magazine. In visible light, spicules can be seen to send large masses of so-called plasma – the electrified gas that surrounds the sun – up through the lower solar atmosphere or photosphere. The amount of material sent up is stunning, some 100 times as much as streams away from the sun in the solar wind towards the edges of the solar system. But nobody knew if they contained hot gas.

[Image Details: Spicules on the sun, as observed by the Solar Dynamics Observatory. These bursts of gas jet off the surface of the sun at 150,000 miles per hour and contain gas that reaches temperatures over a million degrees. Credit: NASA Goddard/SDO/AIA]

Now, De Pontieu’s team — which included researchers at Lockheed Martin, the High Altitude Observatory of the National Center for Atmospheric Research (NCAR) in Colorado and the University of Oslo, Norway — was able to combine images from SDO and Hinode to produce a more complete picture of the gas inside these gigantic fountains. Tracking the movement and temperature of spicules relies on successfully identifying the same phenomenon in all the images. One complication comes from the fact that different instruments “see” gas at different temperatures. Pictures from Hinode in the visible light range, for example, show only cool gas, while pictures that record UV light show gas that is up to several million degrees.

To show that the previously known cool gas in a spicule lies side by side with some very hot gas requires showing that the hot and cold gas in separate images are located in the same place. Each spacecraft offered specific advantages to help confirm that one was seeing the same event in multiple images. First, Hinode: In 2009, scientists used observations from Hinode and telescopes on Earth to, for the first time, identify a spicule when looking at it head-on. (Imagine how tough it is, looking from over 90 million miles away, to determine that you’re looking at a fountain when you only have a top-down view instead of a side view.) The top-down view of a spicule ensures an image with less extraneous solar material between the camera and the fountain, thus increasing confidence that any observations of hotter gas are indeed part of the spicule itself.

The second aid to tracking a single spicule is SDO’s ability to capture an image of the sun every 12 seconds. “You can track things from one image to the next and know you’re looking at the same thing in a different spot,” says Pesnell. “If you had an image only every 12 minutes, then you couldn’t be sure that what you’re looking at is the same event, since you didn’t watch its whole history.”

 

[Image Details: Artist’s concept of the Solar Dynamics Observatory. 05/12/08 Credit: NASA/Goddard Space Flight Center Conceptual Image Lab]

Bringing these tools together, scientists could compare simultaneous images from SDO and Hinode to create a much more complete picture of spicules. They found that much of the gas is heated to a hundred thousand degrees, while a small fraction of the gas is heated to millions of degrees. Time-lapsed images show that this hot material spews high up into the corona, with much of it falling back down towards the surface of the sun. However, the small fraction of the gas that is heated to millions of degrees does not immediately return to the surface. “Given the large number of spicules on the Sun, and the amount of material in the spicules, if even some of that super hot plasma stays aloft it would make a fair contribution to coronal heating,” says Scott McIntosh from NCAR, who is part of the research team.

Of course, De Pontieu cautions that this does not yet solve the coronal heating mystery. The main result, he says, is that they’re challenging theorists to incorporate the possibility that some coronal heating occurs at lower heights in the solar atmosphere. His next step is to help figure out how much of a role spicules play by studying how spicules form, how they move so quickly, how they get heated to such high temperatures in a short time, and how much mass stays up in the corona. Astrophysicist Jonathan Cirtain, who is the U.S. project scientist for Hinode at NASA’s Marshall Space Flight Center, Huntsville, Ala., points out that incorporating such new information helps address an important question that reaches far beyond the sun. “This breakthrough in our understanding of the mechanisms which transfer energy from the solar photosphere to the corona addresses one of the most compelling questions in stellar astrophysics: How is the atmosphere of a star heated?” he says. “This is a fantastic discovery, and demonstrates the muscle of the NASA Heliophysics System Observatory, comprised of numerous instruments on multiple observatories.”

 Hinode is the second mission in NASA’s Solar Terrestrial Probes program, the goal of which is to improve understanding of fundamental solar and space physics processes. The mission is led by the Japan Aerospace Exploration Agency (JAXA) and the National Astronomical Observatory of Japan (NAOJ). The collaborative mission includes the U.S., the United Kingdom, Norway and Europe. NASA Marshall manages Hinode U.S. science operations and oversaw development of the scientific instrumentation provided for the mission by NASA, academia and industry. The Lockheed Martin Advanced Technology Center is the lead U.S. investigator for the Solar Optical Telescope on Hinode.
  
 [Source: NASA]

Wormhole Induction Propulsion and Interstellar Travel: A Brief Review

Though it seems impossible to colonize the galaxy at sub-light speeds, without FTL travel we can still colonize the universe at sub-light velocities [using self-replicating probes and bioprograms, which I’ve discussed earlier], but the resulting colonies are separated from each other by the vastness of interstellar space. In the past, trading empires have coped with time delays on commerce routes of the order of a few years at most. This suggests that economic zones would find it difficult to encompass more than one star system. Travelling beyond this would require significant re-orientation upon return, catching up with cultural changes, etc. It’s unlikely people would routinely travel much beyond this and return.

A wormhole could be constructed by confining exotic matter to narrow regions to form the edges of three-dimensional space, like a cube. The faces of the cube would resemble mirrors, except that the image is of the view from the other end of the wormhole. Although there is only one cube of material, it appears at two locations to the external observer. The cube links two ‘ends’ of a wormhole together. A traveller, avoiding the edges and crossing through a face of one of the cubes, experiences no stresses and emerges from the corresponding face of the other cube. The cube has no interior but merely facilitates passage from ‘one’ cube to the ‘other’.

The exotic nature of the edge material requires negative energy density and tension/pressure. But the laws of physics do not forbid such materials. The energy density of the vacuum may be negative, as in the Casimir field between two narrow conductors. Negative-pressure fields, according to standard astrophysics, drove the expansion of the universe during its ‘inflationary’ phase. Cosmic string (another astrophysical speculation) has negative tension. The mass of negative energy the wormhole needs is just the amount that would form a black hole if it were positive, normal energy. A traversable wormhole can be thought of as the negative-energy counterpart to a black hole, and so justifies the appellation ‘white’ hole. The amount of negative energy required for a traversable wormhole scales with the linear dimensions of the wormhole mouth. A one-meter cube entrance requires a negative mass of roughly 10^27 kg.
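That 10^27 kg figure can be sanity-checked from the black-hole criterion invoked above: it is roughly the mass whose Schwarzschild radius, r_s = 2GM/c², equals the one-meter mouth size.

```python
# Mass whose Schwarzschild radius r_s = 2GM/c^2 equals the mouth size r:
G = 6.674e-11   # gravitational constant (m^3 kg^-1 s^-2)
c = 2.998e8     # speed of light (m/s)

r = 1.0                       # one-meter wormhole mouth
M = r * c**2 / (2.0 * G)      # solve r = 2GM/c^2 for M
print(f"{M:.2e} kg")          # ~6.7e26 kg, i.e. roughly 10^27 kg as stated
```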

The problem with employing negative energy as a propulsion material is that it is extremely difficult to produce and manage at high energy densities. Still, rapid interplanetary and interstellar space flight by means of spacetime wormholes is possible in principle, whereby the traditional rocket propulsion approach can be abandoned in favor of a new paradigm involving the use of spacetime manipulation. In this scheme, the light-speed barrier becomes irrelevant, and spacecraft no longer need to carry large mass fractions of traditional chemical or nuclear propellants and related infrastructure over distances larger than several astronomical units (AU). Travel time over very large distances would be reduced by orders of magnitude.

In a previous work, Maccone proposed that an ultrahigh magnetic field could generate significant curvature in the spacetime fabric, sufficient for a spacecraft to pass through. More specifically, Maccone claims that static homogeneous magnetic/electric fields with cylindrical symmetry can create spacetime curvature which manifests itself as a traversable wormhole. Although the claim of inducing spacetime curvature is correct, Levi-Civita’s metric solution is not a wormhole. [ref]

It is speculated that future WHIP spacecraft could deploy ultrahigh magnetic fields along with exotic matter-energy fields (e.g. radial electric or magnetic fields, Casimir energy fields, etc.) in space to create a wormhole, and then apply conventional space propulsion to move through the throat and reach the other side in a matter of minutes or days, whence the spacecraft emerges several AU or light-years away from its starting point. The requirement for conventional propulsion in WHIP spacecraft would be strictly limited by the need for short travel through the wormhole throat as well as for orbital maneuvering near distant worlds. The integrated system comprising the magnetic induction/exotic field wormhole and conventional propulsion units could be called WHIPIT, or “Wormhole Induction Propulsion Integrated Technology.”

It is based on the concept that a magnetic field can lead to a distortion in the spacetime fabric governed by the following equation:

a = K / B

where a = radius of curvature of spacetime (in meters) and B = magnetic field strength (in Tesla). K = 3.4840 × 10^18 is known as the radius of curvature constant. Further, B can be calculated from a companion equation relating it to v0, the gravitationally induced variation of light speed within the curvature region (or, loosely, the speed of the spacecraft), and L, the length of the solenoid.
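A quick numeric sketch of the first relation, using the constant quoted above; the 1000-Tesla entry reproduces the 0.368 light-year radius of curvature cited in the Practical Approach section below.

```python
# Radius of spacetime curvature induced by a magnetic field: a = K / B
K = 3.4840e18     # radius-of-curvature constant (Tesla * meters), from the text
LY = 9.4607e15    # meters per light-year

for B in (1e3, 1e9, 1e10):    # field strengths in Tesla
    a = K / B
    print(f"B = {B:.0e} T  ->  a = {a:.3e} m  ({a/LY:.3g} light-years)")
# B = 1e3 T gives a ~ 3.48e15 m, i.e. ~0.368 light-years
```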

[Technical Issues: Quoted]

Traversable wormholes are creatures of classical GTR and represent non-trivial topology change in the spacetime manifold. This makes mathematicians cringe because it raises the question of whether topology can change or fluctuate to accommodate wormhole creation. Black holes and naked singularities are also creatures of GTR representing non-trivial topology change in spacetime, yet they are accepted by the astrophysics and mathematical communities — the former by Hubble Space Telescope discoveries and the latter by theoretical arguments due to Kip Thorne, Stephen Hawking, Roger Penrose and others. The Bohm-Aharonov effect is another example which owes its existence to non-trivial topology change in the manifold. The topology change (censorship) theorems discussed in Visser (1995) make precise mathematical statements about the “mathematician’s topology” (topology of spacetime is fixed!), however, Visser correctly points out that this is a mathematical abstraction. In fact, Visser (1990) proved that the existence of an everywhere Lorentzian metric in spacetime is not a sufficient condition to prevent topology change. Furthermore, Visser (1990, 1995) elaborates that physical probes are not sensitive to this mathematical abstraction, but instead they typically couple to the geometrical features of space. Visser (1990) also showed that it is possible for geometrical effects to mimic the effects of topology change. Topology is too limited a tool to accurately characterize a generic traversable wormhole; in general one needs geometric information to detect the presence of a wormhole, or more precisely to locate the wormhole throat (Visser, private communication, 1997).

Levi-Civita’s spacetime metric is simply a hypercylinder with a position dependent gravitational potential: no asymptotically flat region, no flared-out wormhole mouth and no wormhole throat. Maccone’s equations for the radial (hyperbolic) pressure, stress and energy density of the “magnetic wormhole” configuration are thus incorrect.

[Image Details: What a wormhole mouth might look like to space travelers.]

In addition, directing attention on the behavior of wormhole geometry at asymptotic infinity is not too profitable. Visser (private communication, 1997; Hochberg and Visser, 1997) demonstrates that it is only the behavior near the wormhole throat that is critical to understanding what is going on, and that a generic throat can be defined without having to make all the symmetry assumptions and without assuming the existence of an asymptotically flat spacetime to embed the wormhole in. One only needs to know the generic features of the geometry near the throat in order to guarantee violations of the null energy condition (NEC; see Hawking and Ellis, 1973) for certain open regions near the throat (Visser, private communication, 1997). There are general theorems of differential geometry that guarantee that there must be NEC violations (meaning exotic matter-energy is present) at a wormhole throat. In view of this, however, it is known that static radial electric or magnetic fields are borderline exotic when threading a wormhole if their tension were infinitesimally larger, for a given energy density (Herrmann, 1989; Hawking and Ellis, 1973). Other exotic (energy condition violating) matter-energy fields are known to be squeezed states of the electromagnetic field, Casimir (electromagnetic zero-point) energy and other quantum fields/states/effects. With respect to creating wormholes, these have the unfortunate reputation of alarming physicists. This is unfounded since all the energy condition hypotheses have been experimentally tested in the laboratory and experimentally shown to be false — 25 years before their formulation (Visser, 1990 and references cited therein). Violating the energy conditions commits no offense against nature.

Interstellar Travel and WHIP [Wormhole Induction Propulsion]

WHIP spacecraft will have multifunction integrated technology for propulsion. The Wormhole Induction Propulsion Integrated Technology (WHIPIT) would entail two modes. The first mode is an advanced conventional system (chemical, nuclear fission/fusion, ion/plasma, antimatter, etc.) which would provide propulsion through the wormhole throat, orbital maneuvering capability near stellar or planetary bodies, and spacecraft attitude control and orbit corrections. An important system driver affecting mission performance and cost is the overall propellant mass-fraction required for this mode. A desirable constraint limiting this to acceptable (low) levels would be that the advanced conventional system regenerate its onboard fuel supply internally or obtain and process its fuel supply from the in situ space environment. Other important constraints and/or performance requirements to consider for this propulsion mode would include specific impulse, thrust, energy conversion schemes, etc.

Hypothetical view of two wormhole mouths patched to a hypercylinder curvature envelope. The small (large) configuration results from the radius of curvature induced by a larger (smaller) ultrahigh magnetic field.

The second WHIPIT mode is the stardrive component. This would provide the necessary propulsion to rapidly move the spacecraft over interplanetary or interstellar distances through a traversable wormhole. The system would generate a static, cylindrically symmetric ultrahigh magnetic field to create a hypercylinder curvature envelope (gravity well) near the spacecraft to pre-stress space into a pseudo-wormhole configuration. The radius of the hypercylinder envelope should be no smaller than the largest linear dimension of the spacecraft. As the spacecraft is gravitated into the envelope, the field-generator system then changes the cylindrical magnetic field into a radial configuration while giving it a tension that is greater than its energy density. A traversable wormhole throat is then induced near the spacecraft where the hypercylinder and throat geometries are patched together. The conventional propulsion mode then kicks on to nudge the spacecraft through the throat and send its occupants on their way to adventure. This scenario would also apply if ultrahigh electric fields were employed instead. If optimization of wormhole throat (geometry) creation and hyperspace tunneling distance requires a fully exotic energy field to thread the throat, then the propulsion system would need to be capable of generating and deploying a Casimir (or other exotic) energy field. Ultrahigh magnetic/electric and exotic field generation schemes remain speculative, however, and are left for future work.

Practical Approach

The equations suggest a way to perform a laboratory experiment whereby one could apply a powerful static homogeneous (cylindrically symmetric) magnetic field in a vacuum, thereby creating spacetime curvature in principle, and measure the speed of a light beam through it. A measurable slowing of c in this arrangement would demonstrate that a curvature effect has been created in the experiment.

From Table I, it is apparent that laboratory magnetic field strengths would need to be > 10^9 – 10^10 Tesla so that a significant radius of curvature and slowing of c can be measured. Experiments employing chemical explosive/implosive magnetic technologies would be an ideal arrangement for this. The limit of magnetic field generation for chemical explosives/implosives is of order 10^3 Tesla, while the quantum limit for ordinary metals is ~ 50,000 Tesla. Explosion/implosion work done by Russian (MC-1 generator, ISTC grant), Los Alamos National Lab (ATLAS), National High Magnetic Field Lab and Sandia National Lab (SATURN) investigators has employed magnetic solenoids of good homogeneity with lengths of ~ 10 cm, having a peak rate-of-rise of field of ~ 10^9 Tesla/sec, where a few nanoseconds are spent at 1000 Tesla, which is long enough for a good measurement of c. Further, with picosecond pulses, c could be measured to a part in 10^2 or 10^3. At 1000 Tesla, c^2 – v^2(0) ≈ 0 m^2/sec^2 and the radius of curvature is 0.368 light-years. If the peak rate-of-rise of field (~ 10^9 Tesla/sec) can be used, then a radius of curvature ≤ several × 10^6 km can be generated, along with c^2 – v^2(0) ≥ several × 10^4 m^2/sec^2.

It will be necessary to consider advancing the state of the art of magnetic induction technologies in order to reach static field strengths that are > 10^9 – 10^10 Tesla. Extremely sensitive measurements of c at the one part in 10^6 or 10^7 level may be necessary for laboratory experiments involving field strengths of ~ 10^9 Tesla. Magnetic induction technologies based on nuclear explosives/implosives may need to be seriously considered in order to achieve large-magnitude results. An order of magnitude calculation indicates that magnetic fields generated by nuclear pulsed energy methods could be magnified to (brief) static values of ≥ 10^9 Tesla by factors of the nuclear-to-chemical binding energy ratio (≥ 10^6). Other experimental methods employing CW lasers, repetitive-pulse free electron lasers, neutron beam-pumped UO2 lasers, pulsed laser-plasma interactions or pulsed hot (theta pinch) plasmas either generate insufficient magnetic field strengths for our purposes or cannot generate them at all within their operating modes. [Ref]

That’s why I find it quite interesting; if we ever become capable of generating such high magnetic fields, I think this would be a prevailing propulsion technology. This effect could be used to create a wormhole by patching the hypercylinder envelope to a throat that is induced by either radially stressing the ultrahigh field or employing additional exotic energy.

[Ref: Eric W. Davis, “Wormhole Induction Propulsion (WHIP)”; and Maccone, C. (1995), “Interstellar Travel Through Magnetic Wormholes”, JBIS, Vol. 48, No. 11, pp. 453-458]

Time Travel Paradoxes

By David Lewis

TIME travel, I maintain, is possible. The paradoxes of time travel are oddities, not impossibilities. They prove only this much, which few would have doubted: that a possible world where time travel took place would be a most strange world, different in fundamental ways from the world we think is ours. I shall be concerned here with the sort of time travel that is recounted in science fiction. Not all science fiction writers are clear-headed, to be sure, and inconsistent time travel stories have often been written. But some writers have thought the problems through with great care, and their stories are perfectly consistent.

If I can defend the consistency of some science fiction stories of time travel, then I suppose parallel defenses might be given of some controversial physical hypotheses, such as the hypothesis that time is circular or the hypothesis that there are particles that travel faster than light. But I shall not explore these parallels here. What is time travel? Inevitably, it involves a discrepancy between time and time. Any traveler departs and then arrives at his destination; the time elapsed from departure to arrival (positive, or perhaps zero) is the duration of the journey. But if he is a time traveler, the separation in time between departure and arrival does not equal the duration of his journey. He departs; he travels for an hour, let us say; then he arrives. The time he reaches is not the time one hour after his departure. It is later, if he has traveled toward the future; earlier, if he has traveled toward the past. If he has traveled far toward the past, it is earlier even than his departure. How can it be that the same two events, his departure and his arrival, are separated by two unequal amounts of time? It is tempting to reply that there must be two independent time dimensions; that for time travel to be possible, time must be not a line but a plane. Then a pair of events may have two unequal separations if they are separated more in one of the time dimensions than in the other. The lives of common people occupy straight diagonal lines across the plane of time, sloping at a rate of exactly one hour of time1 per hour of time2. The life of the time traveler occupies a bent path, of varying slope.

On closer inspection, however, this account seems not to give us time travel as we know it from the stories. When the traveler revisits the days of his childhood, will his playmates be there to meet him? No; he has not reached the part of the plane of time where they are. He is no longer separated from them along one of the two dimensions of time, but he is still separated from them along the other. I do not say that two-dimensional time is impossible, or that there is no way to square it with the usual conception of what time travel would be like. Nevertheless I shall say no more about two-dimensional time. Let us set it aside, and see how time travel is possible even in one-dimensional time.

The world—the time traveler’s world, or ours—is a four-dimensional manifold of events. Time is one dimension of the four, like the spatial dimensions except that the prevailing laws of nature discriminate between time and the others—or rather, perhaps, between various timelike dimensions and various spacelike dimensions. (Time remains one-dimensional, since no two timelike dimensions are orthogonal.) Enduring things are timelike streaks: wholes composed of temporal parts, or stages, located at various times and places. Change is qualitative difference between different stages—different temporal parts—of some enduring thing, just as a “change” in scenery from east to west is a qualitative difference between the eastern and western spatial parts of the landscape. If this paper should change your mind about the possibility of time travel, there will be a difference of opinion between two different temporal parts of you, the stage that started reading and the subsequent stage that finishes. If change is qualitative difference between temporal parts of something, then what doesn’t have temporal parts can’t change. For instance, numbers can’t change; nor can the events of any moment of time, since they cannot be subdivided into dissimilar temporal parts. (We have set aside the case of two-dimensional time, and hence the possibility that an event might be momentary along one time dimension but divisible along the other.) It is essential to distinguish change from “Cambridge change,” which can befall anything. Even a number can “change” from being to not being the rate of exchange between pounds and dollars. Even a momentary event can “change” from being a year ago to being a year and a day ago, or from being forgotten to being remembered. But these are not genuine changes. Not just any old reversal in truth value of a time-sensitive sentence about something makes a change in the thing itself.

A time traveler, like anyone else, is a streak through the manifold of space-time, a whole composed of stages located at various times and places. But he is not a streak like other streaks. If he travels toward the past he is a zig-zag streak, doubling back on himself. If he travels toward the future, he is a stretched-out streak. And if he travels either way instantaneously, so that there are no intermediate stages between the stage that departs and the stage that arrives and his journey has zero duration, then he is a broken streak. I asked how it could be that the same two events were separated by two unequal amounts of time, and I set aside the reply that time might have two independent dimensions. Instead I reply by distinguishing time itself, external time as I shall also call it, from the personal time of a particular time traveler: roughly, that which is measured by his wristwatch. His journey takes an hour of his personal time, let us say; his wristwatch reads an hour later at arrival than at departure. But the arrival is more than an hour after the departure in external time, if he travels toward the future; or the arrival is before the departure in external time (or less than an hour after), if he travels toward the past. That is only rough. I do not wish to define personal time operationally, making wristwatches infallible by definition. That which is measured by my own wristwatch often disagrees with external time, yet I am no time traveler; what my misregulated wristwatch measures is neither time itself nor my personal time. Instead of an operational definition, we need a functional definition of personal time; it is that which occupies a certain role in the pattern of events that comprise the time traveler’s life. If you take the stages of a common person, they manifest certain regularities with respect to external time. Properties change continuously as you go along, for the most part, and in familiar ways. First come infantile stages. Last come senile ones. Memories accumulate. Food digests. Hair grows. Wristwatch hands move.

If you take the stages of a time traveler instead, they do not manifest the common regularities with respect to external time. But there is one way to assign coordinates to the time traveler’s stages, and one way only (apart from the arbitrary choice of a zero point), so that the regularities that hold with respect to this assignment match those that commonly hold with respect to external time. With respect to the correct assignment properties change continuously as you go along, for the most part, and in familiar ways. First come infantile stages. Last come senile ones. Memories accumulate. Food digests. Hair grows. Wristwatch hands move. The assignment of coordinates that yields this match is the time traveler’s personal time. It isn’t really time, but it plays the role in his life that time plays in the life of a common person. It’s enough like time so that we can—with due caution— transplant our temporal vocabulary to it in discussing his affairs. We can say without contradiction, as the time traveler prepares to set out, “Soon he will be in the past.”

 

We mean that a stage of him is slightly later in his personal time, but much earlier in external time, than the stage of him that is present as we say the sentence. We may assign locations in the time traveler’s personal time not only to his stages themselves but also to the events that go on around him. Soon Caesar will die, long ago; that is, a stage slightly later in the time traveler’s personal time than his present stage, but long ago in external time, is simultaneous with Caesar’s death. We could even extend the assignment of personal time to events that are not part of the time traveler’s life, and not simultaneous with any of his stages. If his funeral in ancient Egypt is separated from his death by three days of external time and his death is separated from his birth by three score years and ten of his personal time, then we may add the two intervals and say that his funeral follows his birth by three score years and ten and three days of extended personal time. Likewise a bystander might truly say, three years after the last departure of another famous time traveler, that “he may even now—if I may use the phrase—be wandering on some plesiosaurus-haunted oolitic coral reef, or beside the lonely saline seas of the Triassic Age.” If the time traveler does wander on an oolitic coral reef three years after his departure in his personal time, then it is no mistake to say with respect to his extended personal time that the wandering is taking place “even now”.

We may liken intervals of external time to distances as the crow flies, and intervals of personal time to distances along a winding path. The time traveler’s life is like a mountain railway. The place two miles due east of here may also be nine miles down the line, in the westbound direction. Clearly we are not dealing here with two independent dimensions. Just as distance along the railway is not a fourth spatial dimension, so a time traveler’s personal time is not a second dimension of time. How far down the line some place is depends on its location in three-dimensional space, and likewise the locations of events in personal time depend on their locations in one-dimensional external time. Five miles down the line from here is a place where the line goes under a trestle; two miles further is a place where the line goes over a trestle; these places are one and the same. The trestle by which the line crosses over itself has two different locations along the line, five miles down from here and also seven. In the same way, an event in a time traveler’s life may have more than one location in his personal time. If he doubles back toward the past, but not too far, he may be able to talk to himself. The conversation involves two of his stages, separated in his personal time but simultaneous in external time. The location of the conversation in personal time should be the location of the stage involved in it. But there are two such stages; to share the locations of both, the conversation must be assigned two different locations in personal time.
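The functional definition of personal time is, at bottom, a re-coordinatization of the traveler’s stages, and a toy model can make the idea concrete. The sketch below is purely illustrative: the stage data, the use of a “memories” counter as a stand-in for the regularities Lewis lists, and all the names are hypothetical inventions, not anything from the essay itself.

# A toy model of personal time (illustrative only; all data invented).
# Each stage of the traveler records its external time and a proxy for
# the regularities Lewis lists: memories accumulate along personal time.
stages = [
    {"external_year": 1990, "memories": 0},   # infancy
    {"external_year": 2010, "memories": 20},  # steps into the time machine
    {"external_year": 1921, "memories": 21},  # arrives in the past
    {"external_year": 1922, "memories": 22},  # lives on in the 1920s
]

# Personal time is the one assignment of coordinates under which the
# familiar regularities hold -- here, the order of accumulating memories.
for personal_t, stage in enumerate(sorted(stages, key=lambda s: s["memories"])):
    print(f"personal time {personal_t}: external year {stage['external_year']}")

# The output shows the zig-zag streak: personal time runs
# 1990 -> 2010 -> 1921 -> 1922, so 1921 comes *after* 2010 for the traveler.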

The more we extend the assignment of personal time outwards from the time traveler’s stages to the surrounding events, the more will such events acquire multiple locations. It may happen also, as we have already seen, that events that are not simultaneous in external time will be assigned the same location in personal time—or rather, that at least one of the locations of one will be the same as at least one of the locations of the other. So extension must not be carried too far, lest the location of events in extended personal time lose its utility as a means of keeping track of their roles in the time traveler’s history. A time traveler who talks to himself, on the telephone perhaps, looks for all the world like two different people talking to each other. It isn’t quite right to say that the whole of him is in two places at once, since neither of the two stages involved in the conversation is the whole of him, or even the whole of the part of him that is located at the (external) time of the conversation. What’s true is that he, unlike the rest of us, has two different complete stages located at the same time at different places. What reason have I, then, to regard him as one person and not two? What unites his stages, including the simultaneous ones, into a single person?

The problem of personal identity is especially acute if he is the sort of time traveler whose journeys are instantaneous, a broken streak consisting of several unconnected segments. Then the natural way to regard him as more than one person is to take each segment as a different person. No one of them is a time traveler, and the peculiarity of the situation comes to this: all but one of these several people vanish into thin air, all but another one appear out of thin air, and there are remarkable resemblances between one at his appearance and another at his vanishing. Why isn’t that at least as good a description as the one I gave, on which the several segments are all parts of one time traveler? I answer that what unites the stages (or segments) of a time traveler is the same sort of mental, or mostly mental, continuity and connectedness that unites anyone else. The only difference is that whereas a common person is connected and continuous with respect to external time, the time traveler is connected and continuous only with respect to his own personal time. Taking the stages in order, mental (and bodily) change is mostly gradual rather than sudden, and at no point is there sudden change in too many different respects all at once. (We can include position in external time among the respects we keep track of, if we like. It may change discontinuously with respect to personal time if not too much else changes discontinuously along with it.) Moreover, there is not too much change altogether. Plenty of traits and traces last a lifetime. Finally, the connectedness and the continuity are not accidental. They are explicable; and further, they are explained by the fact that the properties of each stage depend causally on those of the stages just before in personal time, the dependence being such as tends to keep things the same. To see the purpose of my final requirement of causal continuity, let us see how it excludes a case of counterfeit time travel. Fred was created out of thin air, as if in the midst of life; he lived a while, then died. He was created by a demon, and the demon had chosen at random what Fred was to be like at the moment of his creation. Much later someone else, Sam, came to resemble Fred as he was when first created. At the very moment when the resemblance became perfect, the demon destroyed Sam.

Fred and Sam together are very much like a single person: a time traveler whose personal time starts at Sam’s birth, goes on to Sam’s destruction and Fred’s creation, and goes on from there to Fred’s death. Taken in this order, the stages of Fred-cum-Sam have the proper connectedness and continuity. But they lack causal continuity, so Fred-cum-Sam is not one person and not a time traveler. Perhaps it was pure coincidence that Fred at his creation and Sam at his destruction were exactly alike; then the connectedness and continuity of Fred-cum-Sam across the crucial point are accidental. Perhaps instead the demon remembered what Fred was like, guided Sam toward perfect resemblance, watched his progress, and destroyed him at the right moment. Then the connectedness and continuity of Fred-cum-Sam has a causal explanation, but of the wrong sort. Either way, Fred’s first stages do not depend causally for their properties on Sam’s last stages. So the case of Fred and Sam is rightly disqualified as a case of personal identity and as a case of time travel.

We might expect that when a time traveler visits the past there will be reversals of causation. You may punch his face before he leaves, causing his eye to blacken centuries ago. Indeed, travel into the past necessarily involves reversed causation. For time travel requires personal identity—he who arrives must be the same person who departed. That requires causal continuity, in which causation runs from earlier to later stages in the order of personal time. But the orders of personal and external time disagree at some point, and there we have causation that runs from later to earlier stages in the order of external time. Elsewhere I have given an analysis of causation in terms of chains of counterfactual dependence, and I took care that my analysis would not rule out causal reversal a priori. I think I can argue (but not here) that under my analysis the direction of counterfactual dependence and causation is governed by the direction of other de facto asymmetries of time. If so, then reversed causation and time travel are not excluded altogether, but can occur only where there are local exceptions to these asymmetries. As I said at the outset, the time traveler’s world would be a most strange one. Stranger still, if there are local—but only local—causal reversals, then there may also be causal loops: closed causal chains in which some of the causal links are normal in direction and others are reversed. (Perhaps there must be loops if there is reversal: I am not sure.) Each event on the loop has a causal explanation, being caused by events elsewhere on the loop. That is not to say that the loop as a whole is caused or explicable. It may not be. Its inexplicability is especially remarkable if it is made up of the sort of causal processes that transmit information. Recall the time traveler who talked to himself. He talked to himself about time travel, and in the course of the conversation his older self told his younger self how to build a time machine. That information was available in no other way. His older self knew how because his younger self had been told and the information had been preserved by the causal processes that constitute recording, storage, and retrieval of memory traces. His younger self knew, after the conversation, because his older self had known and the information had been preserved by the causal processes that constitute telling.

But where did the information come from in the first place? Why did the whole affair happen? There is simply no answer. The parts of the loop are explicable, the whole of it is not. Strange! But not impossible, and not too different from inexplicabilities we are already inured to. Almost everyone agrees that God, or the Big Bang, or the entire infinite past of the universe, or the decay of a tritium atom, is uncaused and inexplicable. Then if these are possible, why not also the inexplicable causal loops that arise in time travel?

I have committed a circularity in order not to talk about too much at once, and this is a good place to set it right. In explaining personal time, I presupposed that we were entitled to regard certain stages as comprising a single person. Then in explaining what united the stages into a single person, I presupposed that we were given a personal time order for them. The proper way to proceed is to define personhood and personal time simultaneously, as follows. Suppose given a pair of an aggregate of person-stages, regarded as a candidate for personhood, and an assignment of coordinates to those stages, regarded as a candidate for his personal time. If the stages satisfy the conditions given in my circular explanation with respect to the assignment of coordinates, then both candidates succeed: the stages do comprise a person and the assignment is his personal time.

I have argued so far that what goes on in a time travel story may be a possible pattern of events in four-dimensional space-time with no extra time dimension; that it may be correct to regard the scattered stages of the alleged time traveler as comprising a single person; and that we may legitimately assign to those stages and their surroundings a personal time order that disagrees sometimes with their order in external time. Some might concede all this, but protest that the impossibility of time travel is revealed after all when we ask not what the time traveler does, but what he could do. Could a time traveler change the past? It seems not: the events of a past moment could no more change than numbers could. Yet it seems that he would be as able as anyone to do things that would change the past if he did them. If a time traveler visiting the past both could and couldn’t do something that would change it, then there cannot possibly be such a time traveler.

Consider Tim. He detests his grandfather, whose success in the munitions trade built the family fortune that paid for Tim’s time machine. Tim would like nothing so much as to kill Grandfather, but alas he is too late. Grandfather died in his bed in 1957, while Tim was a young boy. But when Tim has built his time machine and traveled to 1920, suddenly he realizes that he is not too late after all. He buys a rifle; he spends long hours in target practice; he shadows Grandfather to learn the route of his daily walk to the munitions works; he rents a room along the route; and there he lurks, one winter day in 1921, rifle loaded, hate in his heart, as Grandfather walks closer, closer . . .

Tim can kill Grandfather. He has what it takes. Conditions are perfect in every way: the best rifle money could buy, Grandfather an easy target only twenty yards away, not a breeze, door securely locked against intruders, Tim a good shot to begin with and now at the peak of training, and so on. What’s to stop him? The forces of logic will not stay his hand! No powerful chaperone stands by to defend the past from interference. (To imagine such a chaperone, as some authors do, is a boring evasion, not needed to make Tim’s story consistent.) In short, Tim is as much able to kill Grandfather as anyone ever is to kill anyone. Suppose that down the street another sniper, Tom, lurks waiting for another victim, Grandfather’s partner. Tom is not a time traveler, but otherwise he is just like Tim: same make of rifle, same murderous intent, same everything. We can even suppose that Tom, like Tim, believes himself to be a time traveler. Someone has gone to a lot of trouble to deceive Tom into thinking so. There’s no doubt that Tom can kill his victim; and Tim has everything going for him that Tom does. By any ordinary standards of ability, Tim can kill Grandfather.

Tim cannot kill Grandfather. Grandfather lived, so to kill him would be to change the past. But the events of a past moment are not subdivisible into temporal parts and therefore cannot change. Either the events of 1921 timelessly do include Tim’s killing of Grandfather, or else they timelessly don’t. We may be tempted to speak of the “original” 1921 that lies in Tim’s personal past, many years before his birth, in which Grandfather lived; and of the “new” 1921 in which Tim now finds himself waiting in ambush to kill Grandfather. But if we do speak so, we merely confer two names on one thing. The events of 1921 are doubly located in Tim’s (extended) personal time, like the trestle on the railway, but the “original” 1921 and the “new” 1921 are one and the same.

If Tim did not kill Grandfather in the “original” 1921, then if he does kill Grandfather in the “new” 1921, he must both kill and not kill Grandfather in 1921—in the one and only 1921, which is both the “new” and the “original” 1921. It is logically impossible that Tim should change the past by killing Grandfather in 1921. So Tim cannot kill Grandfather. Not that past moments are special; no more can anyone change the present or the future. Present and future momentary events no more have temporal parts than past ones do. You cannot change a present or future event from what it was originally to what it is after you change it. What you can do is to change the present or the future from the unactualized way they would have been without some action of yours to the way they actually are. But that is not an actual change: not a difference between two successive actualities. And Tim can certainly do as much; he changes the past from the unactualized way it would have been without him to the one and only way it actually is. To “change” the past in this way, Tim need not do anything momentous; it is enough just to be there, however unobtrusively. You know, of course, roughly how the story of Tim must go on if it is to be consistent: he somehow fails. Since Tim didn’t kill Grandfather in the “original” 1921, consistency demands that neither does he kill Grandfather in the “new” 1921. Why not? For some commonplace reason.

Perhaps some noise distracts him at the last moment, perhaps he misses despite all his target practice, perhaps his nerve fails, perhaps he even feels a pang of unaccustomed mercy. His failure by no means proves that he was not really able to kill Grandfather. We often try and fail to do what we are able to do. Success at some tasks requires not only ability but also luck, and lack of luck is not a temporary lack of ability. Suppose our other sniper, Tom, fails to kill Grandfather’s partner for the same reason, whatever it is, that Tim fails to kill Grandfather. It does not follow that Tom was unable to. No more does it follow in Tim’s case that he was unable to do what he did not succeed in doing. We have this seeming contradiction: “Tim doesn’t, but can, because he has what it takes” versus “Tim doesn’t, and can’t, because it’s logically impossible to change the past.” I reply that there is no contradiction. Both conclusions are true, and for the reasons given. They are compatible because “can” is equivocal. To say that something can happen means that its happening is compossible with certain facts. Which facts? That is determined, but sometimes not determined well enough, by context. An ape can’t speak a human language—say, Finnish—but I can. Facts about the anatomy and operation of the ape’s larynx and nervous system are not compossible with his speaking Finnish.

The corresponding facts about my larynx and nervous system are compossible with my speaking Finnish. But don’t take me along to Helsinki as your interpreter: I can’t speak Finnish. My speaking Finnish is compossible with the facts considered so far, but not with further facts about my lack of training. What I can do, relative to one set of facts, I cannot do, relative to another, more inclusive, set. Whenever the context leaves it open which facts are to count as relevant, it is possible to equivocate about whether I can speak Finnish. It is likewise possible to equivocate about whether it is possible for me to speak Finnish, or whether I am able to, or whether I have the ability or capacity or power or potentiality to. Our many words for much the same thing are little help since they do not seem to correspond to different fixed delineations of the relevant facts.

Tim’s killing Grandfather that day in 1921 is compossible with a fairly rich set of facts: the facts about his rifle, his skill and training, the unobstructed line of fire, the locked door and the absence of any chaperone to defend the past, and so on. Indeed it is compossible with all the facts of the sorts we would ordinarily count as relevant in saying what someone can do. It is compossible with all the facts corresponding to those we deem relevant in Tom’s case. Relative to these facts, Tim can kill Grandfather. But his killing Grandfather is not compossible with another, more inclusive set of facts. There is the simple fact that Grandfather was not killed. Also there are various other facts about Grandfather’s doings after 1921 and their effects: Grandfather begat Father in 1922 and Father begat Tim in 1949. Relative to these facts, Tim cannot kill Grandfather. He can and he can’t, but under different delineations of the relevant facts. You can reasonably choose the narrower delineation, and say that he can; or the wider delineation, and say that he can’t. But choose. What you mustn’t do is waver, say in the same breath that he both can and can’t, and then claim that this contradiction proves that time travel is impossible. Exactly the same goes for Tom’s parallel failure.
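Lewis’s context-relative “can” has an almost mechanical structure, which a small sketch can make vivid. Everything below (the fact sets, the crude consistency test, the names) is a hypothetical illustration of compossibility, not a formalism from the essay.

# "Can" modeled as compossibility with a contextually chosen set of facts.
# The facts and the consistency check are invented for illustration.
narrow_facts = {"fine rifle", "clear shot", "trained marksman"}
wide_facts = narrow_facts | {"Grandfather was not killed in 1921"}

def compossible(act, facts):
    # A crude stand-in for joint logical consistency: the act is ruled
    # out only if the fact set contains a direct contradiction of it.
    contradicts = {"Tim kills Grandfather": "Grandfather was not killed in 1921"}
    return contradicts.get(act) not in facts

print(compossible("Tim kills Grandfather", narrow_facts))  # True:  "he can"
print(compossible("Tim kills Grandfather", wide_facts))    # False: "he can't"

Both verdicts are correct relative to their own fact set; a contradiction appears only if one silently switches sets mid-argument, which is exactly the wavering Lewis warns against.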

For Tom to kill Grandfather’s partner also is compossible with all facts of the sorts we ordinarily count as relevant, but not compossible with a larger set including, for instance, the fact that the intended victim lived until 1934. In Tom’s case we are not puzzled. We say without hesitation that he can do it, because we see at once that the facts that are not compossible with his success are facts about the future of the time in question and therefore not the sort of facts we count as relevant in saying what Tom can do. In Tim’s case it is harder to keep track of which facts are relevant. We are accustomed to exclude facts about the future of the time in question, but to include some facts about its past. Our standards do not apply unequivocally to the crucial facts in this special case: Tim’s failure, Grandfather’s survival, and his subsequent doings. If we have foremost in mind that they lie in the external future of that moment in 1921 when Tim is almost ready to shoot, then we exclude them just as we exclude the parallel facts in Tom’s case. But if we have foremost in mind that they precede that moment in Tim’s extended personal time, then we tend to include them. To make the latter be foremost in your mind, I chose to tell Tim’s story in the order of his personal time, rather than in the order of external time. The fact of Grandfather’s survival until 1957 had already been told before I got to the part of the story about Tim lurking in ambush to kill him in 1921. We must decide, if we can, whether to treat these personally past and externally future facts as if they were straightforwardly past or as if they were straightforwardly future.

Fatalists—the best of them—are philosophers who take facts we count as irrelevant in saying what someone can do, disguise them somehow as facts of a different sort that we count as relevant, and thereby argue that we can do less than we think—indeed, that there is nothing at all that we don’t do but can. I am not going to vote Republican next fall. The fatalist argues that, strange to say, I not only won’t but can’t; for my voting Republican is not compossible with the fact that it was true already in the year 1548 that I was not going to vote Republican 428 years later. My rejoinder is that this is a fact, sure enough; however, it is an irrelevant fact about the future masquerading as a relevant fact about the past, and so should be left out of account in saying what, in any ordinary sense, I can do. We are unlikely to be fooled by the fatalist’s methods of disguise in this case, or other ordinary cases. But in cases of time travel, precognition, or the like, we’re on less familiar ground, so it may take less of a disguise to fool us. Also, new methods of disguise are available, thanks to the device of personal time.

Here’s another bit of fatalist trickery. Tim, as he lurks, already knows that he will fail. At least he has the wherewithal to know it if he thinks; he knows it implicitly. For he remembers that Grandfather was alive when he was a boy, he knows that those who are killed are thereafter not alive, he knows (let us suppose) that he is a time traveler who has reached the same 1921 that lies in his personal past, and he ought to understand—as we do—why a time traveler cannot change the past. What is known cannot be false. So his success is not only not compossible with facts that belong to the external future and his personal past, but also is not compossible with the present fact of his knowledge that he will fail.

I reply that the fact of his foreknowledge, at the moment while he waits to shoot, is not a fact entirely about that moment. It may be divided into two parts. There is the fact that he then believes (perhaps only implicitly) that he will fail; and there is the further fact that his belief is correct, and correct not at all by accident, and hence qualifies as an item of knowledge. It is only the latter fact that is not compossible with his success, but it is only the former that is entirely about the moment in question. In calling Tim’s state at that moment knowledge, not just belief, facts about personally earlier but externally later moments were smuggled into consideration.

I have argued that Tim’s case and Tom’s are alike, except that in Tim’s case we are more tempted than usual—and with reason—to opt for a semi-fatalist mode of speech. But perhaps they differ in another way. In Tom’s case, we can expect a perfectly consistent answer to the counterfactual question: what if Tom had killed Grandfather’s partner? Tim’s case is more difficult. If Tim had killed Grandfather, it seems offhand that contradictions would have been true. The killing both would and wouldn’t have occurred. No Grandfather, no Father; no Father, no Tim; no Tim, no killing. And for good measure: no Grandfather, no family fortune; no fortune, no time machine; no time machine, no killing. So the supposition that Tim killed Grandfather seems impossible in more than the semi-fatalistic sense already granted. If you suppose Tim to kill Grandfather and hold all the rest of his story fixed, of course you get a contradiction. But likewise if you suppose Tom to kill Grandfather’s partner and hold the rest of his story fixed—including the part that told of his failure—you get a contradiction. If you make any counterfactual supposition and hold all else fixed you get a contradiction. The thing to do is rather to make the counterfactual supposition and hold all else as close to fixed as you consistently can. That procedure will yield perfectly consistent answers to the question: what if Tim had killed Grandfather?

In that case, some of the story I told would not have been true. Perhaps Tim might have been the time-traveling grandson of someone else. Perhaps he might have been the grandson of a man killed in 1921 and miraculously resurrected. Perhaps he might have been not a time traveler at all, but rather someone created out of nothing in 1920 equipped with false memories of a personal past that never was. It is hard to say what is the least revision of Tim’s story to make it true that Tim kills Grandfather, but certainly the contradictory story in which the killing both does and doesn’t occur is not the least revision. Hence it is false (according to the unrevised story) that if Tim had killed Grandfather then contradictions would have been true. What difference would it make if Tim travels in branching time?

Suppose that at the possible world of Tim’s story the space-time manifold branches; the branches are separated not in time, and not in space, but in some other way. Tim travels not only in time but also from one branch to another. In one branch Tim is absent from the events of 1921; Grandfather lives; Tim is born, grows up, and vanishes in his time machine. The other branch diverges from the first when Tim turns up in 1920; there Tim kills Grandfather and Grandfather leaves no descendants and no fortune; the events of the two branches differ more and more from that time on. Certainly this is a consistent story; it is a story in which Grandfather both is and isn’t killed in 1921 (in the different branches); and it is a story in which Tim, by killing Grandfather, succeeds in preventing his own birth (in one of the branches). But it is not a story in which Tim’s killing of Grandfather both does occur and doesn’t: it simply does, though it is located in one branch and not the other. And it is not a story in which Tim changes the past. 1921 and later years contain the events of both branches, coexisting somehow without interaction. It remains true at all the personal times of Tim’s life, even after the killing, that Grandfather lives in one branch and dies in the other.

[Credit: David Lewis]
[PDF version of article: Time Travel Paradoxes by David Lewis]

Black Holes Serving as Particle Accelerator

[Image Details: A particle collision at the RHIC. Image via Wikipedia]

A black hole particle accelerator!! It sounds strange, but it is not as strange as it may at first appear. Particle accelerators are devices used to raise particles to very high energy levels.

Beams of high-energy particles are useful for both fundamental and applied research in the sciences, and also in many technical and industrial fields unrelated to fundamental research. It has been estimated that there are approximately 26,000 accelerators worldwide. Of these, only about 1% are research machines with energies above 1 GeV (the kind of machine relevant here); about 44% are for radiotherapy, about 41% for ion implantation, about 9% for industrial processing and research, and about 4% for biomedical and other low-energy research.

For the most basic inquiries into the dynamics and structure of matter, space, and time, physicists seek the simplest kinds of interactions at the highest possible energies. These typically entail particle energies of many GeV, and the interactions of the simplest kinds of particles: leptons (e.g. electrons and positrons) and quarks for the matter, or photons and gluons for the field quanta. Since isolated quarks are experimentally unavailable due to color confinement, the simplest available experiments involve the interactions of, first, leptons with each other, and second, of leptons with nucleons, which are composed of quarks and gluons. To study the collisions of quarks with each other, scientists resort to collisions of nucleons, which at high energy may be usefully considered as essentially 2-body interactions of the quarks and gluons of which they are composed. Thus elementary particle physicists tend to use machines creating beams of electrons, positrons, protons, and anti-protons, interacting with each other or with the simplest nuclei (e.g., hydrogen or deuterium) at the highest possible energies, generally hundreds of GeV or more. Nuclear physicists and cosmologists may use beams of bare atomic nuclei, stripped of electrons, to investigate the structure, interactions, and properties of the nuclei themselves, and of condensed matter at extremely high temperatures and densities, such as might have occurred in the first moments of the Big Bang. These investigations often involve collisions of heavy nuclei – of atoms like iron or gold – at energies of several GeV per nucleon.

[Image Details: A typical Cyclotron]

At present we accelerate particles to high energy levels by increasing their kinetic energy with very strong electromagnetic fields; the particles are driven according to the Lorentz force. However, such accelerators have their limitations: we cannot push particles to arbitrarily high energy levels, and a great deal of distance must be covered before a particle acquires the desired speed.
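To see why distance is the bottleneck, here is a back-of-the-envelope sketch. The 30 MV/m accelerating gradient is an assumed, illustrative figure (roughly the scale discussed for modern linear-collider designs), not a quoted specification.

# How long would a straight-line accelerator need to be to reach
# LHC-scale proton energies? The gradient value is an assumption.
target_energy_eV = 7.0e12     # 7 TeV, the LHC's design energy per proton beam
gradient_eV_per_m = 30.0e6    # assumed accelerating gradient: 30 MV/m
length_km = target_energy_eV / gradient_eV_per_m / 1000.0
print(f"Required length: about {length_km:.0f} km")   # ~233 km

This is why circular machines like the LHC recycle the same accelerating cavities on every lap, at the price of enormous bending magnets.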

This can be simply appreciated from the astounding details of the LHC. The precise circumference of the LHC accelerator is 26 659 m, with a total of 9300 magnets inside. Not only is the LHC the world’s largest particle accelerator, just one-eighth of its cryogenic distribution system would qualify as the world’s largest fridge. It can collide particles at energies up to 14.0 TeV (7.0 TeV per proton beam).
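For a sense of scale, a quick computation of how relativistic a 7 TeV proton is, using the proton rest energy of about 0.938 GeV:

# Lorentz factor and speed of a 7 TeV proton.
beam_energy_GeV = 7000.0
proton_rest_energy_GeV = 0.938
gamma = beam_energy_GeV / proton_rest_energy_GeV
beta = (1.0 - 1.0 / gamma**2) ** 0.5
print(f"gamma = {gamma:.0f}, v/c = {beta:.9f}")   # gamma ~ 7463, v ~ 0.999999991 c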

It is pretty obvious, then, that accelerating particles much beyond this energy level would be almost impossible with machines of this kind.

An advanced civilization at the development level of Type III or Type IV would more likely choose to harness black holes rather than engineer an LHC or Tevatron on an astrophysical scale. Kaluza-Klein black holes are excellent for this purpose: they are very similar to Kerr black holes, except that they are charged.

Kerr Black Holes

Kerr spacetime is the unique explicitly defined model of the gravitational field of a rotating star. The spacetime is fully revealed only when the star collapses, leaving a black hole — otherwise the bulk of the star blocks exploration. The qualitative character of Kerr spacetime depends on its mass and its rate of rotation, the most interesting case being when the rotation is slow. (If the rotation stops completely, Kerr spacetime reduces to Schwarzschild spacetime.)

The existence of black holes in our universe is generally accepted — by now it would be hard for astronomers to run the universe without them. Everyone knows that no light can escape from a black hole, but convincing evidence for their existence is provided by their effect on their visible neighbors, as when an observable star behaves like one of a binary pair but no companion is visible.

Suppose that, travelling in our spacecraft, we approach an isolated, slowly rotating black hole. It can then be observed as a black disk against the stars of the background sky. Explorers familiar with Schwarzschild black holes will refuse to cross its boundary horizon. First of all, return trips through a horizon are never possible, and in the Schwarzschild case there is a more immediate objection: after the passage, any material object will, in a fraction of a second, be devoured by a singularity in spacetime.

If we dare to penetrate the horizon of this Kerr black hole we will find … another horizon. Behind this, the singularity in spacetime now appears, not as a central focus, but as a ring — a circle of infinite gravitational forces. Fortunately, this ring singularity is not quite as dangerous as the Schwarzschild one — it is possible to avoid it and enter a new region of spacetime, by passing through either of two “throats” bounded by the ring (see The Big Picture).

In the new region, escape from the ring singularity is easy because the gravitational effect of the black hole is reversed — it now repels rather than attracts. As distance increases, this negative gravity weakens, just as on the positive side, until its effect becomes negligible.

A quick departure may be prudent, but will prevent discovery of something strange: the ring singularity is the outer equator of a spatial solid torus that is, quite simply, a time machine. Travelling within it, one can reach arbitrarily far back into the past of any entity inside the double horizons. In principle you can arrange a bridge game, with all four players being you yourself, at different ages. But there is no way to meet Julius Caesar or your (predeparture) childhood self since these lie on the other side of two impassable horizons.

This rough description is reasonably accurate within its limits, but its apparent completeness is deceptive. Kerr spacetime is vaster — and more symmetrical. Outside the horizons, it turns out that the model described above lacks a distant past, and, on the negative gravity side, a distant future. Harder to imagine are the deficiencies of the spacetime region between the two horizons. This region definitely does not resemble the Newtonian 3-space between two bounding spheres, furnished with a clock to tell time. In it, space and time are turbulently mixed. Pebbles dropped experimentally there can simply vanish in finite time — and new objects can magically appear.

Recently, an interesting observation was made that black holes can accelerate particles up to unlimited energies Ecm in the centre of mass frame. These results were obtained for the Kerr metric (and were also extended to the extremal Kerr-Newman one). It was demonstrated that the effect in question exists in a generic black hole background (so the black hole can be surrounded by matter) provided the black hole is rotating. Thus, rotation seemed to be an essential part of the effect. It is also necessary that one of the colliding particles have the angular momentum L1 = E1/ωH, where E1 is the energy and ωH is the angular velocity of a generic rotating black hole. If ωH → 0, then L1 → ∞, so for any particle with finite L the effect becomes impossible. For example, in the Schwarzschild space-time, the ratio Ecm/m (m is the mass of the particles) is finite and cannot exceed 2√5 for particles coming from infinity.
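For reference, the centre of mass energy being discussed here follows from a standard kinematic relation, valid in any metric; this is textbook relativistic kinematics rather than anything specific to the papers cited. For two particles of equal mass m with 4-velocities u1 and u2 meeting at a point:

Ecm² = 2m² (1 + γ),  where γ = −gμν u1^μ u2^ν

Here γ is the relative Lorentz factor of the two particles at the collision point (γ = 1 when they are mutually at rest, giving Ecm = 2m). The acceleration mechanism above works by making γ blow up near the horizon when one particle carries the critical angular momentum.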

Meanwhile, the role played by angular momentum and rotation is sometimes effectively modeled by electric charge and potential in spherically-symmetric space-times. So one may ask: can we achieve the infinite acceleration without rotation, simply due to the presence of electric charge? Apart from its intrinsic interest, a positive answer would also be important in that spherically-symmetric space-times are usually much simpler and admit much more detailed investigation, mimicking relevant features of rotating space-times. In a research paper, Oleg B. Zaslavskii showed that the centre of mass energy can indeed reach very high levels, with the colliding particles gaining almost unlimited centre of mass energy before collision. Following the analysis and the energy equations, the answer is ‘Yes!’.
A similar conclusion was also reached by Pu Zion Mao in the research paper ‘Kaluza-Klein Black Holes Serving as Particle Accelerators’.
Consider two massive particles, with angular momenta L1 and L2, falling into a black hole.

Plotting r and the centre of mass energy near the horizon of a Kaluza-Klein black hole (Fig. 1 and Fig. 2 of the paper), we can see that there exists a critical angular momentum Lc = 2μ/√(1−ν²) for the geodesic of a particle to reach the horizon. If L > Lc, the geodesic never reaches the horizon. On the other hand, if the angular momentum is too small, the particle falls into the black hole and the CM energy of the collision is limited. However, when L1 or L2 takes the critical value Lc = 2μ/√(1−ν²), the CM energy is unlimited, with no restriction on the angular momentum per unit mass J/M of the black hole.
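A minimal numerical sketch of the critical value quoted above (μ and ν are the black hole’s mass and boost parameters in the paper’s notation; the sample values are invented for illustration):

import math

def critical_angular_momentum(mu, nu):
    # Lc = 2*mu / sqrt(1 - nu^2), the threshold quoted above: geodesics
    # with L > Lc never reach the horizon; L = Lc gives unbounded Ecm.
    return 2.0 * mu / math.sqrt(1.0 - nu**2)

for nu in (0.0, 0.5, 0.9, 0.99):
    print(f"nu = {nu}: Lc = {critical_angular_momentum(1.0, nu):.3f}")

The fine-tuning is the practical catch: only particles whose angular momentum sits exactly at Lc skim the horizon slowly enough to take part in an arbitrarily energetic collision.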
Now, it seems quite mesmerizing that an advanced alien civilization might well prefer to implement black holes as particle accelerators, though how such an implementation could be managed in practice remains an open question.