Interstellar Transportation: How? [Part I]

[Image Details: This image illustrates Robert L. Forward's sch... Image via Wikipedia.]

Interstellar travel seems impossible within our lifetime, yet there are several technological approaches that could make it happen before our eyes. Today I stumbled across an excellent paper which explains the intriguing plans in a concise way.

By Dana G. Andrews

Interstellar travel is difficult, but not impossible. The technology to launch slow interstellar exploration missions, with total delta velocities (ΔVs) of a few hundred kilometers per second, has been demonstrated in laboratories. However, slow interstellar probes will probably never be launched, because no current organization would start a project that has no return for thousands of years, especially if it can wait a few dozen years for improved technology and get the results sooner. One answer to the famous Fermi paradox is that no civilization ever launches colony ships because the colonists are always waiting for faster transportation!

Therefore, the first criterion for a successful interstellar mission is that it must return results within the lifetime of the principal investigator, or of the average colonist. This is very difficult, but still possible. To obtain results this quickly, the probe must be accelerated to a significant fraction of the speed of light, with resultant kinetic energies of the order of 4 x 10^15 joules per kilogram. Not surprisingly, the second criterion for a successful interstellar mission is cost-effective energy generation and an efficient means of converting raw energy into directed momentum. In this paper, several candidate propulsion systems theoretically capable of delivering probes to nearby star systems twenty-five to thirty-five years after launch are defined and sized for prospective missions using both current and near-term technologies.

Rockets have limited ΔV capability because they must carry their entire source of energy and propellant, so they are not credible candidates for interstellar travel. One might ask: why not use antimatter rockets? The answer is that we currently have no practical way to produce and contain antimatter in the quantities required.
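As a rough check on that energy figure, here is a minimal Python sketch (my own illustration, not from the paper) of the relativistic kinetic energy per kilogram at a given cruise speed:

```python
import math

C = 2.998e8  # speed of light, m/s

def kinetic_energy_per_kg(beta):
    """Relativistic kinetic energy per unit mass, (gamma - 1) * c^2, in J/kg."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return (gamma - 1.0) * C**2

for beta in (0.1, 0.2, 0.3):
    print(f"v = {beta:.1f} c  ->  {kinetic_energy_per_kg(beta):.2e} J/kg")

# At ~0.3 c this gives roughly 4e15 J/kg, matching the figure quoted above.
```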

Light Sails

Laser-driven lightsails are not rockets, since the power source remains behind and no propellant is expended. Therefore the rocket equation does not apply, and extremely high ΔVs are possible if adequate laser power can be focused on the lightsail for a sufficient acceleration period. The acceleration, A_sc, of a laser-propelled lightsail spacecraft, in meters per second squared, is:
A_sc = 2 P_L / (M_s c)
where P_L is the laser power impinging on the sail in watts, M_s is the mass of the spacecraft (sail and payload) in kilograms, and c is the speed of light in meters per second. In practical units, a perfectly reflecting lightsail experiences a force of 6.7 newtons for every gigawatt of incident laser power. Herein lies the problem, since extremely high power levels are required to accelerate even small probes at a few gravities.

The late Dr. Robert Forward, in his papers on interstellar lightsail missions, postulated a 7,200-gigawatt laser to accelerate his 785-ton unmanned probe and a 75,000,000-gigawatt laser to accelerate his 78,500-ton manned vehicle. To achieve velocities of 0.21 c and 0.5 c, respectively, the laser beam must be focused on the sail for literally years, at distances out to a couple of light years. In addition, the laser beam was to be used to decelerate the payload at the target star by staging the lightsail and using the outer annular portion as a mirror to reflect and direct most of the laser beam back onto the central portion of the lightsail, which does the decelerating. To enable this optical performance, a one-thousand-kilometer-diameter Fresnel lens would be placed fifteen Astronomical Units (AU) beyond the laser, and its position relative to the stabilized laser beam axis maintained to within a meter. If the laser beam axis is not stable over hours relative to the fixed background stars (drift < 10^-12 radians), or if the lens is not maintained within a fraction of a meter of the laser axis, the beam at the spacecraft will wander across the sail fast enough to destabilize the system. While this scenario is not physically impossible, it appears difficult enough to delay any serious consideration of the large-lens/long-focus approach to laser-propelled lightsails.

The alternative approach is to build really large solar-pumped or electrically powered lasers in the million-gigawatt range, which could accelerate a decent-sized spacecraft to thirty percent of the speed of light within a fraction of a light year using more achievable optics (e.g., a reflector 50 kilometers in diameter). Even though space construction projects of this magnitude must be termed highly speculative, the required technology is well understood, and laser-propelled lightsail (LPL) systems using dielectric quarter-wave lightsails could accelerate at twenty to thirty meters per second squared or more.
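As a quick sanity check (my own illustration, not part of the paper), the force and acceleration implied by the formula above can be computed directly:

```python
C = 2.998e8  # speed of light, m/s

def sail_force(power_w):
    """Photon pressure force on a perfectly reflecting sail: F = 2 P / c (newtons)."""
    return 2.0 * power_w / C

def sail_acceleration(power_w, mass_kg):
    """Acceleration A_sc = 2 P_L / (M_s c) in m/s^2."""
    return sail_force(power_w) / mass_kg

# Force per gigawatt of incident laser power (the 6.7 N/GW figure quoted above).
print(f"Force per GW: {sail_force(1e9):.2f} N")

# Forward's unmanned-probe example: 7,200 GW on a 785-ton spacecraft.
a = sail_acceleration(7200e9, 785e3)
print(f"Forward probe acceleration: {a:.3f} m/s^2")
print(f"Years to reach 0.21 c at that acceleration: {0.21 * C / a / 3.156e7:.0f}")
```

The roughly thirty-year boost time that falls out of Forward's numbers is consistent with the multi-year beam times described above.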

Laser Propelled Light Sail

The Magsail current loop carries no current during the laser boost; it is just a rotating coil of superconducting cable acting as ballast to balance the thrust forces on the dielectric quarter-wave reflector. After the coast phase, when the spacecraft approaches the target star system, the lightsail is jettisoned and the Magsail is allowed to uncoil to its full diameter (80 km for a 2,000 kg probe mission). It is then energized, either from an onboard reactor or from laser-illuminated photovoltaic panels, and begins its long deceleration. Example interstellar missions have been simulated using state-of-the-art optics designs, and the resulting LPL design characteristics are shown in the table below.
A constant beam power is chosen such that the spacecraft reaches the desired velocity just at the limit of acceleration with fifty-kilometer-diameter optics. Even though the high-powered LPL appears to meet all mission requirements, this paper explores alternative propulsion systems with the potential for significant reductions in power, size, cost, and complexity.
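To put those numbers in perspective, here is a small sketch (my own, assuming a 25 m/s^2 acceleration taken from the twenty-to-thirty figure above) of how long and how far the boost phase lasts for a sail accelerated to thirty percent of lightspeed:

```python
C = 2.998e8            # speed of light, m/s
LIGHT_YEAR = 9.461e15  # meters

def boost_phase(target_beta, accel):
    """Non-relativistic estimate of boost time (s) and distance (m) at constant acceleration."""
    v = target_beta * C
    t = v / accel
    d = v**2 / (2.0 * accel)
    return t, d

t, d = boost_phase(0.30, 25.0)  # 0.3 c at an assumed 25 m/s^2
print(f"Boost time:     {t / 86400:.0f} days")
print(f"Boost distance: {d / LIGHT_YEAR:.3f} light years")
# ~42 days and ~0.017 light years: well within "a fraction of a light year",
# which is what keeps the 50 km optics able to hold the beam on the sail.
```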

These data show that interstellar exploration is feasible, even with near-term technologies, if the right system is selected and enough resources are available. Therefore, once the technology for low-cost access to space is available, the primary risk to any organization embarking on a serious effort to develop interstellar exploration and transportation is affordability, not technical feasibility. The primary issue with respect to any of these systems actually being built is cost, both development cost and operating cost in the price of energy. As for manned exploration, one appealing idea is to ride on asteroids: colonize them, and use them to get ready for an interstellar mission!

Hyperluminal Travel Without Exotic Matter

Listen, terrestrial intelligent species: WeirdSciences is going to delve into a new idea for making interstellar travel feasible, and this time no negative energy is needed to propel the spacecraft. Using negative energy to build a warp drive is not a bad idea in itself, but it is worth refreshing your view of the problem.

By Eric Baird

Alcubierre’s 1994 paper on hyperfast travel has generated fresh interest in warp drives, but work on hyper-fast travel is often hampered by confusion over definitions — how do we define times and speeds over extended regions of spacetime where the geometry is not Euclidean? Faced with this problem it may seem natural to define a spaceship’s travel times according to round-trip observations made from the traveller’s point of origin, but this “round-trip” approach normally requires signals to travel in two opposing spatial directions through the same metric, and only gives us an unambiguous reading of apparent shortened journey-times if the signal speeds are enhanced in both directions along the signal path, a condition that seems to require a negative energy density in the region. Since hyper-fast travel only requires that the speed of light-signals be enhanced in the actual direction of travel, we argue that the precondition of bidirectionality (inherited from special relativity, and the apparent source of the negative energy requirement), is unnecessary, and perhaps misleading.

When considering warp-drive problems, it is useful to remind ourselves of what it is that we are trying to accomplish. To achieve hyper-fast package delivery between two physical markers, A (the point of origin)  and B (the destination), we require that a package moved from A to B:

a) . . . leaves A at an agreed time according to clocks at A,
b) . . . arrives at B as early as possible according to clocks at B, and, ideally,
c) . . . measures its own journey time to be as short as possible.

From a purely practical standpoint as “Superluminal Couriers Inc.”, we do not care how long the arrival event takes to be seen back at A, nor do we care whether the clocks at A and B appear to be properly synchronised during the delivery process. Our only task is to take a payload from A at a specified “A time” and deliver it to B at the earliest possible “B time”, preferably without the package ageing excessively en route. If we can collect the necessary local time-stamps on our delivery docket at the various stages of the journey, we have achieved our objective and can expect payment from our customer.

Existing approaches tend to add a fourth condition:

d) . . . that the arrival-event at B is seen to occur as soon as possible by an observer back at A.

This last condition is much more difficult to meet, but is arguably more important to our ability to define distant time-intervals than to the actual physical delivery process itself. It does not dictate which events may be intersected by the worldline of the travelling object, but can affect the coordinate labels that we choose to assign to those events using special relativity.

  • Who Asked Your Opinion?

If we introduce an appropriate anisotropy in the speed of light in the region occupied by our delivery path, a package can travel to its destination along the path faster than “nominal background lightspeed” without exceeding the local speed of light along the path. This allows us to meet conditions (a) and (b) above, but the same anisotropy causes an increased transit time for signals returning from B to A, so the “fast” outward journey can appear to take longer when viewed from A.
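A toy numerical example (my own, with made-up one-way speeds) makes the point: if the outward lightspeed along the path is raised while the return lightspeed is lowered, the package arrives early by B's clocks, yet the round-trip confirmation seen at A is delayed.

```python
# Toy illustration of an anisotropic lightspeed along the delivery path.
# The specific factors (2x outward, 0.5x return) are arbitrary assumptions.
L_PATH = 1.0  # path length in light-years

def one_way_time(distance_ly, speed_in_c):
    """Travel time in years for a signal or package moving at speed_in_c times c."""
    return distance_ly / speed_in_c

t_out_normal = one_way_time(L_PATH, 1.0)   # ordinary light, A -> B
t_out_warped = one_way_time(L_PATH, 2.0)   # enhanced outward speed
t_back_warped = one_way_time(L_PATH, 0.5)  # degraded return speed

print(f"Package reaches B after {t_out_warped:.2f} yr (vs {t_out_normal:.2f} yr for normal light)")
print(f"A sees confirmation after {t_out_warped + t_back_warped:.2f} yr "
      f"(vs {2 * t_out_normal:.2f} yr for a normal round trip)")
# B records an early arrival, while A's round-trip view of the delivery is slower than usual.
```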

This behaviour can be illustrated by the extreme example of a package being delivered to the interior of a black hole from a “normal” region of spacetime. When an object falls through a gravitational event horizon, current theory allows its supposed inward velocity to exceed the nominal speed of light in the external environment, and to actually tend towards v_INWARDS = ∞ as the object approaches a black hole’s central singularity. But the exterior observer, A, could argue that the delivery is not only proceeding more slowly than usual, but that the apparent delivery time is actually infinite, since the package is never actually seen (by A) to pass through the horizon.

Should A’s low perception of the speed of the infalling object indicate that hyperfast travel has not been achieved? In the author’s opinion, it should not — if the package has successfully been collected from A and delivered to B with the appropriate local timestamps indicating hyperfast travel, then A’s subsequent opinion on how long the delivery is seen to take (an observation affected by the properties of light in a direction other than that of the travelling package) would seem to be of secondary importance. In our “black hole” example, exotic matter or negative energy densities are not required unless we demand that an external observer should be able to see the package proceeding superluminally, in which case a negative gravitational effect would be needed to allow signals to pass back outwards through the r=2M surface to the observer at the despatch-point (without this return path, special relativity will tend to define the time of the unseen delivery event as being more-than-infinitely far into A’s future).

Negative energy density is required here only for appearances’ sake (and to make it easier for us to define the range of possible arrival-events that would imply that hyperfast travel has occurred), not for physical package delivery.

  • Hyperfast Return Journeys and Verification

It is all very well to be able to reach our destination in a small local time period, and to claim that we have travelled there at hyperfast speeds, but how do we convince others that our own short transit-time measurements are not simply due to time-dilation effects or to an “incorrect” concept of simultaneity? To convince observers at our destination, we only have to ask that they study their records for the observed behaviour of our point of origin — if the warpfield is applied to the entire journey-path (“Krasnikov tube configuration”), then the introduction and removal of the field will be associated with an increase and decrease in the rate at which signals from A arrive at B along the path (and will force special relativity to redefine the supposed simultaneity of events at B and A). If the warpfield only applies in the vicinity of the travelling package, other odd effects will be seen when the leading edge of the warpfield reaches the observer at B (the logical problems associated with the conflicting “lightspeeds” at the leading edge of a travelling warpfield wavefront have been highlighted by Low, and will be discussed in a further paper). Our initial definitions of the distances involved should of course be based on measurements taken outside the region of spacetime occupied by the warpfield.

A more convincing way of demonstrating hyper-fast travel would be to send a package from A to B and back again in a shorter period of “A-time” than would normally be required for a round-trip light-beam. We must be careful here not to let our initial mathematical definitions get in the way of our task — although we have supposed that the speed of light back towards A was slower while our warpdrive was operating on the outward journey, this artificially-reduced return speed does not have to also apply during our subsequent return trip, since we have the option of simply switching the warpdrive off, or better still, reversing its polarity for the journey home.

Although a single path allowing enhanced signal speeds in both directions at the same time would seem to require a negative energy-density, this feature is not necessary for a hyper-fast round trip — the outward and return paths can be separated in time (with the region having different gravitational properties during the outward and return trips) or in space (with different routes being taken for the outward and return journeys).

  • Caveats and Qualifications

Special relativity is built around the assumption of Euclidean space and the stipulation that lightspeed is isotropic, and neither of these assumptions is reliable for regions of spacetime that contain gravitational fields.

If we have a genuine lightspeed anisotropy that allows an object to move hyper-quickly between A and B, special relativity can respond by using the round-trip characteristics of light along the transit path to redefine the simultaneity of events at both locations, so that the “early” arrival event at B is redefined far enough into A’s future to guarantee a description in which the object is moving at less than c_BACKGROUND.

This retrospective redefinition of times easily leads to definitional inconsistencies in warpdrive problems. If a package is sent from A to B and back to A again, and each journey is “ultrafast” thanks to a convenient gravitational gradient for each trip, one could invoke special relativity to declare that each individual trip has a speed less than c_BACKGROUND, and then take the ultrafast arrival time of the package back at A as evidence that some form of reverse time travel has occurred, when in fact the apparent negative time component is an artifact of our repeated redefinition of the simultaneity of worldlines at A and B. Since it has been known for some time that similar definitional breakdowns in distant simultaneity can occur when an observer simply alters speed (the “one gee times one lightyear” limit quoted in MTW), these breakdowns should not be taken too seriously when they reappear in more complex “warpdrive” problems.

Olum’s suggested method for defining simultaneity and hyperfast travel (calibration via signals sent through neighbouring regions of effectively-flat spacetime) is not easily applied to our earlier black hole example, because of the lack of a reference-path that bypasses the gravitational gradient (unless we take a reference-path previous to the formation of the black hole), but warpdrive scenarios tend instead to involve higher-order gravitational effects (e.g. gradients caused by so-called “non-Newtonian” forces), and in these situations the concept of “relative height” in a gravitational field is often route-dependent (the concept “downhill” becomes a “local” rather than a “global” property, and gravitational rankings become intransitive). For this class of problem, Olum’s approach would seem to be the preferred method.

  • What’s the conclusion?

In order to be able to cross interstellar distances at enhanced speeds, we only require that the speed of light is greater in the direction in which we want to travel, in the region that we are travelling through, at the particular time that we are travelling through it. Although negative energy-densities would seem to be needed to increase the speed of light in both directions along the same path at the same time, this additional condition is only required for any hyperfast travel to be “obvious” to an observer at the origin point, which is a stronger condition than merely requiring that packages be delivered arbitrarily quickly. Hyperfast return journeys would also seem to be legal (along a pair of spatially separated or time-separated paths), as long as the associated energy-requirement is “paid for” somehow. Breakdowns in transitive logic and in the definitions used by special relativity already occur with some existing “legal” gravitational situations, and their reappearance in warpdrive problems is not in itself proof that these problems are paradoxical.

Arguments against negative energy-densities do not rule out paths that allow gravity-assisted travel at speeds greater than c_BACKGROUND, provided that we are careful not to apply the conventions of special relativity inappropriately. Such paths do occur readily under general relativity, although it has to be admitted that some of the more extreme examples have a tendency to lead to unpleasant regions (such as the interiors of black holes) that one would not normally want to visit.

[Ref: Miguel Alcubierre, “The warp drive: hyper-fast travel within general relativity,” Class. Quantum Grav. 11, L73-L77 (1994); Michael Szpir, “Spacetime hypersurfing,” American Scientist 82, 422-423 (Sept/Oct 1994); Robert L. Forward, “Guidelines to Antigravity,” American Journal of Physics 31 (3), 166-170 (1963).]

Negative Energy And Interstellar Travel

Can a region of space contain less than nothing? Common sense would say no; the most one could do is remove all matter and radiation and be left with vacuum. But quantum physics has a proven ability to confound intuition, and this case is no exception. A region of space, it turns out, can contain less than nothing. Its energy per unit volume–the energy density–can be less than zero.

Needless to say, the implications are bizarre. According to Einstein’s theory of gravity, general relativity, the presence of matter and energy warps the geometric fabric of space and time. What we perceive as gravity is the space-time distortion produced by normal, positive energy or mass. But when negative energy or mass–so-called exotic matter–bends space-time, all sorts of amazing phenomena might become possible: traversable wormholes, which could act as tunnels to otherwise distant parts of the universe; warp drive, which would allow for faster-than-light travel; and time machines, which might permit journeys into the past. Negative energy could even be used to make perpetual-motion machines or to destroy black holes. A Star Trek episode could not ask for more.

For physicists, these ramifications set off alarm bells. The potential paradoxes of backward time travel–such as killing your grandfather before your father is conceived–have long been explored in science fiction, and the other consequences of exotic matter are also problematic. They raise a question of fundamental importance: Do the laws of physics that permit negative energy place any limits on its behavior?

We and others have discovered that nature imposes stringent constraints on the magnitude and duration of negative energy, which (unfortunately, some would say) appear to render the construction of wormholes and warp drives very unlikely.

Double Negative

Before proceeding further, we should draw the reader’s attention to what negative energy is not.

It should not be confused with antimatter, which has positive energy. When an electron and its antiparticle, a positron, collide, they annihilate. The end products are gamma rays, which carry positive energy. If antiparticles were composed of negative energy, such an interaction would result in a final energy of zero.

One should also not confuse negative energy with the energy associated with the cosmological constant, postulated in inflationary models of the universe. Such a constant represents negative pressure but positive energy.

The concept of negative energy is not pure fantasy; some of its effects have even been produced in the laboratory. They arise from Heisenberg’s uncertainty principle, which requires that the energy density of any electric, magnetic or other field fluctuate randomly. Even when the energy density is zero on average, as in a vacuum, it fluctuates. Thus, the quantum vacuum can never remain empty in the classical sense of the term; it is a roiling sea of “virtual” particles spontaneously popping in and out of existence [see “Exploiting Zero-Point Energy,” by Philip Yam; SCIENTIFIC AMERICAN, December 1997]. In quantum theory, the usual notion of zero energy corresponds to the vacuum with all these fluctuations.

So if one can somehow contrive to dampen the undulations, the vacuum will have less energy than it normally does–that is, less than zero energy. [See: Casimir Starcraft: Zero Point Energy]
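As a rough illustration (mine, not from the article) of how fleeting these vacuum fluctuations are, the energy-time uncertainty relation bounds how long a virtual electron-positron pair can exist:

```python
# Rough lifetime of a virtual electron-positron pair from Delta_E * Delta_t ~ hbar/2.
HBAR = 1.055e-34      # J*s
M_E = 9.109e-31       # electron mass, kg
C = 2.998e8           # m/s

borrowed_energy = 2.0 * M_E * C**2          # rest energy of the pair, ~1.6e-13 J
lifetime = HBAR / (2.0 * borrowed_energy)   # order-of-magnitude bound on its existence

print(f"Borrowed energy:  {borrowed_energy:.2e} J")
print(f"Allowed lifetime: {lifetime:.1e} s")  # ~3e-22 s: gone almost as soon as it appears
```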

  • Negative Energy

Space-time distortion is a commonly proposed method for hyperluminal travel. Such space-time contortions would enable another staple of science fiction as well: faster-than-light travel. Warp drive might appear to violate Einstein’s special theory of relativity. But special relativity says that you cannot outrun a light signal in a fair race in which you and the signal follow the same route. When space-time is warped, it might be possible to beat a light signal by taking a different route, a shortcut. The contraction of space-time in front of the bubble and the expansion behind it create such a shortcut.

One problem with Alcubierre’s original model is that the interior of the warp bubble is causally disconnected from its forward edge. A starship captain on the inside cannot steer the bubble or turn it on or off; some external agency must set it up ahead of time. To get around this problem, Krasnikov proposed a “superluminal subway,” a tube of modified space-time (not the same as a wormhole) connecting Earth and a distant star. Within the tube, superluminal travel in one direction is possible. During the outbound journey at sublight speed, a spaceship crew would create such a tube. On the return journey, they could travel through it at warp speed. Like warp bubbles, the subway involves negative energy.

Negative energy is so strange that one might think it must violate some law of physics.

Before and after the creation of equal amounts of negative and positive energy in previously empty space, the total energy is zero, so the law of conservation of energy is obeyed. But there are many phenomena that conserve energy yet never occur in the real world. A broken glass does not reassemble itself, and heat does not spontaneously flow from a colder to a hotter body. Such effects are forbidden by the second law of thermodynamics.

This general principle states that the degree of disorder of a system–its entropy–cannot decrease on its own without an input of energy. Thus, a refrigerator, which pumps heat from its cold interior to the warmer outside room, requires an external power source. Similarly, the second law also forbids the complete conversion of heat into work.

Negative energy potentially conflicts with the second law. Imagine an exotic laser, which creates a steady outgoing beam of negative energy. Conservation of energy requires that a byproduct be a steady stream of positive energy. One could direct the negative energy beam off to some distant corner of the universe, while employing the positive energy to perform useful work. This seemingly inexhaustible energy supply could be used to make a perpetual-motion machine and thereby violate the second law. If the beam were directed at a glass of water, it could cool the water while using the extracted positive energy to power a small motor–providing a refrigerator with no need for external power. These problems arise not from the existence of negative energy per se but from the unrestricted separation of negative and positive energy.

Unfettered negative energy would also have profound consequences for black holes. When a black hole forms by the collapse of a dying star, general relativity predicts the formation of a singularity, a region where the gravitational field becomes infinitely strong. At this point, general relativity–and indeed all known laws of physics–are unable to say what happens next. This inability is a profound failure of the current mathematical description of nature. So long as the singularity is hidden within an event horizon, however, the damage is limited. The description of nature everywhere outside of the horizon is unaffected. For this reason, Roger Penrose of Oxford proposed the cosmic censorship hypothesis: there can be no naked singularities, which are unshielded by event horizons.

For special types of charged or rotating black holes– known as extreme black holes–even a small increase in charge or spin, or a decrease in mass, could in principle destroy the horizon and convert the hole into a naked singularity. Attempts to charge up or spin up these black holes using ordinary matter seem to fail for a variety of reasons. One might instead envision producing a decrease in mass by shining a beam of negative energy down the hole, without altering its charge or spin, thus subverting cosmic censorship. One might create such a beam, for example, using a moving mirror. In principle, it would require only a tiny amount of negative energy to produce a dramatic change in the state of an extreme black hole.

[Image Details: Pulses of negative energy are permitted by quantum theory but only under three conditions. First, the longer the pulse lasts, the weaker it must be (a, b). Second, a pulse of positive energy must follow. The magnitude of the positive pulse must exceed that of the initial negative one. Third, the longer the time interval between the two pulses, the larger the positive one must be – an effect known as quantum interest (c).]

Therefore, this might be the scenario in which negative energy is the most likely to produce macroscopic effects.

Fortunately (or not, depending on your point of view), although quantum theory allows the existence of negative energy, it also appears to place strong restrictions – known as quantum inequalities – on its magnitude and duration. The inequalities bear some resemblance to the uncertainty principle. They say that a beam of negative energy cannot be arbitrarily intense for an arbitrarily long time. The permissible magnitude of the negative energy is inversely related to its temporal or spatial extent. An intense pulse of negative energy can last for a short time; a weak pulse can last longer. Furthermore, an initial negative energy pulse must be followed by a larger pulse of positive energy. The larger the magnitude of the negative energy, the nearer must be its positive energy counterpart. These restrictions are independent of the details of how the negative energy is produced. One can think of negative energy as an energy loan. Just as a debt is negative money that has to be repaid, negative energy is an energy deficit.

In the Casimir effect, the negative energy density between the plates can persist indefinitely, but large negative energy densities require a very small plate separation. The magnitude of the negative energy density is inversely proportional to the fourth power of the plate separation. Just as a pulse with a very negative energy density is limited in time, very negative Casimir energy density must be confined between closely spaced plates. According to the quantum inequalities, the energy density in the gap can be made more negative than the Casimir value, but only temporarily. In effect, the more one tries to depress the energy density below the Casimir value, the shorter the time over which this situation can be maintained.
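To give a feel for the scaling quoted above, here is a short sketch (my own, using the standard ideal-plate Casimir result rather than anything specific to this article) of the vacuum energy density between two perfectly conducting plates:

```python
import math

HBAR = 1.055e-34  # J*s
C = 2.998e8       # m/s

def casimir_energy_density(gap_m):
    """Ideal-plate Casimir energy density, -pi^2 * hbar * c / (720 * d^4), in J/m^3."""
    return -math.pi**2 * HBAR * C / (720.0 * gap_m**4)

for gap_nm in (1000, 100, 10):
    rho = casimir_energy_density(gap_nm * 1e-9)
    print(f"gap = {gap_nm:5d} nm  ->  energy density = {rho:.2e} J/m^3")
# Halving the gap makes the (negative) energy density 16 times larger,
# i.e. it scales with the inverse fourth power of the plate separation.
```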

When applied to wormholes and warp drives, the quantum inequalities typically imply that such structures must either be limited to submicroscopic sizes, or if they are macroscopic the negative energy must be confined to incredibly thin bands. In 1996 we showed that a submicroscopic wormhole would have a throat radius of no more than about 10^-32 meter. This is only slightly larger than the Planck length, 10^-35 meter, the smallest distance that has definite meaning. We found that it is possible to have models of wormholes of macroscopic size but only at the price of confining the negative energy to an extremely thin band around the throat. For example, in one model a throat radius of 1 meter requires the negative energy to be a band no thicker than 10^-21 meter, a millionth the size of a proton.

It is estimated that the negative energy required for this size of wormhole has a magnitude equivalent to the total energy generated by 10 billion stars in one year. The situation does not improve much for larger wormholes. For the same model, the maximum allowed thickness of the negative energy band is proportional to the cube root of the throat radius. Even if the throat radius is increased to a size of one light-year, the negative energy must still be confined to a region smaller than a proton radius, and the total amount required increases linearly with the throat size.
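The cube-root scaling quoted above can be turned into numbers directly; this sketch (my own arithmetic, anchored to the 1-meter-throat example in the text) extrapolates the allowed thickness of the negative-energy band to a light-year-sized throat:

```python
LIGHT_YEAR_M = 9.461e15
PROTON_RADIUS_M = 8.4e-16   # approximate proton charge radius, for comparison

# From the model quoted above: a 1 m throat allows a band no thicker than ~1e-21 m,
# and the maximum band thickness scales as the cube root of the throat radius.
def max_band_thickness(throat_radius_m, ref_radius_m=1.0, ref_thickness_m=1e-21):
    return ref_thickness_m * (throat_radius_m / ref_radius_m) ** (1.0 / 3.0)

band = max_band_thickness(LIGHT_YEAR_M)
print(f"Band thickness for a 1 light-year throat: {band:.1e} m")
print(f"Proton radius for comparison:             {PROTON_RADIUS_M:.1e} m")
# Even for a light-year-sized throat the band stays below a proton radius,
# as claimed in the text.
```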

It seems that wormhole engineers face daunting problems. They must find a mechanism for confining large amounts of negative energy to extremely thin volumes. So-called cosmic strings, hypothesized in some cosmological theories, involve very large energy densities in long, narrow lines. But all known physically reasonable cosmic-string models have positive energy densities.

Warp drives are even more tightly constrained, as work carried out in collaboration with us has shown. In Alcubierre’s model, a warp bubble traveling at 10 times lightspeed (warp factor 2, in the parlance of Star Trek: The Next Generation) must have a wall thickness of no more than 10^-32 meter. A bubble large enough to enclose a starship 200 meters across would require a total amount of negative energy equal to 10 billion times the mass of the observable universe. Similar constraints apply to Krasnikov’s superluminal subway.

A modification of Alcubierre’s model was recently constructed by Chris Van Den Broeck of the Catholic University of Louvain in Belgium. It requires much less negative energy but places the starship in a curved space-time bottle whose neck is about 10^-32 meter across, a difficult feat. These results would seem to make it rather unlikely that one could construct wormholes and warp drives using negative energy generated by quantum effects.

The quantum inequalities prevent violations of the second law. If one tries to use a pulse of negative energy to cool a hot object, it will be quickly followed by a larger pulse of positive energy, which reheats the object. A weak pulse of negative energy could remain separated from its positive counterpart for a longer time, but its effects would be indistinguishable from normal thermal fluctuations. Attempts to capture or split off negative energy from positive energy also appear to fail. One might intercept an energy beam, say, by using a box with a shutter. By closing the shutter, one might hope to trap a pulse of negative energy before the offsetting positive energy arrives. But the very act of closing the shutter creates an energy flux that cancels out the negative energy it was designed to trap.

A pulse of negative energy injected into a charged black hole might momentarily destroy the horizon, exposing the singularity within. But the pulse must be followed by a pulse of positive energy, which would convert the naked singularity back into a black hole – a scenario we have dubbed cosmic flashing. The best chance to observe cosmic flashing would be to maximize the time separation between the negative and positive energy, allowing the naked singularity to last as long as possible. But then the magnitude of the negative energy pulse would have to be very small, according to the quantum inequalities. The change in the mass of the black hole caused by the negative energy pulse will get washed out by the normal quantum fluctuations in the hole’s mass, which are a natural consequence of the uncertainty principle. The view of the naked singularity would thus be blurred, so a distant observer could not unambiguously verify that cosmic censorship had been violated.

Recently it was shown that the quantum inequalities lead to even stronger bounds on negative energy. The positive pulse that necessarily follows an initial negative pulse must do more than compensate for the negative pulse; it must overcompensate. The amount of overcompensation increases with the time interval between the pulses. Therefore, the negative and positive pulses can never be made to exactly cancel each other. The positive energy must always dominate–an effect known as quantum interest. If negative energy is thought of as an energy loan, the loan must be repaid with interest. The longer the loan period or the larger the loan amount, the greater is the interest. Furthermore, the larger the loan, the smaller is the maximum allowed loan period. Nature is a shrewd banker and always calls in its debts.

The concept of negative energy touches on many areas of physics: gravitation, quantum theory, thermodynamics. The interweaving of so many different parts of physics illustrates the tight logical structure of the laws of nature. On the one hand, negative energy seems to be required to reconcile black holes with thermodynamics. On the other, quantum physics prevents unrestricted production of negative energy, which would violate the second law of thermodynamics. Whether these restrictions are also features of some deeper underlying theory, such as quantum gravity, remains to be seen. Nature no doubt has more surprises in store.

Key To Space Time Engineering: Huge Magnetic Field Created

Magnetic fields above 150 T have never been sustained in the laboratory, yet scientists have now made electrons in graphene behave as though they were in a field of 300 T. Fields on this scale would be a key ingredient for any future space-time engineering, although this particular effect is not yet something we can harness for that purpose. Graphene, the extraordinary form of carbon that consists of a single layer of carbon atoms, has produced another in a long list of experimental surprises. In the current issue of the journal Science, a multi-institutional team of researchers headed by Michael Crommie, a faculty senior scientist in the Materials Sciences Division at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory and a professor of physics at the University of California at Berkeley, reports the creation of pseudo-magnetic fields far stronger than the strongest magnetic fields ever sustained in a laboratory – just by putting the right kind of strain onto a patch of graphene.

“We have shown experimentally that when graphene is stretched to form nanobubbles on a platinum substrate, electrons behave as if they were subject to magnetic fields in excess of 300 tesla, even though no magnetic field has actually been applied,” says Crommie. “This is a completely new physical effect that has no counterpart in any other condensed matter system.”

Crommie notes that “for over 100 years people have been sticking materials into magnetic fields to see how the electrons behave, but it’s impossible to sustain tremendously strong magnetic fields in a laboratory setting.” The current record is 85 tesla for a field that lasts only thousandths of a second. When stronger fields are created, the magnets blow themselves apart.

The ability to make electrons behave as if they were in magnetic fields of 300 tesla or more – just by stretching graphene – offers a new window on a source of important applications and fundamental scientific discoveries going back over a century. This is made possible by graphene’s electronic behavior, which is unlike any other material’s.

[Image Details: In this scanning tunneling microscopy image of a graphene nanobubble, the hexagonal two-dimensional graphene crystal is seen distorted and stretched along three main axes. The strain creates pseudo-magnetic fields far stronger than any magnetic field ever produced in the laboratory. ]

A carbon atom has four valence electrons; in graphene (and in graphite, a stack of graphene layers), three electrons bond in a plane with their neighbors to form a strong hexagonal pattern, like chicken-wire. The fourth electron sticks up out of the plane and is free to hop from one atom to the next. The latter pi-bond electrons act as if they have no mass at all, like photons. They can move at almost one percent of the speed of light.

The idea that a deformation of graphene might lead to the appearance of a pseudo-magnetic field first arose even before graphene sheets had been isolated, in the context of carbon nanotubes (which are simply rolled-up graphene). In early 2010, theorist Francisco Guinea of the Institute of Materials Science of Madrid and his colleagues developed these ideas and predicted that if graphene could be stretched along its three main crystallographic directions, it would effectively act as though it were placed in a uniform magnetic field. This is because strain changes the bond lengths between atoms and affects the way electrons move between them. The pseudo-magnetic field would reveal itself through its effects on electron orbits.

In classical physics, electrons in a magnetic field travel in circles called cyclotron orbits. These were named following Ernest Lawrence’s invention of the cyclotron, because cyclotrons continuously accelerate charged particles (protons, in Lawrence’s case) in a curving path induced by a strong field.

Viewed quantum mechanically, however, cyclotron orbits become quantized and exhibit discrete energy levels. Called Landau levels, these correspond to energies where constructive interference occurs in an orbiting electron’s quantum wave function. The number of electrons occupying each Landau level depends on the strength of the field – the stronger the field, the greater the energy spacing between Landau levels, and the more electron states pile up at each level – which is a key feature of the predicted pseudo-magnetic fields in graphene.
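For a sense of the energy scales involved, here is a short sketch (my own, using the widely quoted graphene Landau-level formula E_n = sgn(n) v_F sqrt(2 e hbar B |n|) and a Fermi velocity of roughly 1e6 m/s, both assumptions not stated in the article) for a 300-tesla pseudo-field:

```python
import math

E_CHARGE = 1.602e-19  # C
HBAR = 1.055e-34      # J*s
V_FERMI = 1.0e6       # m/s, approximate Fermi velocity in graphene (assumed)

def landau_level_energy_ev(n, b_tesla):
    """Graphene Landau level n (relative to the Dirac point), in electron-volts."""
    sign = 1 if n >= 0 else -1
    e_joule = sign * V_FERMI * math.sqrt(2.0 * E_CHARGE * HBAR * b_tesla * abs(n))
    return e_joule / E_CHARGE

for n in range(4):
    print(f"n = {n}:  E = {landau_level_energy_ev(n, 300.0):.2f} eV")
# At 300 T the first few levels sit a sizable fraction of an eV apart,
# which is why they show up so clearly in scanning tunneling spectroscopy.
```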

A serendipitous discovery


[Image Details: A patch of graphene at the surface of a platinum substrate exhibits four triangular nanobubbles at its edges and one in the interior. Scanning tunneling spectroscopy taken at intervals across one nanobubble (inset) shows local electron densities clustering in peaks at discrete Landau-level energies. Pseudo-magnetic fields are strongest at regions of greatest curvature.]

Describing their experimental discovery, Crommie says, “We had the benefit of a remarkable stroke of serendipity.”

Crommie’s research group had been using a scanning tunneling microscope to study graphene monolayers grown on a platinum substrate. A scanning tunneling microscope works by using a sharp needle probe that skims along the surface of a material to measure minute changes in electrical current, revealing the density of electron states at each point in the scan while building an image of the surface.

Crommie was meeting with a visiting theorist from Boston University, Antonio Castro Neto, about a completely different topic when a group member came into his office with the latest data. It showed nanobubbles, little pyramid-like protrusions, in a patch of graphene on the platinum surface and associated with the graphene nanobubbles there were distinct peaks in the density of electron states. Crommie says his visitor, Castro Neto, took one look and said, “That looks like the Landau levels predicted for strained graphene.”

Sure enough, close examination of the triangular bubbles revealed that their chicken-wire lattice had been stretched precisely along the three axes needed to induce the strain orientation that Guinea and his coworkers had predicted would give rise to pseudo-magnetic fields. The greater the curvature of the bubbles, the greater the strain, and the greater the strength of the pseudo-magnetic field. The increased density of electron states revealed by scanning tunneling spectroscopy corresponded to Landau levels, in some cases indicating giant pseudo-magnetic fields of 300 tesla or more.

“Getting the right strain resulted from a combination of factors,” Crommie says. “To grow graphene on the platinum we had exposed the platinum to ethylene” – a simple compound of carbon and hydrogen – “and at high temperature the carbon atoms formed a sheet of graphene whose orientation was determined by the platinum’s lattice structure.”

To get the highest resolution from the scanning tunneling microscope, the system was then cooled to a few degrees above absolute zero. Both the graphene and the platinum contracted – but the platinum shrank more, with the result that excess graphene pushed up into bubbles, measuring four to 10 nanometers (billionths of a meter) across and from a third to more than two nanometers high. To confirm that the experimental observations were consistent with theoretical predictions, Castro Neto worked with Guinea to model a nanobubble typical of those found by the Crommie group. The resulting theoretical picture was a near-match to what the experimenters had observed: a strain-induced pseudo-magnetic field some 200 to 400 tesla strong in the regions of greatest strain, for nanobubbles of the correct size.

[Image Details: The colors of a theoretical model of a nanobubble (left) show that the pseudo-magnetic field is greatest where curvature, and thus strain, is greatest. In a graph of experimental observations (right), colors indicate height, not field strength, but measured field effects likewise correspond to regions of greatest strain and closely match the theoretical model.]

“Controlling where electrons live and how they move is an essential feature of all electronic devices,” says Crommie. “New types of control allow us to create new devices, and so our demonstration of strain engineering in graphene provides an entirely new way for mechanically controlling electronic structure in graphene. The effect is so strong that we could do it at room temperature.”

The opportunities for basic science with strain engineering are also huge. For example, in strong pseudo-magnetic fields electrons orbit in tight circles that bump up against one another, potentially leading to novel electron-electron interactions. Says Crommie, “this is the kind of physics that physicists love to explore.”

“Strain-induced pseudo-magnetic fields greater than 300 tesla in graphene nanobubbles,” by Niv Levy, Sarah Burke, Kacey Meaker, Melissa Panlasigui, Alex Zettl, Francisco Guinea, Antonio Castro Neto, and Michael Crommie, appears in the July 30 issue of Science. The work was supported by the Department of Energy’s Office of Science and by the Office of Naval Research. I’ve contacted Crommie for more details of the research and hope to hear back from him soon.

[Source: News Center]

Wormhole Engineering

By John Gribbin

There is still one problem with wormholes for any hyperspace engineers to take careful account of. The simplest calculations suggest that whatever may be going on in the universe outside, the attempted passage of a spaceship through the hole ought to make the star gate slam shut. The problem is that an accelerating object, according to the general theory of relativity, generates those ripples in the fabric of spacetime itself known as gravitational waves. Gravitational radiation itself, travelling ahead of the spaceship and into the black hole at the speed of light, could be amplified to infinite energy as it approaches the singularity inside the black hole, warping spacetime around itself and shutting the door on the advancing spaceship. Even if a natural traversable wormhole exists, it seems to be unstable to the slightest perturbation, including the disturbance caused by any attempt to pass through it.

But Thorne’s team found an answer to that for Sagan. After all, the wormholes in Contact are definitely not natural, they are engineered. One of his characters explains:

There is an interior tunnel in the exact Kerr solution of the Einstein Field Equations, but it’s unstable. The slightest perturbation would seal it off and convert the tunnel into a physical singularity through which nothing can pass. I have tried to imagine a superior civilization that would control the internal structure of a collapsing star to keep the interior tunnel stable. This is very difficult. The civilization would have to monitor and stabilize the tunnel forever.

But the point is that the trick, although it may be very difficult, is not impossible. It could operate by a process known as negative feedback, in which any disturbance in the spacetime structure of the wormhole creates another disturbance which cancels out the first disturbance. This is the opposite of the familiar positive feedback effect, which leads to a howl from loudspeakers if a microphone that is plugged in to those speakers through an amplifier is placed in front of them. In that case, the noise from the speakers goes into the microphone, gets amplified, comes out of the speakers louder than it was before, gets amplified . . . and so on. Imagine, instead, that the noise coming out of the speakers and into the microphone is analysed by a computer that then produces a sound wave with exactly the opposite characteristics from a second speaker. The two waves would cancel out, producing total silence.

For simple sound waves, this trick can actually be carried out, here on Earth, in the 1990s. Cancelling out more complex noise, like the roar of a football crowd, is not yet possible, but might very well be in a few years time. So it may not be completely farfetched to imagine Sagan’s “superior civilization” building a gravitational wave receiver/transmitter system that sits in the throat of a wormhole and can record the disturbances caused by the passage of the spaceship through the wormhole, “playing back” a set of gravitational waves that will exactly cancel out the disturbance, before it can destroy the tunnel.

But where do the wormholes come from in the first place? The way Morris, Yurtsever and Thorne set about the problem posed by Sagan was the opposite of the way everyone before them had thought about black holes. Instead of considering some sort of known object in the Universe, like a dead massive star, or a quasar, and trying to work out what would happen to it, they started out by constructing the mathematical description of a geometry that described a traversable wormhole, and then used the equations of the general theory of relativity to work out what kinds of matter and energy would be associated with such a spacetime. What they found is almost (with hindsight) common sense. Gravity, an attractive force pulling matter together, tends to create singularities and to pinch off the throat of a wormhole. The equations said that in order for an artificial wormhole to be held open, its throat must be threaded by some form of matter, or some form of field, that exerts negative pressure, and has antigravity associated with it.

Now, you might think, remembering your school physics, that this completely rules out the possibility of constructing traversable wormholes. Negative pressure is not something we encounter in everyday life (imagine blowing negative pressure stuff in to a balloon and seeing the balloon deflate as a result). Surely exotic matter cannot exist in the real Universe? But you may be wrong.

Making Antigravity

The key to antigravity was found by a Dutch physicist, Hendrik Casimir, as long ago as 1948. Casimir, who was born in The Hague in 1909, worked from 1942 onwards in the research laboratories of the electrical giant Philips, and it was while working there that he suggested what became known as the Casimir effect.

The simplest way to understand the Casimir effect is in terms of two parallel metal plates, placed very close together with nothing in between them (Figure 6). The quantum vacuum is not like the kind of “nothing” physicists imagined the vacuum to be before the quantum era. It seethes with activity, with particle-antiparticle pairs constantly being produced and annihilating one another. Among the particles popping in and out of existence in the quantum vacuum there will be many photons, the particles which carry the electromagnetic force, some of which are the particles of light. Indeed, it is particularly easy for the vacuum to produce virtual photons, partly because a photon is its own antiparticle, and partly because photons have no “rest mass” to worry about, so all the energy that has to be borrowed from quantum uncertainty is the energy of the wave associated with the particular photon. Photons with different energies are associated with electromagnetic waves of different wavelengths, with shorter wavelengths corresponding to greater energy; so another way to think of this electromagnetic aspect of the quantum vacuum is that empty space is filled with an ephemeral sea of electromagnetic waves, with all wavelengths represented.

This irreducible vacuum activity gives the vacuum an energy, but this energy is the same everywhere, and so it cannot be detected or used. Energy can only be used to do work, and thereby make its presence known, if there is a difference in energy from one place to another.

Between two electrically conducting plates, Casimir pointed out, electromagnetic waves would only be able to form certain stable patterns. Waves bouncing around between the two plates would behave like the waves on a plucked guitar string. Such a string can only vibrate in certain ways, to make certain notes — ones for which the vibrations of the string fit the length of the string in such a way that there are no vibrations at the fixed ends of the string. The allowed vibrations are the fundamental note for a particular length of string, and its harmonics, or overtones. In the same way, only certain wavelengths of radiation can fit into the gap between the two plates of a Casimir experiment. In particular, no photon corresponding to a wavelength greater than the separation between the plates can fit into the gap. This means that some of the activity of the vacuum is suppressed in the gap between the plates, while the usual activity goes on outside. The result is that in each cubic centimetre of space there are fewer virtual photons bouncing around between the plates than there are outside, and so the plates feel a force pushing them together. It may sound bizarre, but it is real. Several experiments have been carried out to measure the strength of the Casimir force between two plates, using both flat and curved plates made of various kinds of material. The force has been measured for a range of plate gaps from 1.4 nanometers to 15 nanometers (one nanometer is one billionth of a metre) and exactly matches Casimir’s prediction.
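As an illustration of the size of this force (my own sketch, using the standard ideal-plate formula F/A = pi^2 hbar c / (240 d^4), which is not spelled out in the text), here is the attractive Casimir pressure for a few plate separations:

```python
import math

HBAR = 1.055e-34  # J*s
C = 2.998e8       # m/s

def casimir_pressure(gap_m):
    """Attractive pressure between ideal parallel plates, pi^2*hbar*c/(240*d^4), in Pa."""
    return math.pi**2 * HBAR * C / (240.0 * gap_m**4)

for gap_nm in (10, 100, 1000):
    p = casimir_pressure(gap_nm * 1e-9)
    print(f"gap = {gap_nm:5d} nm  ->  pressure = {p:.2e} Pa")
# At a 10 nm gap the plates are squeezed together with ~1.3e5 Pa (about an atmosphere);
# at micron-scale gaps the force is tiny, which is why such careful experiments are needed.
```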

In a paper they published in 1987, Morris and Thorne drew attention to such possibilities, and also pointed out that even a straightforward electric or magnetic field threading the wormhole “is right on the borderline of being exotic; if its tension were infinitesimally larger . . . it would satisfy our wormhole-building needs.” In the same paper, they concluded that “one should not blithely assume the impossibility of the exotic material that is required for the throat of a traversable wormhole.” The two CalTech researchers make the important point that most physicists suffer a failure of imagination when it comes to considering the equations that describe matter and energy under conditions far more extreme than those we encounter here on Earth. They highlight this by the example of a course for beginners in general relativity, taught at CalTech in the autumn of 1985, after the first phase of work stimulated by Sagan’s enquiry, but before any of this was common knowledge, even among relativists. The students involved were not taught anything specific about wormholes, but they were taught to explore the physical meaning of spacetime metrics. In their exam, they were set a question which led them, step by step, through the mathematical description of the metric corresponding to a wormhole. “It was startling,” said Morris and Thorne, “to see how hidebound were the students’ imaginations. Most could decipher detailed properties of the metric, but very few actually recognised that it represents a traversable wormhole connecting two different universes.”

For those with less hidebound imaginations, there are two remaining problems — to find a way to make a wormhole large enough for people (and spaceships) to travel through, and to keep the exotic matter out of contact with any such spacefarers. Any prospect of building such a device is far beyond our present capabilities. But, as Morris and Thorne stress, it is not impossible and “we correspondingly cannot now rule out traversable wormholes.”

It seems to me that there’s an analogy here that sets the work of such dreamers as Thorne and Visser in a context that is both helpful and intriguing. Almost exactly 500 years ago, Leonardo da Vinci speculated about the possibility of flying machines. He designed both helicopters and aircraft with wings, and modern aeronautical engineers say that aircraft built to his designs probably could have flown if Leonardo had had modern engines with which to power them — even though there was no way in which any engineer of his time could have constructed a powered flying machine capable of carrying a human up into the air. Leonardo could not even dream about the possibilities of jet engines and routine passenger flights at supersonic speeds. Yet Concorde and the jumbo jets operate on the same basic physical principles as the flying machines he designed. In just half a millennium, all his wildest dreams have not only come true, but been surpassed. It might take even more than half a millennium for designs for a traversable wormhole to leave the drawing board; but the laws of physics say that it is possible — and as Sagan speculates, something like it may already have been done by a civilization more advanced than our own.

New Silicon Nanowires Could Make Photovoltaic Devices More Efficient

The future energy crisis is not a single problem; it bundles together several challenges that have not yet been solved. This article will not cover all of them, but it does suggest one solution based on recent research. A large part of the problem would be solved if we could make solar cells more efficient. What we ultimately need as an energy source is electricity, no matter how it is generated: from nuclear plants, from hydropower, or from solar cells. Solar cells are a potential source of clean, renewable energy, and photovoltaic (PV) devices are an excellent candidate for our future energy supply.

Although early photovoltaic (PV) cells and modules were used in space and other off-grid applications where their value is high, currently about 70% of PV is grid-connected, which exposes it to major cost pressure from conventional sources of electricity. Yet the potential benefits of its large-scale use are enormous, and PV now appears to be meeting the challenge with annual growth rates above 30% for the past five years.

More than 90% of PV is currently made of Si modules assembled from small 4-12 inch crystalline or multicrystalline wafers which, like most electronics, can be individually tested before assembly into modules. However, the newer thin-film technologies are monolithically integrated devices approximately 1 m^2 in size which cannot have even occasional shunts or weak diodes without ruining the manufacturing yield. Thus, these devices require the deposition of many thin semiconducting layers on glass, stainless steel or polymer, and all layers must function well a square meter at a time or the device fails. This is the challenge of PV technology: high efficiency, high uniformity, and high yield over large areas to form devices that can operate with repeated temperature cycles from -40 C to 100 C with a provable twenty-year lifetime and a cost of less than a penny per square centimeter.

[Image Details: Typical construction of solar cell]

Solar cells work and they last. The first cell made at Bell Labs in 1954 is still functioning. Solar cells continue to play a major role in the success of space exploration – witness the Mars rovers. Today’s commercial solar panels, whether of crystalline Si, thin amorphous, or polycrystalline films, are typically guaranteed for 20 years – unheard-of reliability for a consumer product. However, PV still accounts for less than 10^-5 of total energy usage world-wide. In the US, electricity produced by PV costs upwards of $0.25/kW-hr, whereas the cost of electricity production by coal is less than $0.04/kW-hr.

It seems fair to ask what limits the performance of solar modules, and whether there is any hope of ever reaching cost-competitive generation of PV electricity.

The photogeneration of a tiny amount of current was first observed by Adams and Day in 1877 in selenium. However, it was not until 1954 that Chapin, Fuller and Pearson at Bell Labs obtained significant power generation from a Si cell. Their first cells used a thick lithium-doped n-layer on p-Si, but efficiencies rose well above 5% with a very thin phosphorus-doped n-Si layer at the top.

The traditional Si solar cell is a homojunction device. The first image sketches the typical construction of the semiconductor part of a Si cell. It might have a p-type base with an acceptor (typically boron or aluminum) doping level of N_A = 1 x 10^15 cm^-3 and a diffused n-type window/emitter layer with N_D = 1 x 10^20 cm^-3 (typically phosphorus). The Fermi level of the n-type (p-type) side will be near the conduction (valence) band edge, so donor-released electrons will diffuse into the p-type side to occupy lower energy states there, until the exposed space charge (ionized donors in the n-type region, and ionized acceptors in the p-type) produces a field large enough to prevent further diffusion. Often a very heavily doped region is used at the back contact to produce a back surface field (BSF) that aids in hole collection and rejects electrons.

In the ideal case of constant doping densities in the two regions, the depletion width, W, is readily calculated from the Poisson equation, and lies mostly in the lightly doped p-type region. The electric field has its maximum at the metallurgical interface between the n- and p-type regions and typically reaches 10^4 V/cm or more. Such fields are extremely effective at separating the photogenerated electron-hole pairs. Silicon with its indirect gap has relatively weak light absorption, requiring about 10-20 μm of material to absorb most of the above-band-gap, near-infrared and red light. (Direct-gap materials such as GaAs, CdTe, Cu(InGa)Se2 and even a-Si:H need only ~0.5 μm or less for full optical absorption.) The weak absorption in crystalline Si means that a significant fraction of the above-band-gap photons will generate carriers in the neutral region, where the minority carrier lifetime must be very long to allow for long diffusion lengths. By contrast, carrier generation in the direct-gap semiconductors can be entirely in the depletion region, where collection is through field-assisted drift.
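To get a feel for these numbers, here is a minimal Python sketch using the standard abrupt-junction textbook formulas and the doping levels quoted above (the formulas and constants are assumptions for illustration, not taken from the article):

```python
import numpy as np

q = 1.602e-19                  # elementary charge, C
kT_over_q = 0.0259             # thermal voltage at 300 K, V
eps_si = 11.7 * 8.854e-12      # permittivity of silicon, F/m
n_i = 1.0e10 * 1e6             # intrinsic carrier density of Si at 300 K, m^-3

# Doping levels quoted in the text
N_A = 1e15 * 1e6               # acceptors in the p-type base, m^-3
N_D = 1e20 * 1e6               # donors in the n-type emitter, m^-3

# Built-in potential of an abrupt p-n junction
V_bi = kT_over_q * np.log(N_A * N_D / n_i**2)                     # volts

# Depletion width; it falls almost entirely on the lightly doped p-type side
W = np.sqrt(2.0 * eps_si * V_bi / q * (1.0 / N_A + 1.0 / N_D))    # metres

# Peak field at the metallurgical junction
E_max = 2.0 * V_bi / W                                            # V/m

print(f"built-in potential ~ {V_bi:.2f} V")
print(f"depletion width    ~ {W * 1e6:.2f} um")
print(f"peak field         ~ {E_max * 1e-2:.1e} V/cm")   # ~2e4 V/cm, as quoted
```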

A high-quality Si cell might have a hole lifetime in the heavily doped n-type region of τ_p = 1 μs and a corresponding diffusion length of L_p = 12 μm, whereas in the more lightly doped p-type region the minority carriers might have τ_n = 350 μs and L_n = 1100 μm. Often few carriers are collected from the heavily doped n-type region, so strongly absorbed blue light does not generate much photocurrent. Usually the n-type emitter layer is therefore kept very thin. The long diffusion length of electrons in the p-type region is a consequence of the long electron lifetime due to low doping and of the higher mobility of electrons compared with holes. This is typical of most semiconductors, so the most common solar cell configuration is an “n-on-p” with the p-type semiconductor serving as the “absorber.”
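The quoted diffusion lengths follow from L = sqrt(D·τ), with the diffusivity D obtained from the Einstein relation D = (kT/q)·μ. A small sketch; the mobility values below are assumed numbers (not given in the text) chosen as plausible for doped silicon:

```python
import numpy as np

kT_over_q = 0.0259                       # thermal voltage at 300 K, V

def diffusion_length_cm(mobility_cm2_Vs, lifetime_s):
    """L = sqrt(D * tau), with D from the Einstein relation D = (kT/q) * mu."""
    D = kT_over_q * mobility_cm2_Vs      # diffusivity, cm^2/s
    return np.sqrt(D * lifetime_s)       # cm

# Assumed minority-carrier mobilities: hole mobility is strongly reduced in
# the degenerately doped n+ emitter, electron mobility stays near its lightly
# doped value in the p-type base.
mu_p, mu_n = 55.0, 1350.0                # cm^2 / (V s)

L_p = diffusion_length_cm(mu_p, 1e-6)    # tau_p = 1 us   -> ~12 um
L_n = diffusion_length_cm(mu_n, 350e-6)  # tau_n = 350 us -> ~1100 um

print(f"L_p ~ {L_p * 1e4:.0f} um,  L_n ~ {L_n * 1e4:.0f} um")
```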

[Image Details: Triple junction cell]

The current generated by an ideal, single-junction solar cell is given by the integral of the solar photon flux above the semiconductor band gap, and the voltage is approximately 2/3 of the band gap. Note that any excess photon energy above the band edge will be lost as the kinetic energy of the electron-hole pair relaxes to thermal motion in picoseconds and simply heats the cell. Thus single-junction solar cell efficiency is limited to about 30% for band gaps near 1.5 eV. Si cells have reached 24%.
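A crude way to see where a limit of this order comes from is to approximate the Sun as a 5800 K blackbody and assume every photon above the gap delivers exactly the band-gap energy. The sketch below (an assumption-laden back-of-the-envelope estimate, not the detailed-balance calculation) computes this “ultimate efficiency”; it ignores the voltage factor of ~2/3 of the gap noted above and all recombination, which is why it lands above the ~30% figure:

```python
import numpy as np

k_B = 1.381e-23                 # Boltzmann constant, J/K
eV = 1.602e-19                  # joules per electronvolt
T_SUN = 5800.0                  # effective solar surface temperature, K

def ultimate_efficiency(E_gap_eV, T=T_SUN):
    """Fraction of blackbody power delivered if every photon above the gap
    contributes exactly E_gap and everything else is lost as heat."""
    E = np.linspace(0.01, 8.0, 4000) * eV           # photon energies, J
    dE = E[1] - E[0]
    flux = E**2 / (np.exp(E / (k_B * T)) - 1.0)     # photon flux per unit energy
    power_in = np.sum(E * flux) * dE                # total incident power
    above = E >= E_gap_eV * eV
    power_out = E_gap_eV * eV * np.sum(flux[above]) * dE
    return power_out / power_in

for gap in (1.1, 1.5, 2.0):
    print(f"E_gap = {gap:.1f} eV -> ultimate efficiency ~ {ultimate_efficiency(gap):.0%}")
# -> roughly 44%, 40% and 30%; the real limit is lower because the cell
#    voltage is only ~2/3 of the gap and because of recombination losses.
```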

This limit can be exceeded by multijunction devices, since the high photon energies are absorbed in wider-band-gap component cells. In fact the III-V materials, benefiting like Si from research for electronics, have been very successful at achieving very high efficiency, reaching 35% with the monolithically stacked three-junction, two-terminal structure GaInP/GaAs/Ge. These tandem devices must be critically engineered to have exactly matched current generation in each of the junctions and are therefore sensitive to changes in the solar spectrum during the day. A sketch of a triple-junction, two-terminal amorphous silicon cell is shown in the second image.

Polycrystalline and amorphous thin-film cells use inexpensive glass, metal foil or polymer substrates to reduce cost. The polycrystalline thin film structures utilize direct-gap semiconductors for high absorption while amorphous Si capitalizes on the disorder to enhance absorption and hydrogen to passivate dangling bonds. It is quite amazing that these very defective thin-film materials can still yield high carrier collection efficiencies. Partly this comes from the field-assisted collection and partly from clever passivation of defects and manipulation of grain boundaries. In some cases we are just lucky that nature provides benign or even helpful grain boundaries in materials such as CdTe and Cu(InGa)Se2, although we seem not so fortunate with GaAs grain boundaries. It is now commonly accepted that, not only are grain boundaries effectively passivated during the thin-film growth process or by post-growth treatments, but also that grain boundaries can actually serve as collection channels for carriers. In fact it is not uncommon to find that the polycrystalline thin-film devices outperform their single-crystal counterpart.

In the past two decades there has been remarkable progress in the performance of small, laboratory cells. The common Si device has benefited from light-trapping techniques, back-surface fields, and innovative contact designs; III-V multijunctions from high-quality epitaxial techniques; a-Si:H from thin, multiple-junction designs that minimize the effects of dangling bonds (the Staebler-Wronski effect); and polycrystalline thin films from innovations in low-cost growth methods and post-growth treatments.

However, these types of cells can’t be used for commercial production of electricity, since such III-V heterostructures are not cost efficient. As the research paper proposes, silicon nanowires could be a direct replacement for these exotic structures and, using the same confinement principles as quantum dots, may prove cost efficient as well. Nanowires are direct-band-gap materials whose gap can be tuned anywhere between about 2 and 5 eV, providing wavelength selectivity between roughly 250 and 620 nm (E = hc/λ).

The research paper considers nanowires oriented along the [100] direction, in a square array, with the equilibrium lattice relaxed to a unit cell with a Si-Si bond length of 2.324 Å and a Si-H bond length of 1.5 Å. The nanowires are direct-band-gap semiconductors, which makes them excellent for optical absorption. All the nanowires examined in the paper show features in the absorption spectrum that correspond to excitonic processes. The lowest excitonic peaks occur at 5.25 eV (232 nm), 3.7 eV (335 nm) and 2.3 eV (539 nm), in order of increasing wire size. The corresponding optical wavelengths are shown in the table adapted from the paper. Absorption is tunable from the visible region to the near-UV portion of the solar spectrum.
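As a quick sanity check on the quoted numbers, E = hc/λ with hc ≈ 1240 eV·nm converts the excitonic peak energies into wavelengths (a trivial sketch; two of the three reproduce the quoted values exactly, the 5.25 eV peak comes out a few nanometres above the quoted 232 nm):

```python
HC_EV_NM = 1239.84   # h * c in eV·nm

# Lowest excitonic peaks quoted in the paper, in eV
for energy_eV in (5.25, 3.7, 2.3):
    print(f"E = {energy_eV:4.2f} eV  ->  lambda = {HC_EV_NM / energy_eV:5.0f} nm")
# -> ~236 nm, ~335 nm and ~539 nm
```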

[Image Details: Table of the lowest excitonic peak energies and corresponding wavelengths, adapted from the paper]

Such silicon nanowires look excellent and could bring solar cell costs down significantly. See the research analysis.

These laboratory successes are now being translated into success in the manufacturing of large PV panel sizes in large quantities and with high yield. The worldwide PV market has been growing at 30% to 40% annually for the past five years due partly to market incentives in Japan and Germany, and more recently in some U.S. states.

Recent analyses of the energy payback time for solar systems show that today’s systems pay back the energy used in manufacturing in about 3.5 years for silicon and 2.5 years for thin films.

The decline of manufacturing costs follows an 80% experience curve nicely: costs drop 20% for each doubling of cumulative production. The newly updated PV Roadmap (http://www.seia.org) envisions PV electricity costing $0.06/kW-hr by 2015 and half of new U.S. electricity generation being produced by PV by 2025, if some modest and temporary nationwide incentives are enacted. Given the rate of progress in the laboratory and innovations on the production line, this ambitious goal might just be achievable. PV can then begin to play a significant role in reducing greenhouse gas emissions and in improving energy security. Some analysts see PV as the only energy resource that has the potential to supply enough clean power for a sustainable energy future for the world.
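A minimal sketch of the experience-curve arithmetic; the starting cost and cumulative-production figures below are hypothetical, chosen only to illustrate the 20%-per-doubling rule:

```python
import math

def experience_curve_cost(cost_0, cum_prod_0, cum_prod, progress_ratio=0.8):
    """Cost after cumulative production grows from cum_prod_0 to cum_prod,
    dropping 20% (progress ratio 0.8) for every doubling."""
    doublings = math.log2(cum_prod / cum_prod_0)
    return cost_0 * progress_ratio ** doublings

# Hypothetical illustration: start at $4/W with 1 GW of cumulative production
for cum_gw in (1, 2, 4, 8, 16, 32):
    cost = experience_curve_cost(4.0, 1.0, cum_gw)
    print(f"{cum_gw:3d} GW cumulative -> ${cost:.2f}/W")
```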

[Ref: APS and Optical Absorption Characteristics of Silicon Nanowires for Photovoltaic Applications by Vidur Prakash]

Hyperluminal Spaceship, Tachyons and Time Travel

Einstein’s theory of relativity suggests that nothing can travel faster than light. Tachyons – hypothetical particles with imaginary rest mass – would always travel at superluminal speed, and in some descriptions time runs backward for them. What if a spaceship were travelling faster than light? Would time run backward for that spaceship? These are the questions addressed here by Earnst L. Wall.

By Earnst L Wall

To depart somewhat from a pure state machine argument for a moment, we will consider a more general discussion of the argument that an object that moves faster than the speed of light would experience time reversal. For example, the space ship Enterprise, in moving away from Earth at hyperluminal velocities, would overtake the light that was emitted by events that occurred while it was still on Earth. It would then see the events unfold in reverse time order as it progressed on its path. This phenomenon would be, in effect, a review of the record of a portion of the Earth’s history, in the same manner that one views a sequence of events on a VCR as the tape is run backwards. But this does not mean that the hyperluminal spacecraft or the universe is actually going backwards in time, any more than a viewer watching the VCR running in reverse is moving backwards in time.

Further, it must be asked what would happen to the universe itself under these circumstances.  To illustrate this, suppose a colony were established on Neptune.  Knowing the distance to Neptune, it would be trivial, even with today’s technology, to synchronize the clocks on Earth and Neptune so that they kept the same absolute time to within microseconds or better.  Next, suppose that the Enterprise left Earth at a hyperluminal velocity for a trip to Neptune.  When the crew and passengers of the Enterprise arrive at Neptune, say 3 minutes later in Earth time, it is unlikely that the clocks on Neptune would be particularly awed or even impressed by the arrival of the travelers. When the Enterprise arrives at Neptune, it would get there 3 minutes later in terms of the time as measured on both Neptune and Earth, regardless of how long its internal clocks indicated that the trip was.  Neither the Enterprise nor its passengers would have moved backwards in time as measured on earth or Neptune.

The hands of a clock inside the Enterprise, as simulated by a state machine, would not be compelled to reverse themselves just because it is moving at a hyperluminal velocity.  This is because the universal state machine is still increasing its time count, not reversing it.  Nor would any molecule that is not in, or near the trajectory of the space ship, be affected insofar as time is concerned, provided it does not actually collide with the space ship.

In the scheme above, reverse time travel will not occur merely because an object is traveling at hyperluminal velocities. Depending on the details of the simulation, hyperluminal travel may cause the local time sequencing to slow down, but a simulated, aging movie queen who is traveling in a hyperluminal spacecraft will not regain her lost youth. Simulated infants will not reenter their mothers’ wombs. Simulated dinosaurs will not be made to reappear. A simulated hyperluminal spacecraft cannot go back in time, retrieve objects, and bring them back to the present. Nor would any of the objects in the real universe go backward in time as a result of the passage of the hyperluminal spacecraft.

Neither the hyperluminal transmission of information or signals from point to point, nor the travel of objects at hyperluminal velocities from point to point, causes a change in the direction of the time count at the point of departure, at the point of arrival, or at any point in between.

Based on concepts derived from modern computer science, we have developed a new method of studying the flow of time. It is different from the classical statistical mechanical method of viewing continuous time flow in that we have described a hypothetical simulation of the universe by means of a gigantic digital state machine implemented in a gigantic computer. This machine has the capability of mirroring the general non-deterministic, microscopic behavior of the real universe.

Based on these concepts, we have developed a new definition of absolute time as a measure of the count of discrete states of the universe that occurred from the beginning of the universe to some later time that might be under consideration.   In the real universe, we would use a high energy gamma ray as a clock to time the states, these states being determined by regular measurements of an object’s parameters by analog-to-digital samples taken at the clock frequency.

And based on this definition of time, it is clear that, without the physical universe to regularly change state, time has no meaning whatsoever. That is, matter in the physical universe is necessary for time to exist. In empty space, or an eternal void, time would have utterly no meaning.

This definition of time and its use in the simulation has permitted us to explore the nature of time flow in a statistical, non-determinate universe. This exploration included a consideration of the possibility of reverse time travel. But by using the concept of a digital state machine as the basis of a thought experiment, we show clearly that to move backward in time, you would have to reverse the state count on the universal clock, which would have the effect of reversing the velocity of the objects. But this velocity includes not only the velocity of the individual objects, but the composite velocities of all objects composing a macroscopic body. As a result, this macroscopic body would also reverse its velocity, provided the state was specified with sufficient precision.

But if you merely counted backward and obtained a reversal of motion, at best you could only move back to some probable past because of the indeterminate nature of the process. You could not go back to some exact point in the past that is exactly the way it was. In fact, after a short time, the process would become so random that there would be no real visit to the past. A traveler would be unable to determine whether he was going back in time or forward in time. Entropy would continue to increase.
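The point can be illustrated with a toy “state machine” in Python: run a swarm of particles forward with random kicks, then “rewind” by reversing every velocity and counting the same number of states back. Because the indeterminate kicks on the return leg are new random numbers rather than a replay of the old ones, the system does not return to its starting configuration. This is only an illustrative sketch of the thought experiment, not a simulation of real physics:

```python
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_steps, kick = 1000, 500, 0.1

x = np.zeros(n_particles)            # the "past" state we would like to revisit
v = rng.normal(size=n_particles)

# Count the universal state machine forward: drift plus indeterminate kicks
for _ in range(n_steps):
    v += kick * rng.normal(size=n_particles)
    x += v

# "Rewind": reverse every velocity and count the same number of states back.
# The kicks on the way back are new random numbers, not the old ones replayed.
v = -v
for _ in range(n_steps):
    v += kick * rng.normal(size=n_particles)
    x += v

print("rms distance from the true past state:", np.sqrt(np.mean(x**2)))
# Far from zero: counting backwards reaches only a *probable* past, never the
# exact state the system actually occupied.
```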

But doing even this in the real universe, of course, would present a problem because you would need naturally occurring, synchronized, discrete states (outside of quantized states, which are random and not universally synchronized).  You would need to be able to control a universal clock that counts these transitions, and further, cause it to go back to previous states simultaneously over the entire universe.   Modern physics has not found evidence of naturally occurring universal synchronized states, nor such an object as a naturally occurring clock that controls them.  And even if the clock were found, causing the clock to reverse the state transition sequence would be rather difficult.

Without these capabilities, it would seem impossible to envision time reversal by means of rewinding the universe.  This would not seem to be a possibility even in a microscopic portion of the universe, let alone time reversal over the entire universe.

But aside from those difficulties, if you wished to go back to an exact point in the past, the randomness of time travel by rewind requires an alternative to rewinding the universe. This is true both for the simulated universe and for a hypothetical rewind of the real universe. Therefore, the only way to visit an exact point in the past is to have a record of the entire past set of states of the universe, from the point in the past that you wish to visit onward to the present. This record must be stored somewhere, and a means of accessing this record, visiting it, becoming assimilated in it, and then allowing time to move forward from there must be available. And, while all of this is happening in the past, the traveler’s departure point at the present state count, or time, must move forward in time while the traveler takes his journey.

Even jumping back in time because of a wormhole transit would require that a record of the past be stored somewhere.  And, of course, the wormhole would need the technology to access these records, to place the traveler into the record and then to allow him to be assimilated there.  This would seem to be a rather difficult problem.

This, then, is the problem with time travel to an exact point in the past in the real universe. Where would the records be stored? How would you access them in order just to read them? And even more difficult, how would you be able to enter this record of the universe, become assimilated into this time period, and then have your body begin to move forward in time? At a very minimum, our time traveler would have to have answers to these questions.

Still another conundrum is how the copy of the past universe would merge with the real universe at the traveler’s point of departure.  And then, if he had caused any changes that affected his departure point, they would have to be incorporated into that part of the universal record that is the future from his point of departure, and these changes would then have to be propagated forward to the real universe itself and incorporated into it.  This is assuming that the record is separate from the universe itself.

But if this hypothetical record of the universe were part of the universe itself,  or even the universe itself, then that would imply that all states of the entire universe, past, present, and future, exist in that record. This would further imply that we, as macroscopic objects in the universe, have no free will and are merely stepped along from state to state, and are condemned to carry out actions that we have no control over whatsoever.

In such a universe, if our traveler had access to the record, he might be able to travel in time. But if he were able to alter the record and affect the subsequent flow of time, he would have to have free will, which would seem to contradict the condition described above. We are obviously presented with endless recursive sequences that defy rationality in all of the above.

This is all interesting philosophy, but it seems to be improbable physics.

Therefore, in a real universe, and based on our present knowledge of physics, it would seem that time travel is highly unlikely, if not downright impossible.

We do not deny the usefulness of time reversal as a mathematical artifact in the calculation of subatomic particle phenomena.  However,  it does not seem possible even for particles to actually go backwards in time and influence the past and cause consequential changes to the present.

Further, there is no reason to believe that exceeding the speed of light would cause time reversal in either an individual particle or in a macroscopic body.  Therefore, any objections to tachyon models that are based merely on causality considerations have little merit.

For the sake of completeness, it should be noted that the construction of a computer that would accomplish the above feats exactly would require that the computer itself be part of the state machine. This could add some rather interesting problems in recursion that should be of interest to computer scientists. And, it is obvious that the construction of such a machine would be a rather substantial boon to the semiconductor industry.

We already know from classical statistical mechanics that increasing entropy dictates that the arrow of time can only move in the forward direction. We have not only reaffirmed this principle here, but have gone considerably beyond it. These concepts would be extremely difficult, if not impossible, to develop with an analog, or continuous, statistical mechanical model of the universe.

We have defined time on the basis of a state count based on the fastest changing object in the universe.  But it is interesting to note that modern day time is based on photons from atomic transitions, and is no longer based on the motion of the earth.  Conceptually, however, it is still an extension of earth based time.

But finally, history is filled with instances of individuals who have stated that various phenomena are impossible, only later to be proven wrong, and even ridiculous. Most of the technology that we take for granted today would have been thought to be impossible several hundred years ago, and some of it would have been thought impossible only decades ago.  Therefore, it is emphasized here that we do not say that time travel is absolutely impossible.  We will merely take a rather weak stance on the matter and simply say that, based on physics as we know it today, there are some substantial difficulties that must be overcome before time travel becomes a reality.

Teleportation: Impossible?

Ever since its invention in the 1920s, quantum physics has given rise to countless discussions about its meaning and about how to interpret the theory correctly. These discussions focus on issues like the Einstein-Podolsky-Rosen paradox, quantum non-locality and the role of measurement in quantum physics. In recent years, however, research into the very foundations of quantum mechanics has also led to a new field – quantum information technology. The use of quantum physics could revolutionize the way we communicate and process information.

The important new observation is that information is not independent of the physical laws used to store and process it (see Landauer in further reading). Although modern computers rely on quantum mechanics to operate, the information itself is still encoded classically. A new approach is to treat information as a quantum concept and to ask what new insights can be gained by encoding this information in individual quantum systems. In other words, what happens when both the transmission and processing of information are governed by quantum laws?

The elementary quantity of information is the bit, which can take on one of two values – usually “0” and “1”. Therefore, any physical realization of a bit needs a system with two well defined states, for example a switch where off represents “0” and on represents “1”. A bit can also be represented by, for example, a certain voltage level in a logical circuit, a pit in a compact disc, a pulse of light in a glass fibre or the magnetization on a magnetic tape. In classical systems it is desirable to have the two states separated by a large energy barrier so that the value of the bit cannot change spontaneously.

Two-state systems are also used to encode information in quantum systems and it is traditional to call the two quantum states |0⟩ and |1⟩. The really novel feature of quantum information technology is that a quantum system can be in a superposition of different states. In a sense, the quantum bit can be in both the |0⟩ state and the |1⟩ state at the same time. This new feature has no parallel in classical information theory and in 1995 Benjamin Schumacher of Kenyon College in the US coined the word “qubit” to describe a quantum bit.

A well known example of quantum superposition is the double-slit experiment in which a beam of particles passes through a double slit and forms a wave-like interference pattern on a screen on the far side. The essential feature of quantum interference is that an interference pattern can be formed when there is only one particle in the apparatus at any one time. A necessary condition for quantum interference is that the experiment must be performed in such a way that there is no way of knowing, not even in principle, which of the two slits the particle passed through on its way to the screen.

[Image Details: Single-particle quantum interference (see box below)]

Quantum interference can be explained by saying that the particle is in a superposition of the two experimental paths: |passage through the upper slit⟩ and |passage through the lower slit⟩. Similarly a quantum bit can be in a superposition of |0⟩ and |1⟩. Experiments in quantum information processing tend to use interferometers rather than double slits but the principle is the same (see right). So far single-particle quantum interference has been observed with photons, electrons, neutrons and atoms.

Beyond the bit

Any quantum mechanical system can be used as a qubit providing that it is possible to define one of its states as |0⟩ and another as |1⟩. From a practical point of view it is useful to have states that are clearly distinguishable. Furthermore, it is desirable to have states that have reasonably long lifetimes (on the scale of the experiment) so that the quantum information is not lost to the environment through decoherence. Photons, electrons, atoms, quantum dots and so on can all be used as qubits. It is also possible to use both internal states, such as the energy levels in an atom, and external states, such as the direction of propagation of a particle, as qubits (see table).

The fact that quantum uncertainty comes into play in quantum information might seem to imply a loss of information. However, superposition is actually an asset, as can be seen when we consider systems of more than one qubit. What happens if we try to encode two bits of information onto two quantum particles? The straightforward approach would be to code one bit of information onto each qubit separately. This leads to four possibilities – |0⟩₁|0⟩₂, |0⟩₁|1⟩₂, |1⟩₁|0⟩₂ and |1⟩₁|1⟩₂ – where |0⟩₁|1⟩₂ describes the situation in which the first qubit has the value “0” and the second qubit has the value “1”, and so on. This approach corresponds exactly to a classical coding scheme in which these four possibilities would represent “00”, “01”, “10” and “11”.

However, quantum mechanics offers a completely different way of encoding information onto two qubits. In principle it is possible to construct any superposition of the four states described above. A widely used choice of superpositions is the so-called Bell states. A key feature of these states is that they are “entangled” (see box). Entanglement describes correlations between quantum systems that are much stronger than any classical correlations.
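For concreteness, the four Bell states can be written down in a few lines of NumPy (standard textbook notation, not code from the article); the partial trace at the end shows that either qubit on its own is maximally mixed, i.e. carries no information by itself:

```python
import numpy as np

# Computational basis states |0> and |1>
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# The four Bell states (maximally entangled two-qubit states)
phi_plus  = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
phi_minus = (np.kron(ket0, ket0) - np.kron(ket1, ket1)) / np.sqrt(2)
psi_plus  = (np.kron(ket0, ket1) + np.kron(ket1, ket0)) / np.sqrt(2)
psi_minus = (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2)

# Neither qubit on its own carries any information: the reduced density
# matrix of qubit 1 is the maximally mixed state 0.5 * identity.
rho = np.outer(psi_minus, psi_minus).reshape(2, 2, 2, 2)
rho_qubit1 = np.trace(rho, axis1=1, axis2=3)   # partial trace over qubit 2
print(rho_qubit1)
```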

As in classical coding, four different possibilities can be represented by the four Bell states, so the total amount of information that can be encoded onto the two qubits is still two bits. But now the information is encoded in such a way that neither of the two qubits carries any well defined information on its own: all of the information is encoded in their joint properties. Such entanglement is one of the really counterintuitive features of quantum mechanics and leads to most of the paradoxes and other mysteries of quantum mechanics (see Bell’s inequality and quantum non-locality).

It is evident that if we wish to encode more bits onto quantum systems, we have to use more qubits. This results in entanglements in higher dimensions, for example the so-called Greenberger-Horne-Zeilinger (GHZ) states, which are entangled superpositions of three qubits (see further reading). In the state (1/√2)(|000⟩ + |111⟩), for instance, all three qubits are either “0” or “1” but none of the qubits has a well defined value on its own. Measurement of any one qubit will immediately result in the other two qubits attaining the same value.

Although it was shown that GHZ states lead to violent contradictions between a local realistic view of the world and quantum mechanics, it recently turned out that such states are significant in many quantum-information and quantum-computation schemes. For example, if we consider 000 and 111 to be the binary representations of “0” and “7”, respectively, the GHZ state simply represents the coherent superposition (1/√2)(|“0”⟩ + |“7”⟩). If a linear quantum computer has such a state as its input, it will process the superposition such that its output will be the superposition of the results for each input. This is what leads to the potentially massive parallelism of quantum computers.
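The GHZ state can be written down the same way as the Bell states above; a short sketch showing that only the outcomes 000 and 111 ever occur, each with probability 1/2, so measuring any one qubit fixes the other two:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# GHZ state (1/sqrt(2)) (|000> + |111>): "0" and "7" in binary, superposed
ghz = (np.kron(np.kron(ket0, ket0), ket0) +
       np.kron(np.kron(ket1, ket1), ket1)) / np.sqrt(2)

# Probabilities of the eight 3-bit outcomes: only 000 and 111 appear
for idx, amplitude in enumerate(ghz):
    if abs(amplitude) > 1e-12:
        print(f"|{idx:03b}>  probability {abs(amplitude)**2:.2f}")
```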

It is evident that the basis chosen for encoding the quantum information, and the states chosen to represent |0⟩ and |1⟩, are both arbitrary. For example, let us assume that we have chosen polarization measured in a given direction as our basis, and that we have agreed to identify the horizontal polarization of a photon with “0” and its vertical polarization with “1”. However, we could equally well rotate the plane in which we measure the polarization by 45º. The states in this new “conjugate” basis, |0′⟩ and |1′⟩, are related to the previous states by a 45º rotation in Hilbert space:

|0′⟩ = (1/√2)(|0⟩ + |1⟩)

|1′⟩ = (1/√2)(|0⟩ − |1⟩)

This rotation is known in information science as a Hadamard transformation. When spin is used to encode information in an experiment we can change the basis by a simple polarization rotation; when the directions of propagation are used, a beam splitter will suffice. It is important to note that conjugate bases cannot be used at the same time in an experiment, although the possibility of switching between various bases during an experiment – most notably between conjugate bases – is the foundation of the single-photon method of quantum cryptography.
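In matrix form this 45º basis change is the Hadamard transformation; a minimal sketch (standard notation, assumed here for illustration):

```python
import numpy as np

# Hadamard transformation: |0> -> (|0>+|1>)/sqrt(2), |1> -> (|0>-|1>)/sqrt(2)
H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

print("H|0> =", H @ ket0)     # the conjugate-basis state |0'>
print("H|1> =", H @ ket1)     # the conjugate-basis state |1'>
print("H applied twice is the identity:\n", H @ H)
```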

Quantum dense coding

Entangled states permit a completely new way of encoding information, as first suggested by Charles Bennett of the IBM Research Division in Yorktown Heights, New York, and Stephen Wiesner of Brookline, Massachusetts, in 1992. Consider the four Bell states: it is clear that one can switch from any one of the four states to any other one by simply performing an operation on just one of the two qubits. For example, one can switch from |Ψ⁺⟩ to |Ψ⁻⟩ by simply applying a phase shift to the second qubit when it is “0” (i.e. |0⟩ → −|0⟩, |1⟩ → |1⟩). The state |Φ⁺⟩ can be obtained by “flipping” the second qubit, while the state |Φ⁻⟩ can be obtained by the combination of a phase shift and flipping.

All three of the operations are unitary and they do not change the total probability of finding the system in the states |0⟩ and |1⟩. In working with Bell states it is common to refer to four unitary operations: the phase shift, the bit flip, the combined phase-shift/bit-flip, and the identity operator, which does not change the state on which it operates. All four operations are relatively easy to perform in experiments with photons, atoms and ions.
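A small NumPy sketch of these four operations (using the Bell-state vectors written out earlier; the sign convention for the phase shift follows the text, |0⟩ → −|0⟩). Acting on the second qubit alone, each operation moves the pair into a different Bell state, which is the heart of dense coding:

```python
import numpy as np

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
psi_plus = (np.kron(ket0, ket1) + np.kron(ket1, ket0)) / np.sqrt(2)   # |Psi+>

I = np.eye(2)
Z = np.array([[-1.0, 0.0], [0.0, 1.0]])   # phase shift: |0> -> -|0>, |1> -> |1>
X = np.array([[0.0, 1.0], [1.0, 0.0]])    # bit flip:    |0> <-> |1>

operations = {
    "identity":            I,       # leaves |Psi+> unchanged
    "phase shift":         Z,       # |Psi+> -> |Psi->
    "bit flip":            X,       # |Psi+> -> |Phi+>
    "phase shift + flip":  X @ Z,   # |Psi+> -> |Phi->
}

# Each operation acts only on the second qubit, yet moves the *pair* into a
# different Bell state: four distinguishable messages from one local action.
for name, U in operations.items():
    new_state = np.kron(I, U) @ psi_plus
    print(f"{name:20s} -> {np.round(new_state, 3)}")
```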

[Image Details: Quantum dense coding scheme (see box below)]

To understand what this means, imagine that Bob wants to send some information to Alice. (The characters in quantum information technology are always called Alice and Bob.) Entanglement means that, in theory, Bob can send two bits of information to Alice using just one photon, providing that Alice has access to both qubits and is able to determine which of the four Bell states they are in (see left).

This scheme has been put into practice by my group in Innsbruck using polarization-entangled photons (see Mattle et al . in further reading). The experiment relies on the process of spontaneous parametric down-conversion in a crystal to produce entangled states of very high quality and intensity. The nonlinear properties of the crystal convert a single ultraviolet photon into a pair of infrared photons with entangled polarizations.


[Image Details: How to entangle photons (see box below)]

The experiment used quarter- and half-wave polarization plates (plates that shift the phase between the two polarization states of a photon by λ/4 and λ/2, respectively) to make the unitary transformations between the Bell states. In fact it is possible to identify only the |Ψ⁺⟩ and |Ψ⁻⟩ states uniquely using linear elements such as wave plates and mirrors. However, by being able to discriminate between three different possibilities – |Ψ⁺⟩, |Ψ⁻⟩ and |Φ±⟩ – Bob could send one “trit” of information with just one photon, even though the photon had only two distinguishable polarization states. It has been shown that a nonlinear quantum gate will be needed to distinguish between all four Bell states. Such a gate would depend on a nonlinear interaction between the two photons and various theoretical and experimental groups are working on this challenge.

Quantum teleportation

Quantum dense coding was the first experimental demonstration of the basic concepts of quantum communication. An even more interesting example is quantum teleportation.

Suppose Alice has an object that she wants Bob to have. Besides sending the object itself, she could, at least in classical physics, scan all of the information contained in the object and transmit that information to Bob who, with suitable technology, could then reconstitute the object. Unfortunately, such a strategy is not possible because quantum mechanics prohibits complete knowledge of the state of any object.

There is, fortunately, another strategy that will work. What we have to do is to guarantee that Bob’s object has the same properties as Alice’s original. And most importantly, we do not need to know the properties of the original. In 1993 Bennett and co-workers in Canada, France, Israel and the US showed that quantum entanglement provides a natural solution for the problem (see further reading).

[Image Details: Teleportation theory (see box below)]

In this scheme Alice wants to teleport an unknown quantum state to Bob (see left). They both agree to share an entangled pair of qubits, known as the ancillary pair. Alice then performs a joint Bell-state measurement on the teleportee (the photon she wants to teleport) and one of the ancillary photons, and randomly obtains one of the four possible Bell results. This measurement projects the other ancillary photon into a quantum state uniquely related to the original. Alice then transmits the result of her measurement to Bob classically, and he performs one of the four unitary operations to obtain the original state and complete the teleportation.

It is essential to understand that the Bell-state measurement performed by Alice projects the teleportee qubit and her ancillary photon into a state that does not contain any information about the initial state of the teleportee. In fact, the measurement projects the two particles into a state where only relative information between the two qubits is defined and known. No information whatsoever is revealed about the teleported state. Similarly, the initial preparation of the ancillary photons in an entangled state provides only a statement of their relative properties. However, there is a very clear relation between the ancillary photon sent to Bob and the teleportee photon. In fact, Bob’s photon is in a state that is related to Alice’s original photon by a simple unitary transformation.

Consider a simple case. If Alice’s Bell-state measurement results in exactly the same state as that used to prepare the ancillary photons (which will happen one time in four), Bob’s ancillary photon immediately turns into the same state as the original. Since Bob has to do nothing to his photon to obtain the original state, it might seem as if information has been transferred instantly – which would violate special relativity. However, although Bob’s photon does collapse into that state when Alice makes her measurement, Bob does not know that he has to do nothing until Alice tells him. And since Alice’s message can only arrive at the speed of light, relativity remains intact.

In the other three possible cases, Bob has to perform a unitary operation on his particle to obtain the original state. It is important to note, however, that this operation does not depend at all on any properties of that state.
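The whole protocol fits in a short NumPy statevector sketch (a toy calculation for illustration, not a model of the Innsbruck experiment): prepare a random teleportee state, share a Φ⁺ ancillary pair, project Alice’s two qubits onto each possible Bell outcome, and apply Bob’s corresponding correction:

```python
import numpy as np

rng = np.random.default_rng(1)

# An arbitrary (in practice unknown) state to teleport: a|0> + b|1>, normalised
amps = rng.normal(size=2) + 1j * rng.normal(size=2)
psi = amps / np.linalg.norm(amps)

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
bell = {  # Bell states of qubits (0: teleportee, 1: Alice's ancillary)
    "phi+": (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2),
    "phi-": (np.kron(ket0, ket0) - np.kron(ket1, ket1)) / np.sqrt(2),
    "psi+": (np.kron(ket0, ket1) + np.kron(ket1, ket0)) / np.sqrt(2),
    "psi-": (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2),
}

# Qubit order: 0 = teleportee, 1 = Alice's ancillary, 2 = Bob's ancillary.
# The ancillary pair is prepared in |phi+>.
state = np.kron(psi, bell["phi+"])

# Bob's correction for each of Alice's four possible Bell-measurement results
I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])      # bit flip
Z = np.array([[1.0, 0.0], [0.0, -1.0]])     # phase flip
correction = {"phi+": I, "phi-": Z, "psi+": X, "psi-": X @ Z}

amp = state.reshape(4, 2)                   # rows: qubits (0,1), columns: qubit 2
for outcome, bell_state in bell.items():
    bob = bell_state.conj() @ amp           # project qubits (0,1) onto this outcome
    prob = np.vdot(bob, bob).real           # each outcome occurs with probability 1/4
    bob = correction[outcome] @ (bob / np.sqrt(prob))
    fidelity = abs(np.vdot(psi, bob)) ** 2  # 1.0: Bob now holds the original state
    print(f"{outcome}: p = {prob:.2f}, fidelity after correction = {fidelity:.3f}")
```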


[Image Details: Quantum statistics (see box below)]

The main challenge in our experiment was to perform a Bell-state measurement on two particles that were generated independently of each other (see Bouwmeester et al. in further reading). Since a Bell-state measurement probes the collective or relative properties of two quantum particles, it is essential that the particles “forget” any information about where they were generated. To achieve this we must perform the experiment in such a way that we are unable, even in principle, to gain any path information. We do this by directing the two photons – the teleportee and Alice’s ancillary – through a semitransparent beam splitter from opposite sides (see right). We then ask a simple question: if the two particles are incident at the same time, how will they be distributed between the two output directions?

Since photons were used in the experiment, one might naively have expected that both photons would emerge in the same output beam. This feature, called “bunching”, is well known for bosons (particles with integer spin, such as photons) and was demonstrated with optical beam splitters for the first time in 1987 by Leonard Mandel’s group at the University of Rochester in the US. Surprisingly, perhaps, bunching only happens for three of the Bell states. In contrast, for the |Ψ⁻⟩ state, the photons always leave in different beams. In other words, the photons “anti-bunch” and behave as if they were fermions. This is a direct consequence of interference.

This means that for the |Ψ⁻⟩ state we have no way of knowing which way the photons reached the detectors: both photons could have been transmitted by the beam splitter, or both could have been reflected. By adding various polarizers it is also possible to identify photons in the |Ψ⁺⟩ state. As with dense coding, however, quantum gates will be needed to make the experiment work with the |Φ⁺⟩ and |Φ⁻⟩ states.


[Image Details: Teleportation experiment (see box below)]

In the actual experiment the teleportee photon was also made using parametric down-conversion (see left). This provides an additional photon that can be used as a trigger informing us that the teleportee photon is “ready”. The experiment itself involved preparing the teleportee photon in various different polarization states – horizontal, vertical, +45º, −45º and right-hand circular – and proving that Bob’s photon actually acquired the state into which the teleportee photon was prepared. The teleportation distance was about 1 metre. Since the experiment was set up to identify only the |Ψ⁻⟩ state, the maximum success rate for teleportation was, in theory, 25%, and the measured success rate was much lower.

A related experiment was recently performed at the University of Rome La Sapienza (see Boschi et al. in further reading) using only one entangled pair of photons. In this experiment Alice was able to prepare Bob’s particle at a distance, again to within a unitary transformation. By essentially performing a Bell-state measurement on two properties of the same photon, the Rome group was able to distinguish between all four Bell states simultaneously.

The entanglement of photons over distances as great as 10 km has now been demonstrated at the University of Geneva, so teleportation is expected to work over similar distances. The teleportation of atoms should also be possible following a recent experiment at the Ecole Normale Supérieure in Paris that demonstrated that pairs of rubidium atoms could be entangled over distances of centimetres (see Hagley et al . in further reading).

Entanglement swapping

As mentioned above, an important feature is that teleportation provides no information whatsoever about the state being teleported. This means that any quantum state can be teleported. In fact, the quantum state does not have to be well defined; indeed, it could even be entangled with another photon. This means that a Bell-state measurement of two of the photons – one each from two pairs of entangled photons – results in the remaining two photons becoming entangled, even though they have never interacted with each other in the past (see right). Alternatively, one can interpret this “entanglement swapping” as the teleportation of a completely undefined quantum state (see Bose et al. in further reading). In a recent experiment in Innsbruck, we have shown that the other two photons from the two pairs are clearly entangled.

Quantum outlooks

We conclude by noting that, amazingly, the conceptual puzzles posed by quantum mechanics – and discussed for more than sixty years – have recently led to completely novel ideas that might even result in applications. These applications could include quantum communication, cryptography and computation. In return, these technological considerations lead to a better intuitive understanding of basic issues such as entanglement and the meaning of information on the quantum level. The whole field of quantum information technology is a classic example of basic physics and potential applications working hand in hand.

How to entangle photons
When an ultraviolet laser beam strikes a crystal of beta barium borate, a material with nonlinear optical properties, there is a small probability that one of the photons will spontaneously decay into a pair of photons with longer wavelengths (to conserve energy). The photons are emitted in two cones and propagate in directions symmetric to the direction of the original UV photon (to conserve momentum). In so-called type II parametric down-conversion, one of the photons is polarized horizontally and the other is polarized vertically. It is possible to arrange the experiment so that the cones overlap (see photograph). In this geometry the photons carry no individual polarizations – all we know is that the polarizations are different. This is an entangled state. (P G Kwiat et al. 1995 Phys. Rev. Lett. 75 4337)
Quantum dense coding
Bob can send two bits of information to Alice with a single photon if they share a pair of entangled photons. To begin, one photon each is sent to Alice and Bob. The photons are in one of the four Bell states. Bob then performs either one of the three unitary transformations on his photon, transferring the pair into another Bell state, or simply does nothing – the fourth option. He then sends the photon to Alice who measures the state of the pair. Since there are four possible outcomes of this measurement, Bob has sent twice as much information as can be sent classically with a two-state particle.
Quantum statistics
When two particles are incident symmetrically on a 50/50 beam splitter it is possible for both to emerge in either the same beam (left) or in different beams (right). In general, bosons (e.g. photons) emerge in the same beam, while fermions (e.g. electrons) emerge in different beams. The situation is more complex for entangled states, however, and two photons in the Bell state |Ψ⁻⟩ emerge in different beams. Therefore, if detectors placed in the two outward directions register photons at the same time, the experimenter knows that they were in a |Ψ⁻⟩ state. Moreover, because we do not know the paths followed by the photons, they remained entangled.
Single-particle quantum interference
Interference fringes can be observed in a two-path interferometer if there is no way of knowing, not even in principle, which path the particle takes. In a Mach-Zehnder interferometer a quantum particle strikes a beam splitter and has a 50/50 chance of being transmitted or reflected. Mirrors reflect both paths so that they meet at a second 50/50 beam splitter, and the numbers of particles transmitted/reflected by this beam splitter are counted. If no information about the path is available, the particle is in a superposition of the upper and lower paths. To observe interference one customarily varies the length of one of the paths (e.g. with a variable wave plate) and counts single clicks at the detectors as a function of this phase.
Teleportation theory
The scheme of Bennett et al. Alice wishes to teleport an unknown quantum state (the “teleportee”) to Bob. They agree to share an entangled pair of “ancillary” photons. Alice then performs a Bell-state measurement on the teleportee and her ancillary photon, and obtains one of the four possible Bell results. This measurement also projects Bob’s ancillary photon into a well defined quantum state. Alice then transmits her result to Bob, who performs one of four unitary operations to obtain the original state.
Teleportation experiment
In the experiment a pulse from a mode-locked Ti:sapphire laser (not shown) is frequency doubled, and the resulting UV pulse is used to create the entangled pair of “ancillary” photons in a nonlinear crystal. The UV pulse is then reflected back through the crystal to produce a second pair of photons: one is used as the teleportee while the other acts as a “trigger”. Polarizers are used to prepare the teleportee photon in a well defined state, while various filters ensure that no path information is available about Alice’s photons. Coincident photon detections at f1 and f2 project the photons onto a |Ψ⁻⟩ state (in which they have different polarizations). And since Alice’s ancillary photon and Bob’s photon are also entangled, this detection collapses Bob’s photon into an identical replica of the original.

[resource: Time Travel Research Center]

Video: Extradimensions and Origin Of Mass

Today I’m not going to publish a long and detailed article, and there is a reason behind this: I’m ill, so I can’t put a detailed article together. BTW, you can enjoy a video which sheds light on extra dimensions and the origin of mass.

And if this is not enough for you, you can try the sidebar and archives, or select your favourite category.
bruceleeeowe

Why Is Time Travel So Hard?

The notion of time travel is rooted in the early 20th century physics of Albert Einstein. In the 1880s, two American scientists, Albert Michelson and Edward Morley, set out to measure how the speed of light was affected by Earth’s motion through space. They discovered, to their amazement, that the speed of the Earth made no difference to the speed of light passing through space (or through the “ether”, as the supposed medium was then called). No matter how fast you travel, the speed of light remains the same.

How could this be? Surely if you were travelling at half the speed of light and the beam from a torch passed you, the speed of the light from the torch would be seen as travelling slower than if you were stationary. The answer is definitely no! The speed of light is ALWAYS 300 million metres per second.

Einstein gave careful consideration to the observation. If light is always measured to be the same velocity, no matter how fast you are travelling, then something else must be changing to accommodate the speed of light. In 1905, Einstein formulated a theory he called ‘Special Relativity’ and described that ‘something else’ as ‘spacetime’. In order for light to remain constant, space and time (hence ‘spacetime’) have to vary and so they must be inextricably linked. It’s an extraordinary conclusion – but one that has passed every conceivable test for almost 100 years since it was proposed. Quite simply, as you approach light speed, time runs slower and space shortens.

The result is often referred to as the ‘twin paradox’. Imagine two twins on Earth. One takes a trip into space on a rocket that propels him very close to the speed of light. The other remains on Earth. For our space-travelling twin, time starts to slow as he reaches a speed very close to that of light. However, for the twin on Earth, time passes normally. Finally, when our space-travelling twin returns home years later, a strange thing has happened, his Earth-bound twin has gone grey. He has aged normally since his time was not slowed. Welcome to the realm of time travel.

There is a set of mathematical formulae known as the Lorentz transformation equations that can calculate the so-called ‘time dilation’ as one approaches the speed of light. All you do is plug in your speed and how long you travel for and bingo! You can calculate how much more time will pass on Earth while you are off gallivanting around the universe.
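A minimal sketch of that calculation, using the Lorentz factor γ = 1/√(1 − v²/c²); the speeds and the ten-year trip duration below are just example numbers:

```python
import math

def earth_years_elapsed(speed_fraction_of_c, traveller_years):
    """Years that pass on Earth while the traveller's own clock ticks off
    `traveller_years`, moving at a constant speed v = (fraction) * c."""
    gamma = 1.0 / math.sqrt(1.0 - speed_fraction_of_c ** 2)
    return gamma * traveller_years

for v in (0.5, 0.95, 0.999):
    print(f"v = {v:.3f} c: 10 traveller-years = "
          f"{earth_years_elapsed(v, 10):.1f} Earth-years")
```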

 

‘Ah! Fantastic,’ I hear you cry, ‘time travel is easy!’ All we need to do to travel into the future is leap on a spacecraft that travels at the speed of light, so that time onboard stops compared to time on Earth. And if we want to travel into the past, all we need do is put our foot down and accelerate beyond the speed of light, that way our clock would actually start running backwards.

Well, unfortunately it isn’t quite that easy. Einstein predicted with Special Relativity that nothing could travel faster than the speed of light. Why? It’s to do with mass. As you approach the speed of light, your mass increases. This is because space is shortening – it’s squeezing up, so to speak, and so any matter is also squeezed. Quite simply, the faster you go the more squeezing takes place and the heavier you become. And the heavier you become the more force you need to move; or to put it another way, the harder you have to push to go faster. At the speed of light your mass becomes infinite, and so does the energy required to push you. Sadly then, you’ll never travel at or faster than the speed of light.

While travelling at a velocity close to the speed of light (say 95% of light speed) will allow you to travel into the future, travelling into the past seems impossible. But is it?

Special Relativity is so called because it deals with the special circumstance of what happens to mass, space and time when travelling in a straight line at a constant speed – it doesn’t take into consideration gravity and acceleration. In 1915, Einstein published his theory of ‘General Relativity’. General Relativity does take into consideration gravity and acceleration.

Though Sir Isaac Newton discovered gravity, he couldn’t explain it. Einstein does explain it, within the formulae of General Relativity. This too has important consequences for time travel. Einstein proposed that mass distorts space, or more correctly the fourth dimension of spacetime. It’s simple to visualize if you think of space as a flat rubber sheet, like a trampoline. Placing a bowling ball on the trampoline will cause it to sag and become curved. The bowling ball distorts our two-dimensional rubber sheet. This is exactly what mass does to space.

The Sun is massive and so it curves space. Anything that passes close to the Sun will be affected by the curve, just as a marble rolled along our trampoline will be affected by the depression made by the mass of the bowling ball. The essence of General Relativity is that mass tells space how to bend and space tells matter how to move. The mass of the Sun, a planet or indeed any mass in the universe can bend the fabric of space. Furthermore, the greater the mass, the greater the distortion of space. This is important for time travel – because space and time are related, if you change one, you change the other. So if you bend space then you will bend time!

It’s apparent then that to bend time you must bend space – and to bend space you need a mass. Black holes are among the most massive and compact objects in the universe. They form when massive stars collapse at the end of their lives. They are thought to be ubiquitous within our galaxy. Black holes are so massive that they distort space to such a degree not even light can escape. Perhaps then, black holes can distort spacetime in such a way that we could move through them and end up in another place at another time.

Unfortunately not. As physicist and leading black hole expert Kip Thorne has pointed out, though travelling into a black hole does slow time, any traveller would be destroyed by any of a number of processes – in particular, by bits of infalling matter that the black hole swallows and accelerates to near the speed of light, increasing their mass enormously, which then rain down destructively on any spaceship.

But Kip has had another idea pertaining to time travel. In the 1980s, astronomer Carl Sagan phoned Kip Thorne for help. Sagan was writing his novel, Contact, and needed a scientifically viable way for his heroine to travel to the star Vega. Vega is some 26 light-years from Earth, so even travelling at the speed of light (which we know is impossible) it would take 26 years to reach the star. Sagan wanted his heroine to get there in a matter of hours. Thorne confirmed Sagan’s suspicions that a black hole wouldn’t work. But he came up with another suggestion – the wormhole.

A wormhole is a theoretical shortcut for travel between two points in the universe. It works like this. Imagine the universe to be two-dimensional (like the rubber sheet below). The quickest way to get from Earth to Vega (both of which lie on the rubber sheet) is to travel in a straight line between the two. Right? Wrong! If our rubber sheet happened to be folded over so that Vega and Earth are opposite each other, Earth lying above Vega, then the quickest way would be to bridge the gap between them by burrowing a hole (the wormhole) from Earth down to Vega. This wormhole may only be a few kilometres in length, yet it would cut out the 26 light-year distance along the surface of our rubber sheet.

 The implications of wormholes for time travel are astounding. If you can travel from Earth to Vega in just minutes you are effectively travelling faster than the speed of light. Light takes 26 years going the slow way through spacetime, while you just nip through the hyperspace between in minutes. Anyone who travels faster than light has the ability to travel back in time.

While wormholes may exist in the sub-atomic (quantum) realm, getting one to form, grow large enough, and remain open long enough so we may use it is a formidable task and one for a very distant civilisation.

The truth about time travel is that while, rather excitingly, current physics does allow time travel, albeit with a number of caveats, the physical practicality seems ages away. But once it is mastered, perhaps this famous limerick will become reality. Only time will tell.
