Invisibility Cloak Comes Closer to Reality!

There has been no shortage of column inches devoted to invisibility cloaks since engineers built the first one back in 2006. This was an impressive device but it had some important limitations, not least of which was that it worked only for a single frequency of microwaves. One of the biggest questions that physicists have puzzled over since then is whether it is possible to build similar devices that work over the range of frequencies visible to the human eye. Last year, a couple of groups announced a solution to this problem in the form of ‘carpet cloaks’ that lie over an object, hiding its presence over a range of optical frequencies.

It wasn't long ago that some physicists said that optical invisibility cloaks would be impossible to build. Since then, two teams have claimed to have built cloaks that work over a wide range of optical wavelengths, and the extraordinary thing is that both designs are almost identical. Invisibility cloaks work by steering light around an object, fooling an observer who sees nothing but the background view. But while this works well for microwaves, it is not a straightforward matter to shrink these cloaks to a size that works at optical wavelengths.

Both teams built cloaks that are essentially mirrors with a tiny bump in which an object can hide. The cloaking works because the mirrors look entirely flat: a pattern of tiny silicon nanopillars on the mirror surface steers reflected light in a way that makes the bump look flat. So anything can be hidden beneath the bump without an observer realising that it is there, like hiding a small object under a thick carpet.

Again, these were impressive feats, but with some limitations. These cloaks are made of finely carved silicon microstructures and so were expensive to build. And they can only hide objects up to a few micrometres in size, not much bigger than the wavelength of light itself. Now, Baile Zhang at the Massachusetts Institute of Technology in Cambridge and a couple of buddies have done significantly better. They've built a carpet cloak capable of hiding objects in the millimetre range over a broad range of visible frequencies, from red to blue. More impressive still, they've built this cloak out of calcite, an ordinary and relatively cheap optical material, using conventional optical lens fabrication techniques. This makes the cloak cheap and easy to build.

Carpet cloaks sit on a surface, covering the object to be hidden. Their trick is to make it look as if light is reflecting off this surface, thereby hiding any object that they cover. Until now, this has only been done using artificially structured materials that steer light in specially engineered ways. This so-called metamaterial is a kind of wonder substance that is the focus of great attention right now.

However, Zhang and co realised that there are naturally occurring materials that can do the same thing. Calcite is one of them. It is unusual because its optical properties depend on the direction that light passes through it. By carefully exploiting this property, they've been able to create a block of calcite (actually two blocks of calcite) that acts like a carpet cloak. They've even demonstrated it by hiding a wedge of steel 38 mm long and 2 mm high. Zhang and co say that this is the first time that a visible object has ever been cloaked. That's impressive. Their cloak has its limitations, of course. The main one is that it only works in a single 2D plane, so the object is hidden only to those looking from a certain direction.
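The direction-dependent optics that makes calcite useful here is its birefringence: light polarised along different directions sees different refractive indices. The real cloak design relies on the full anisotropic ray physics, but a toy Snell's-law comparison using calcite's two well-known principal indices (roughly 1.658 and 1.486 near 590 nm) already shows that the same incoming ray bends by different amounts depending on which index it experiences:

```python
import math

# Well-known principal refractive indices of calcite near 590 nm
# (illustrative values only):
N_ORDINARY = 1.658       # index seen by the ordinary ray
N_EXTRAORDINARY = 1.486  # index seen by the extraordinary ray

def refraction_angle(incidence_deg, n_medium):
    """Snell's law: refraction angle inside the medium for light from air (n = 1)."""
    return math.degrees(math.asin(math.sin(math.radians(incidence_deg)) / n_medium))

# The same 45-degree ray refracts differently for the two polarisations,
# the direction-dependent behaviour that the calcite cloak exploits.
theta_o = refraction_angle(45.0, N_ORDINARY)      # about 25 degrees
theta_e = refraction_angle(45.0, N_EXTRAORDINARY) # about 28 degrees
```

This is only a sketch of why polarisation matters: the cloak steers one polarisation so that the reflection off the bump mimics reflection off a flat mirror.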

Another is that it works only with polarised light. But that’s not as limiting as it may seem at first sight. Water tends to polarise light so it seems reasonable to think that the cloak ought to work well underwater. It wasn’t so long ago that some physicists were saying that optical invisibility cloaks would always be impossible (because metamaterials tend to absorb visible light faster than they can transmit it). That’s turned out to be of little concern and invisibility cloaks just get better and better. In fact, it’s hard to think of a technology that has advanced so far, so quickly.

[Ref: Macroscopic Invisible Cloak for Visible Light]

[Source: TechnologyReview]

Why Is the Universe Made of Matter? CERN Explores an Answer

Scientists at CERN said they’ve trapped dozens of anti-hydrogen atoms, a technical breakthrough that will allow them to explore why the universe is made of matter. Under a theory expounded in 1931 by the eccentric British physicist Paul Dirac, when energy transforms into matter, it produces a particle and its mirror image – called an anti-particle – which holds the opposite electrical charge. When particles and anti-particles collide, they annihilate each other in a small flash of energy. Since then, physicists have wondered why the universe seems to be dominated by matter and not antimatter. If everything were equal at the birth of the cosmos, matter and anti-matter would have existed in the same quantities. The observable universe would have had no chance of coming into being, as these opposing particles would have wiped each other out.
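The "small flash of energy" from annihilation is anything but small per unit mass. A back-of-envelope sketch using E = 2mc² (the factor of two because both the atom and the anti-atom are converted) for a single hydrogen-antihydrogen pair:

```python
# Back-of-envelope: energy released when one antihydrogen atom annihilates
# with one ordinary hydrogen atom, E = 2 m c^2.
C = 2.99792458e8          # speed of light, m/s
M_HYDROGEN = 1.6735e-27   # approximate mass of a hydrogen atom, kg
JOULES_PER_GEV = 1.602176634e-10

energy_joules = 2 * M_HYDROGEN * C**2
energy_gev = energy_joules / JOULES_PER_GEV   # roughly 1.88 GeV per pair
```

That is about twice the rest energy of a hydrogen atom, which is why even the 38 trapped anti-atoms represent a vanishingly small amount of energy, and why the experiment's value lies in spectroscopy rather than power.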

Trapping Anti-Atoms: A Technical Feat

Understanding why there is this huge imbalance presents a daunting technical challenge. Until now, experiments have produced anti-atoms, namely of hydrogen, but only in a free state. That means they instantly collide with ordinary matter and get annihilated, making it impossible to measure them or study their structure. In a paper published in the British journal Nature, a team at the European Organisation for Nuclear Research (CERN) in Geneva explain a method of snaring these so-called antihydrogen atoms.

Experiments conducted in its ALPHA laboratory found a way of using strong, complex magnetic fields and a vacuum to capture and hold the mirror-image particles apart from ordinary matter. Thousands of antihydrogen atoms have been made in the lab, but in the most successful experiment so far, 38 have been trapped long enough – one tenth of a second – for them to be studied.

For reasons that no-one yet understands, nature ruled out antimatter. It is thus very rewarding, and a bit overwhelming, to look at the ALPHA device and know that it contains stable, neutral atoms of antimatter. This inspires us to work that much harder to see if antimatter holds some secret.

Various Aspects of Exotic Propulsion Systems

Travelling into the dark is hazardous, especially if you wish to contact an extraterrestrial civilization: even at a highly relativistic speed, it would take almost 21 years to reach the nearest Earth-like planet, Gliese 581 g. Various mechanisms have been proposed to make it possible within our short lifetimes; however, none of them is yet feasible for accomplishing the mammoth task of interstellar travel. Almost all of them have already been reviewed on WeirdSciences [select category 'quantum physics/astrophysics'].
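The "almost 21 years" figure is the Earth-frame travel time; the crew would age far less thanks to time dilation. A minimal sketch, assuming Gliese 581 is about 20 light-years away and the ship cruises at 99% of lightspeed:

```python
import math

# Rough numbers only: Gliese 581 is roughly 20 light-years from Earth.
DISTANCE_LY = 20.3      # assumed distance in light-years
SPEED_FRACTION = 0.99   # ship speed as a fraction of c (assumed)

earth_frame_years = DISTANCE_LY / SPEED_FRACTION      # ~20.5 yr as seen from Earth
gamma = 1.0 / math.sqrt(1.0 - SPEED_FRACTION**2)      # Lorentz factor, ~7.1
ship_frame_years = earth_frame_years / gamma          # ~2.9 yr aged by the crew
```

So the 21-year figure quoted above is what mission control waits through; the travellers themselves would experience only about three years at this speed.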

One means to produce force is collisions. Conventional rocket propulsion is fundamentally based on the collisions between the propellant and the rocket. These collisions thrust the rocket in one direction and the propellant in the other. To entertain the analogy of collision forces for a space drive, consider the supposition that space contains a background of some form of isotropic medium that is constantly impinging on all sides of a vehicle. This medium could be a collection of randomly moving particles or electromagnetic waves, either of which possess momentum. If the collisions on the front of a vehicle could be lessened and/or the collisions on the back enhanced, a net propulsive force would result. We know that dark matter and negative energy are ubiquitous in this universe, and quantum fluctuations offer a more optimistic approach to picturing future propulsion technologies.

For any of these concepts to work, there must be a real background medium in space. This medium must have a sufficiently large energy or mass density, must exist equally and isotropically across all space, and there must be a controllable means to alter the collisions with this medium to propel the vehicle. A high energy or mass density is required to provide sufficient radiation pressure or reaction momentum within a reasonable sail area. The requirement that the medium exist equally and isotropically across all space ensures that the propulsion device will work anywhere and in any direction in space. The requirement that there be a controllable means to alter the collisions ensures that a controllable propulsive effect can be created.
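The momentum-exchange idea above can be put in rough numbers. Below is a toy model, not a real device: a perfectly reflecting plate where the front-face and back-face collision rates differ, using a simple collimated-beam radiation-pressure formula (thrust 2I·A/c per face). The 1361 W/m² value is just the familiar solar-constant intensity, used as a stand-in for whatever background medium is assumed:

```python
# Toy model of the "asymmetric collisions" space-drive idea: a perfectly
# reflecting plate with unequal radiation intensity on its two faces.
# For a collimated beam at normal incidence, pressure = 2 * I / c per face.
C = 2.99792458e8  # speed of light, m/s

def net_force(intensity_back, intensity_front, area_m2):
    """Net thrust (N) on a reflecting plate from unequal beam intensities."""
    return 2.0 * (intensity_back - intensity_front) * area_m2 / C

# Example: sunlight-level intensity fully blocked on the front face,
# acting on a 100 m^2 plate:
thrust = net_force(1361.0, 0.0, 100.0)   # just under a millinewton
```

The point of the numbers is the scale: even total asymmetry at sunlight-level intensity over 100 m² yields sub-millinewton thrust, which is why the text insists on a medium with a sufficiently large energy or mass density.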

Critical Issues

The critical issues for both the sail and field drives have been compiled into the problem statement offered below. Simply put, a space drive requires some controllable and sustainable means to create asymmetric forces on the vehicle without expelling a reaction mass, and some means to satisfy conservation laws in the process. Regardless of which concept is explored, the following criteria must be satisfied.

(1) A mechanism must exist to interact with a property of space, matter, or energy which satisfies these conditions:
(a) must be able to induce a unidirectional acceleration of the vehicle.
(b) must be controllable.
(c) must be sustainable as the vehicle moves.
(d) must be effective enough to propel the vehicle.
(e) must satisfy conservation of momentum.
(f) must satisfy conservation of energy.

(2.1) If properties of matter or energy are used for the propulsive effect, this matter or energy…
(a) must have properties that enable conservation of momentum in the propulsive process.
(b) must exist in a form that can be controllably collected, carried, and positioned on the vehicle, or be controllably created on the vehicle.
(c) must exist in sufficiently high quantities to create a sufficient propulsive effect.

(2.2) If properties of space are used for the propulsive effect, these properties…
(a) must provide an equivalent reaction mass to conserve momentum.
(b) must be tangible; must be able to be detected and interacted with.
(c) must exist across all space and in all directions.
(d) must have a sufficiently high equivalent mass density within the span of the vehicle to be used as a propulsive reaction mass.
(e) must have characteristics that enable the propulsive effect to be sustained once the vehicle is in motion.
(3) The physics proposed for the propulsive mechanism and for the properties of space, matter, or energy used for the propulsive effect must be completely consistent with empirical observations.

Now it depends on us which kind of propulsion technology will prove indispensable for future needs.
[Ref: Challenge to Create the Space Drive by Millis M.]

Promising Applications of Carbon Nanotubes (CNTs)

Scientific research on carbon nanotubes has witnessed a large expansion. The fact that CNTs can be produced by relatively simple synthesis processes, together with their record-breaking properties, has led to demonstrations of many different types of applications, ranging from fast field-effect transistors, flat screens and transparent electrodes to electrodes for rechargeable batteries, conducting polymer composites, bulletproof textiles and transparent loudspeakers.

As a result we have seen enormous progress in controlling the growth of CNTs. CNTs with controlled diameter can be grown along a given direction, either parallel or perpendicular to the surface. In recent years we have seen narrow-diameter CNTs with two or more walls grown at high yield.

Amid this progress one might forget about the remaining tantalising challenges. Samples of CNTs still contain a large amount of disordered forms of carbon, and catalytic metal particles or parts of the growth support still make up a large fraction of the carbon nanotube mass. As-produced CNTs continue to contain a relatively wide dispersion of diameters and lengths. Dispersing CNTs and controlling their distribution in a matrix or on a given surface is still a challenge. There has been enormous progress in size-selecting CNTs; however, the commonly applied techniques limit applications due to the presence of surfactant molecules, or cannot be applied to larger volumes.

Analytical techniques play an important role when developing new synthesis, purification and separation processes. Screening of carbon nanotubes is essential for any real-world application, but it is also essential for their fundamental understanding, such as understanding the effect of tube bundling, doping and the role of defects.

At the 'Centre for materials elaboration and structural studies', Professor Wolfgang Bacsa and Pascal Puech have focused on screening CNTs with optical methods and on developing physical processes for carbon nanotubes, working closely with materials chemists at different local institutions. They have concentrated in particular on double-wall carbon nanotubes grown by the catalytic chemical vapour deposition technique.

Their small diameter, high electrical conductivity and their large length as well as the fact that the inner wall is protected from the environment by the outer wall, are all good attributes for incorporating them in polymer composites. Depending on the synthesis process used we find the two walls are at times strongly or weakly coupled.

By studying their Raman spectra at high pressure, in acids, strongly photo-excited, or on individual tubes, we can observe the effect on the internal and external walls. A good knowledge of the Raman spectra of double-wall CNTs gives us the opportunity to map the Raman signal of ultra-thin slices of composites and determine the distribution, agglomeration state and interaction with the matrix.

[Figure: TEM images (V. Tishkova, CEMES-CNRS) of double-wall (a) and industrial multiwall (b) carbon nanotubes, and the Raman G band of double-wall CNTs at high pressure in different pressure media, revealing molecular nanoscale pressure effects.]

Working on individual CNTs in collaboration with Anna Swan of Boston University, gave us the opportunity to work on precisely positioned individual suspended CNTs. An individual CNT is an ideal point source and this can be used to map out the focal spot and to learn about the fundamental limitations of high resolution grating spectrometers.

The field of carbon nanotube research has grown enormously during the last decade, making it difficult to follow all the new results in this field. It is quite clear that for applications where macroscopic amounts of CNTs are needed, standardisation of measurement protocols and classification of CNT samples, combined with new processing techniques to deal with large CNT volumes, will be needed. Applications where only minute quantities on a surface are used suffer from the fact that parallel processing is still limited. This shows that further progress in growing CNTs on surfaces is still needed, although there has been a recent breakthrough in growing CNTs in a parallel fashion and with preferentially semiconducting or metallic tubes.

[SOURCE: azonano]

Chronology Protection Conjecture: Where are They?

"Where are they?" It becomes necessary to ponder about time travellers if you are going to discuss time travel. Well, if time travel is possible, where are the time travellers from the future? What if a time machine were handed over to you? Where would you like to go first in the time dimension? Of course, your answer would be 'the past'!! As you know, quantum physics allows time travel under given circumstances. [See, Time Travel]

I have already reviewed a lot of research papers in regard to time travel. Consider: if our civilization survives for the next six centuries and more, we would have reached the stage of post-singularity and be tending towards a Type II civilization. Since every time-travelling trick requires highly risky and pesky technologies like traversable wormholes, Tipler cylinders, closed time-like curves and the macroscopic Casimir effect (filled with high negative energy densities), it is fairly reasonable to assume that building a time machine is just an engineering problem. I bet if we have ensured our survival for the next ten centuries, we will have all such technologies which are today viewed as the challenge. So I'll assume that our future descendants have technology which would allow them to travel back and forth in time. Now, an obvious question which knocks at our brain is: "where are they?"
Well, an explanation can be offered in three ways:
1. Chronology Protection Conjecture (CPC): This was first proposed by Prof. Hawking in 1992. CPC is nothing but a metaphorical device which prevents macroscopic time travel. The idea of chronology protection gained considerable currency during the 1990s, when it became clear that traversable wormholes, which are not too objectionable in their own right, seem almost generically to lead to the creation of time machines. Why is CPC a key issue? In Newtonian physics, and even in special relativity or flat-space quantum field theory, notions of chronology and causality are so fundamental that they are simply built into the theory ab initio. Violation of normal chronology (for instance, an effect preceding its cause) is so objectionable an occurrence that any such theory would immediately be rejected as unphysical. Unfortunately, in general relativity one can't simply assert that chronology is preserved, and causality respected, without doing considerable additional work. The essence of the problem lies in the fact that the Einstein equations of general relativity are local equations, relating some aspects of the spacetime curvature at a point to the presence of stress-energy at that point. Additionally, one also has local chronology protection, inherited from the fact that spacetime is locally Minkowskian (the Einstein Equivalence Principle), and so in the small, general relativity respects all of the causality constraints of special relativity. There is a quantum obstacle known as the Cauchy horizon problem, which prevents the formation of closed time-like curves, but that could easily be overrun by the Casimir effect (an exception to the Cauchy horizon problem). One side of the horizon contains closed space-like geodesics and the other side contains closed time-like geodesics. Waves travelling in Misner space are blueshifted each time they pass through the identification world line; as this happens infinitely many times while approaching the horizon, the stress-energy tensor diverges at the horizon.
Presumably, this prevents spacetime from developing the closed time-like curves that would otherwise be feasible. Thus, chronology is protected.
2. Multiverse Theory: This could also be proposed as a possible strategy to resolve the paradox. There are infinite universes containing every single possibility. So it is possible that time travellers from the future might indeed be lurking in the past, enjoying dinosaur riding, but in a different parallel universe. Thus, the paradox is resolved.

3. No Time Machine Can Be Engineered: Probably this is the simplest solution to the paradox. The trouble with time travel is that it always involves something that is way beyond our technology, or even our theoretical ideas. General requirements of a time machine are Tipler cylinders, black holes, warp drives, wormholes, Kerr-Newman black holes, BPS black holes etc., which don't seem feasible by any means of technology. Perhaps that is why there are no time travellers intruding into the past to grab better resources, appalling Homo sapiens.

Hyperluminal Travel Without Exotic Matter

Listen, terrestrial intelligent species: WeirdSciences is going to delve into a new idea for making interstellar travel feasible, and this time no negative energy is to be used to propel the spacecraft. Though implementing negative energy to make a warp drive is not that bad, you need to refresh your mind.

By Eric Baird

Alcubierre’s 1994 paper on hyperfast travel has generated fresh interest in the subject of warp drives but work on the subject of hyper-fast travel is often hampered by confusion over definitions — how do we define times and speeds over extended regions of spacetime where the geometry is not Euclidean? Faced with this problem it may seem natural to define a spaceship’s travel times according to round-trip observations made from the traveller’s point of origin, but this “round-trip” approach normally requires signals to travel in two opposing spatial directions through the same metric, and only gives us an unambiguous reading of apparent shortened journey-times if the signal speeds are enhanced in both directions along the signal path, a condition that seems to require a negative energy density in the region. Since hyper-fast travel only requires that the speed of light-signals be enhanced in the actual direction of travel, we argue that the precondition of bidirectionality (inherited from special relativity, and the apparent source of the negative energy requirement), is unnecessary, and perhaps misleading.

When considering warp-drive problems, it is useful to remind ourselves of what it is that we are trying to accomplish. To achieve hyper-fast package delivery between two physical markers, A (the point of origin)  and B (the destination), we require that a package moved from A to B:

a) . . . leaves A at an agreed time according to clocks at A,
b) . . . arrives at B as early as possible according to clocks at B, and, ideally,
c) . . . measures its own journey time to be as short as possible.

From a purely practical standpoint as "Superluminal Couriers Inc.", we do not care how long the arrival event takes to be seen back at A, nor do we care whether the clocks at A and B appear to be properly synchronised during the delivery process. Our only task is to take a payload from A at a specified "A time" and deliver it to B at the earliest possible "B time", preferably without the package ageing excessively en route. If we can collect the necessary local time-stamps on our delivery docket at the various stages of the journey, we have achieved our objective and can expect payment from our customer.

Existing approaches tend to add a fourth condition:

d) . . . that the arrival-event at B is seen to occur as soon as possible by an observer back at A.

This last condition is much more difficult to meet, but is arguably more important to our ability to define distant time-intervals than to the actual physical delivery process itself. It does not dictate which events may be intersected by the worldline of the travelling object, but can affect the coordinate labels that we choose to assign to those events using special relativity.

  • Who Asked Your Opinion?

If we introduce an appropriate anisotropy in the speed of light to the region occupied by our delivery path, a package can travel to its destination along the path faster than "nominal background lightspeed" without exceeding the local speed of light along the path. This allows us to meet conditions (a) and (b), but the same anisotropy causes an increased transit time for signals returning from B to A, so the "fast" outward journey can appear to take longer when viewed from A.
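The arithmetic behind this asymmetry is easy to lay out. A minimal sketch with made-up numbers, assuming a 1 light-year path where the local lightspeed is doubled toward B and correspondingly halved for signals heading back toward A:

```python
# Illustrative numbers only: a 1 light-year delivery path with an assumed
# lightspeed anisotropy (speeds in units of the background lightspeed c).
DISTANCE_LY = 1.0
OUTWARD_SPEED = 2.0   # enhanced local lightspeed toward B
RETURN_SPEED = 0.5    # reduced signal speed for the view back to A

outward_years = DISTANCE_LY / OUTWARD_SPEED       # 0.5 yr: hyperfast delivery
signal_back_years = DISTANCE_LY / RETURN_SPEED    # 2.0 yr for news to reach A
seen_from_A = outward_years + signal_back_years   # A sees arrival at 2.5 yr

# With ordinary isotropic c, A would see the light-speed arrival at 2.0 yr.
isotropic_view = DISTANCE_LY / 1.0 + DISTANCE_LY / 1.0
```

The package beats background light over the path (0.5 yr versus 1 yr), yet A *sees* the arrival later than usual (2.5 yr versus 2.0 yr): conditions (a) and (b) are met while condition (d) fails, which is exactly the point being argued.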

This behaviour can be illustrated by the extreme example of a package being delivered to the interior of a black hole from a "normal" region of spacetime. When an object falls through a gravitational event horizon, current theory allows its supposed inward velocity to exceed the nominal speed of light in the external environment, and to actually tend towards v_inwards = ∞ as the object approaches a black hole's central singularity. But the exterior observer, A, could argue that the delivery is not only proceeding more slowly than usual, but that the apparent delivery time is actually infinite, since the package is never actually seen (by A) to pass through the horizon.

Should A's low perception of the speed of the infalling object indicate that hyperfast travel has not been achieved? In the author's opinion, it should not: if the package has successfully been collected from A and delivered to B with the appropriate local timestamps indicating hyperfast travel, then A's subsequent opinion on how long the delivery is seen to take (an observation affected by the properties of light in a direction other than that of the travelling package) would seem to be of only secondary importance. In our "black hole" example, exotic matter or negative energy densities are not required unless we demand that an external observer should be able to see the package proceeding superluminally, in which case a negative gravitational effect would allow signals to pass back outwards through the r=2M surface to the observer at the despatch-point (without this return path, special relativity will tend to define the time of the unseen delivery event as being more-than-infinitely far into A's future).

Negative energy density is required here only for appearances' sake (and to make it easier for us to define the range of possible arrival-events that would imply that hyperfast travel has occurred), not for physical package delivery.

  • Hyperfast Return Journeys and Verification

It is all very well to be able to reach our destination in a small local time period, and to claim that we have travelled there at hyperfast speeds, but how do we convince others that our own short transit-time measurements are not simply due to time-dilation effects or to an "incorrect" concept of simultaneity? To convince observers at our destination, we only have to ask that they study their records for the observed behaviour of our point of origin: if the warpfield is applied to the entire journey-path ("Krasnikov tube configuration"), then the introduction and removal of the field will be associated with an increase and decrease in the rate at which signals from A arrive at B along the path (and will force special relativity to redefine the supposed simultaneity of events at B and A). If the warpfield only applies in the vicinity of the travelling package, other odd effects will be seen when the leading edge of the warpfield reaches the observer at B (the logical problems associated with the conflicting "lightspeeds" at the leading edge of a travelling warpfield wavefront have been highlighted by Low, and will be discussed in a further paper). Our initial definitions of the distances involved should of course be based on measurements taken outside the region of spacetime occupied by the warpfield.

A more convincing way of demonstrating hyper-fast travel would be to send a package from A to B and back again in a shorter period of “A-time” than would normally be required for a round-trip light-beam. We must be careful here not to let our initial mathematical definitions get in the way of our task — although we have supposed that the speed of light towards B was slower while our warpdrive was operating on the outward journey, this artificially-reduced return speed does not have to also apply during our subsequent return trip, since we have the option of simply switching the warpdrive off, or better still, reversing its polarity for the journey home.

Although a single path allowing enhanced signal speeds in both directions at the same time would seem to require a negative energy-density, this feature is not necessary for a hyper-fast round trip — the outward and return paths can be separated in time (with the region having different gravitational properties during the outward and return trips) or in space (with different routes being taken for the outward and return journeys).

  • Caveats and Qualifications

Special relativity is designed around the assumption of Euclidean space and the stipulation that lightspeed is assumed to be isotropic, and neither of these assumptions is reliable for regions of spacetime that contain gravitational fields.

If we have a genuine lightspeed anisotropy that allows an object to move hyper-quickly between A and B, special relativity can respond by using the round-trip characteristics of light along the transit path to redefine the simultaneity of events at both locations, so that the "early" arrival event at B is redefined far enough into A's future to guarantee a description in which the object is moving at less than c_background.
This retrospective redefinition of times easily leads to definitional inconsistencies in warpdrive problems. If a package is sent from A to B and back to A again, and each journey is "ultrafast" thanks to a convenient gravitational gradient for each trip, one could invoke special relativity to declare that each individual trip has a speed less than c_background, and then take the ultrafast arrival time of the package back at A as evidence that some form of reverse time travel has occurred, when in fact the apparent negative time component is an artifact of our repeated redefinition of the simultaneity of worldlines at A and B. Since it has been known for some time that similar definitional breakdowns in distant simultaneity can occur when an observer simply alters speed (the "one gee times one lightyear" limit quoted in MTW), these breakdowns should not be taken too seriously when they reappear in more complex "warpdrive" problems.
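The "one gee times one lightyear" limit mentioned here can be checked with a one-line calculation: for a uniformly accelerating observer, events further than roughly c²/a behind them fall beyond their effective horizon, and for one Earth gravity that distance works out to almost exactly one light-year:

```python
# Sanity check on the "one gee times one lightyear" limit: the horizon
# distance c^2 / a for a uniformly accelerating observer, with a = 1 g.
C = 2.99792458e8               # speed of light, m/s
G_EARTH = 9.80665              # standard gravity, m/s^2
METRES_PER_LIGHTYEAR = 9.4607e15

rindler_distance_ly = C**2 / G_EARTH / METRES_PER_LIGHTYEAR   # ~0.97 ly
```

The near-coincidence of c²/g with one light-year is why the limit is quoted in that memorable form.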

Olum's suggested method for defining simultaneity and hyperfast travel (calibration via signals sent through neighbouring regions of effectively-flat spacetime) is not easily applied to our earlier black hole example, because of the lack of a reference-path that bypasses the gravitational gradient (unless we take a reference-path previous to the formation of the black hole), but warpdrive scenarios tend instead to involve higher-order gravitational effects (e.g. gradients caused by so-called "non-Newtonian" forces), and in these situations the concept of "relative height" in a gravitational field is often route-dependent (the concept "downhill" becomes a "local" rather than a "global" property, and gravitational rankings become intransitive). For this class of problem, Olum's approach would seem to be the preferred method.

  • What’s the conclusion?

In order to be able to cross interstellar distances at enhanced speeds, we only require that the speed of light is greater in the direction in which we want to travel, in the region that we are travelling through, at the particular time that we are travelling through it. Although negative energy-densities would seem to be needed to increase the speed of light in both directions along the same path at the same time, this additional condition is only required for any hyperfast travel to be “obvious” to an observer at the origin point, which is a stronger condition than merely requiring that packages be delivered arbitrarily quickly. Hyperfast return journeys would also seem to be legal (along a pair of spatially separated or time-separated paths), as long as the associated energy-requirement is “paid for” somehow. Breakdowns in transitive logic and in the definitions used by special relativity already occur with some existing “legal” gravitational situations, and their reappearance in warpdrive problems is not in itself proof that these problems are paradoxical.

Arguments against negative energy-densities do not rule out paths that allow gravity-assisted travel at speeds greater than c_background, provided that we are careful not to apply the conventions of special relativity inappropriately. Such paths do occur readily under general relativity, although it has to be admitted that some of the more extreme examples have a tendency to lead to unpleasant regions (such as the interiors of black holes) that one would not normally want to visit.

[Ref: Miguel Alcubierre, "The warp drive: hyper-fast travel within general relativity," Class. Quantum Grav. 11, L73-L77 (1994); Michael Szpir, "Spacetime hypersurfing," American Scientist 82, 422-423 (Sept/Oct 1994); Robert L. Forward, "Guidelines to Antigravity," American Journal of Physics 31 (3), 166-170 (1963).]

Video: The God Particle

The Higgs boson is a very interesting part of particle physics. It is a weakly interacting particle, somewhat like the particles proposed for dark matter. Watch the video; it should enlighten you to some extent.

Enjoy!! Today I’ll publish an article about interstellar travel, and how it might be made feasible using existing technology.

Video: The Time Traveller, Real Time Machine and True Story of Philadelphia Experiment

Ever been fascinated by time travel? If not, you will be now! Here is a man who claims that he is a time traveller and that he has evidence of time travel. Watch this video:

Well, I don’t find anything credible enough to prove that he is a time traveller.
Now back to reality. We may well have a quantum time machine in the near future. Warping the space-time fabric using a criss-cross of high-intensity laser beams could make it possible to teleport an elementary particle into the past. Watch it.

You probably know about the secret Philadelphia Experiment, said to have been conducted in 1943 on a ship named the U.S.S. Eldridge. It is said that the ship was teleported using Tesla coils, which generated a very strong electromagnetic field and warped the space-time fabric, a special manifestation of relativity theory as yet unknown. See the reality!…?

New Silicon Nanowires Could Make Photovoltaic Devices More Efficient

The future energy crisis is not a single problem; various challenges lie ahead that have not yet been solved. This article will not cover all of those challenges, but it suggests one solution to the energy crisis based on recent research. Much of our energy problem would be solved if we could make solar cells more efficient. What we ultimately need as an energy source is electricity, no matter how we get it: from nuclear resources, water, or solar cells. Solar cells are a potential source of clean, renewable energy, and photovoltaic (PV) devices could be excellent as a future energy source.

Although early photovoltaic (PV) cells and modules were used in space and other off-grid applications where their value is high, currently about 70% of PV is grid-connected, which exposes it to major cost pressure from conventional sources of electricity. Yet the potential benefits of its large-scale use are enormous, and PV now appears to be meeting the challenge, with annual growth rates above 30% for the past five years.

More than 90% of PV is currently made of Si modules assembled from small 4–12 inch crystalline or multicrystalline wafers which, like most electronics, can be individually tested before assembly into modules. However, the newer thin-film technologies are monolithically integrated devices approximately 1 m² in size which cannot have even occasional shunts or weak diodes without ruining the manufacturing yield. Thus, these devices require the deposition of many thin semiconducting layers on glass, stainless steel or polymer, and all layers must function well a square meter at a time or the device fails. This is the challenge of PV technology: high efficiency, high uniformity, and high yield over large areas to form devices that can operate through repeated temperature cycles from −40 °C to 100 °C with a provable twenty-year lifetime and a cost of less than a penny per square centimeter.

[Image Details: Typical construction of solar cell]

Solar cells work and they last. The first cell made at Bell Labs in 1954 is still functioning. Solar cells continue to play a major role in the success of space exploration—witness the Mars rovers. Today’s commercial solar panels, whether of crystalline Si, thin amorphous, or polycrystalline films, are typically guaranteed for 20 years—unheard-of reliability for a consumer product. However, PV still accounts for less than 10⁻⁵ of total energy usage world-wide. In the US, electricity produced by PV costs upwards of $0.25/kW-hr whereas the cost of electricity production by coal is less than $0.04/kW-hr.

It seems fair to ask what limits the performance of solar modules, and whether there is hope of ever reaching cost-competitive generation of PV electricity.

The photogeneration of a tiny amount of current was first observed by Adams and Day in 1877 in selenium. However, it was not until 1954 that Chapin, Fuller and Pearson at Bell Labs obtained significant power generation from a Si cell. Their first cells used a thick lithium-doped n-layer on p-Si, but efficiencies rose well above 5% with a very thin phosphorous-doped n-Si layer at the top.

The traditional Si solar cell is a homojunction device. The first image sketches the typical construction of the semiconductor part of a Si cell. It might have a p-type base with an acceptor (typically boron or aluminum) doping level of N_A = 1×10¹⁵ cm⁻³ and a diffused n-type window/emitter layer with N_D = 1×10²⁰ cm⁻³ (typically phosphorus). The Fermi level of the n-type (p-type) side will be near the conduction (valence) band edge, so donor-released electrons will diffuse into the p-type side to occupy lower energy states there, until the exposed space charge (ionized donors in the n-type region, and ionized acceptors in the p-type) produces a field large enough to prevent further diffusion. Often a very heavily doped region is used at the back contact to produce a back surface field (BSF) that aids in hole collection and rejects electrons.
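To make the doping numbers concrete, here is a minimal sketch (not from the original article) of the built-in potential these doping levels imply. The thermal voltage and the intrinsic carrier density of silicon at 300 K are assumed typical values, not figures from the text:

```python
import math

# Assumed room-temperature values for silicon (not from the article text).
kT_over_q = 0.02585      # thermal voltage at 300 K, in volts
n_i = 1.0e10             # intrinsic carrier density of Si at 300 K, cm^-3
N_A = 1.0e15             # acceptor doping of the p-type base, cm^-3
N_D = 1.0e20             # donor doping of the n-type emitter, cm^-3

# Built-in potential of the junction: V_bi = (kT/q) * ln(N_A * N_D / n_i^2)
V_bi = kT_over_q * math.log(N_A * N_D / n_i**2)
print(f"built-in potential: {V_bi:.2f} V")   # roughly 0.9 V
```

This is the potential barrier that stops further diffusion of carriers across the junction.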

In the ideal case of constant doping densities in the two regions, the depletion width, W, is readily calculated from the Poisson equation, and lies mostly in the lightly doped p-type region. The electric field has its maximum at the metallurgical interface between the n- and p-type regions and typically reaches 10⁴ V/cm or more. Such fields are extremely effective at separating the photogenerated electron-hole pairs. Silicon with its indirect gap has relatively weak light absorption, requiring about 10–20 μm of material to absorb most of the above-band-gap, near-infrared and red light. (Direct-gap materials such as GaAs, CdTe, Cu(InGa)Se2 and even a-Si:H need only ~0.5 μm or less for full optical absorption.) The weak absorption in crystalline Si means that a significant fraction of the above-band-gap photons will generate carriers in the neutral region, where the minority carrier lifetime must be very long to allow for long diffusion lengths. By contrast, carrier generation in the direct-gap semiconductors can be entirely in the depletion region, where collection is through field-assisted drift.
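The quoted field of 10⁴ V/cm can be checked with a one-sided abrupt-junction estimate; this sketch assumes the doping above, a typical built-in potential of about 0.9 V, and the standard permittivity of silicon (all assumed values, not taken from the article):

```python
import math

# One-sided abrupt-junction estimate: with N_D >> N_A the depletion region
# lies almost entirely in the lightly doped p-type side.
q = 1.602e-19              # elementary charge, C
eps_si = 11.7 * 8.854e-14  # permittivity of silicon, F/cm
N_A = 1.0e15               # p-side doping, cm^-3
V_bi = 0.89                # assumed built-in potential, V

W = math.sqrt(2 * eps_si * V_bi / (q * N_A))  # depletion width, cm
E_max = 2 * V_bi / W                          # peak field at the junction, V/cm
print(f"W = {W*1e4:.2f} um, E_max = {E_max:.2e} V/cm")
```

The result is a depletion width of about a micrometre and a peak field somewhat above 10⁴ V/cm, consistent with the figure quoted above.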

A high quality Si cell might have a hole lifetime in the heavily doped n-type region of τ_p = 1 μs and a corresponding diffusion length of L_p = 12 μm, whereas in the more lightly doped p-type region the minority carriers might have τ_n = 350 μs and L_n = 1100 μm. Often few carriers are collected from the heavily doped n-type region, so strongly absorbed blue light does not generate much photocurrent. Usually the n-type emitter layer is therefore kept very thin. The long diffusion length of electrons in the p-type region is a consequence of the long electron lifetime due to low doping and of the higher mobility of electrons compared with holes. This is typical of most semiconductors, so the most common solar cell configuration is an “n-on-p” with the p-type semiconductor serving as the “absorber.”
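As a sanity check on the quoted numbers, the Einstein relation D = μkT/q links lifetime to diffusion length via L = √(Dτ). Assuming a typical electron mobility of 1350 cm²/(V·s) for lightly doped silicon (an assumed value, not from the text) recovers a length very close to the 1100 μm quoted:

```python
import math

# Assumed electron mobility for lightly doped Si at room temperature.
mu_n = 1350.0            # cm^2/(V*s)
kT_over_q = 0.02585      # thermal voltage at 300 K, V
tau_n = 350e-6           # electron lifetime in the p-type base, s (from text)

D_n = mu_n * kT_over_q           # Einstein relation: diffusion coefficient, cm^2/s
L_n = math.sqrt(D_n * tau_n)     # diffusion length, cm
print(f"L_n = {L_n*1e4:.0f} um") # close to the 1100 um quoted above
```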

[Image Details: Triple junction cell]

Current generated by an ideal, single-junction solar cell is the integral of the solar spectrum above the semiconductor band gap and the voltage is approximately 2/3 of the band gap. Note that any excess photon energy above the band edge will be lost as the kinetic energy of the electron and hole pair relaxes to thermal motion in picoseconds and simply heats the cell. Thus single-junction solar cell efficiency is limited to about 30% for band gaps near 1.5 eV. Si cells have reached 24%.

This limit can be exceeded by multijunction devices, since the high photon energies are absorbed in wider-band-gap component cells. In fact the III-V materials, benefiting like Si from research for electronics, have been very successful at achieving very high efficiency, reaching 35% with the monolithically stacked three-junction, two-terminal structure GaInP/GaAs/Ge. These tandem devices must be critically engineered to have exactly matched current generation in each of the junctions and therefore are sensitive to changes in the solar spectrum during the day. A sketch of a triple-junction, two-terminal amorphous silicon cell is shown in the second image.

Polycrystalline and amorphous thin-film cells use inexpensive glass, metal foil or polymer substrates to reduce cost. The polycrystalline thin film structures utilize direct-gap semiconductors for high absorption while amorphous Si capitalizes on the disorder to enhance absorption and hydrogen to passivate dangling bonds. It is quite amazing that these very defective thin-film materials can still yield high carrier collection efficiencies. Partly this comes from the field-assisted collection and partly from clever passivation of defects and manipulation of grain boundaries. In some cases we are just lucky that nature provides benign or even helpful grain boundaries in materials such as CdTe and Cu(InGa)Se2, although we seem not so fortunate with GaAs grain boundaries. It is now commonly accepted that, not only are grain boundaries effectively passivated during the thin-film growth process or by post-growth treatments, but also that grain boundaries can actually serve as collection channels for carriers. In fact it is not uncommon to find that the polycrystalline thin-film devices outperform their single-crystal counterpart.

In the past two decades there has been remarkable progress in the performance of small, laboratory cells. The common Si device has benefited from light trapping techniques, back-surface fields, and innovative contact designs; III-V multijunctions from high quality epitaxial techniques; a-Si:H from thin, multiple-junction designs that minimize the effects of dangling bonds (Staebler–Wronski effect); and polycrystalline thin films from innovations in low-cost growth methods and post-growth treatments.

However, these types of cells cannot be used for commercial electricity production, since such III-V heterostructures are not cost-efficient. Now, as the research paper proposes, silicon nanowires could be a direct replacement for these exotic structures, and by exploiting the same confinement principles (as in quantum dots) they may prove cost-efficient as well. Nanowires are direct-band-gap materials, and the band gap can be tuned anywhere between about 2 and 5 eV, providing wavelength selectivity between roughly 250 and 620 nm (E = hc/λ).

The research paper considers nanowires oriented along the [100] direction, in a square array, with the equilibrium lattice relaxed to a unit cell with a Si–Si bond spacing of 2.324 Å and a Si–H spacing of 1.5 Å. The nanowires are direct-band-gap semiconductors, which makes them excellent for optical absorption. All the nanowires examined in the paper show features in the absorption spectrum that correspond to excitonic processes. The lowest excitonic peaks for the nanowires occur at 5.25 eV (232 nm), 3.7 eV (335 nm), and 2.3 eV (539 nm), in increasing order of size. The corresponding optical wavelengths are shown in the table adapted from the paper. Absorption is tunable from the visible region to the near-UV portion of the solar spectrum.
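The energy-to-wavelength conversions follow from E = hc/λ; a small helper function (hypothetical, written for illustration rather than taken from the paper) reproduces the 335 nm and 539 nm figures:

```python
# h*c expressed in eV*nm, so that lambda [nm] = 1239.84 / E [eV].
HC_EV_NM = 1239.84

def ev_to_nm(energy_ev: float) -> float:
    """Convert a photon energy in eV to a vacuum wavelength in nm."""
    return HC_EV_NM / energy_ev

for e in (5.25, 3.7, 2.3):
    print(f"{e} eV -> {ev_to_nm(e):.0f} nm")
```

The 3.7 eV and 2.3 eV peaks convert to 335 nm and 539 nm as quoted.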


No doubt, such silicon nanowires are excellent and could significantly reduce solar cell costs. See the research analysis.

These laboratory successes are now being translated into success in the manufacturing of large PV panel sizes in large quantities and with high yield. The worldwide PV market has been growing at 30% to 40% annually for the past five years due partly to market incentives in Japan and Germany, and more recently in some U.S. states.

Recent analyses of the energy payback time for solar systems show that today’s systems pay back the energy used in manufacturing in about 3.5 years for silicon and 2.5 years for thin films.

The decline of manufacturing costs nicely follows an 80% experience curve: costs drop 20% for each doubling of cumulative production. The newly updated PV Roadmap envisions PV electricity costing $0.06/kW-hr by 2015 and half of new U.S. electricity generation being produced by PV by 2025, if some modest and temporary nationwide incentives are enacted. Given the rate of progress in the laboratory and innovations on the production line, this ambitious goal might just be achievable. PV can then begin to play a significant role in reducing greenhouse gas emissions and in improving energy security. Some analysts see PV as the only energy resource that has the potential to supply enough clean power for a sustainable energy future for the world.
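The 80% experience curve can be sketched in a few lines; the production figures in the example below are purely hypothetical:

```python
import math

def cost_after(cumulative: float, baseline: float, cost0: float,
               learning: float = 0.8) -> float:
    """Unit cost after cumulative production grows from `baseline`,
    on an experience curve where each doubling multiplies cost by `learning`."""
    doublings = math.log2(cumulative / baseline)
    return cost0 * learning ** doublings

# e.g. five doublings (32x growth in cumulative production) cut costs
# to 0.8^5, roughly a third of the starting cost:
print(cost_after(32.0, 1.0, 1.0))
```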

[Ref: APS; “Optical Absorption Characteristics of Silicon Nanowires for Photovoltaic Applications” by Vidur Prakash]

Video: Extra Dimensions and the Origin of Mass

Today I’m not going to publish a long and detailed article, for a simple reason: I’m unwell. In the meantime, you can enjoy a video that sheds light on extra dimensions and the origin of mass.

And if this is not enough for you, you can try the sidebar and archives, or select your favourite category.
