Earth-like Planets Are Common in the Universe!!

Planet Earth is not so special after all: roughly one in four Sun-like stars hosts an Earth-like planet, according to a five-year astronomy study. The study, published in the journal Science, used Hawaii’s twin 10-metre Keck telescopes to scan 166 Sun-like stars within 80 light years, or about 757 trillion kilometres. The team spotted 33 planets around 22 of these stars by the gravitational tug the planets exert on their host stars – the Doppler, or radial velocity, method.
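
For a sense of why the radial velocity method struggles with small planets, here is a rough back-of-envelope sketch (my own illustration, not part of the study) of the stellar wobble a planet induces on a Sun-like star, using the standard semi-amplitude formula; the planet masses and orbital distances below are assumed examples.

import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
M_EARTH = 5.972e24     # kg
AU = 1.496e11          # m

def rv_semi_amplitude(m_planet, m_star, a, e=0.0, sin_i=1.0):
    """Radial-velocity semi-amplitude K in m/s for a planet of mass m_planet (kg)
    orbiting a star of mass m_star (kg) at semi-major axis a (m)."""
    period = 2 * math.pi * math.sqrt(a**3 / (G * (m_star + m_planet)))
    return ((2 * math.pi * G / period) ** (1.0 / 3.0)
            * m_planet * sin_i
            / ((m_star + m_planet) ** (2.0 / 3.0) * math.sqrt(1.0 - e**2)))

# A 5 Earth-mass super-Earth at 0.1 AU around a Sun-like star
print(f"super-Earth at 0.1 AU: K ~ {rv_semi_amplitude(5 * M_EARTH, M_SUN, 0.1 * AU):.2f} m/s")
# A true Earth analogue at 1 AU
print(f"Earth analogue at 1 AU: K ~ {rv_semi_amplitude(M_EARTH, M_SUN, 1.0 * AU):.3f} m/s")

The first case comes out at roughly 1 to 2 m/s, within reach of today's Doppler surveys, while an Earth twin at 1 AU gives only about 0.09 m/s, which is why close-in super-Earths can be counted now and true Earth analogues must be extrapolated.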

23 Earths for every 100 Suns

Of about 100 typical Sun-like stars, one or two have planets the size of Jupiter, roughly six have a planet the size of Neptune, and about 12 have super-Earths of between three and 10 Earth masses. How tough the search for habitable worlds would be wasn’t at all clear when NASA gave the Kepler team the go-ahead almost 10 years ago; only huge, scorching-hot exoplanets larger than Jupiter had been found by then. Limitations in the technique mean the survey could only pick up planets of up to three times Jupiter’s mass orbiting within one quarter of the distance from the Earth to the Sun (1 AU, or almost 150 million kilometres); smaller, truly Earth-like planets, even that close in, remain out of reach.

Image: The data, depicted here in this illustrated bar chart, show a clear trend: small planets outnumber larger ones. Astronomers extrapolated from these data to estimate the frequency of Earth-size planets — nearly one in four Sun-like stars, or 23 percent, are thought to host Earth-size planets orbiting close in. Each bar on the chart represents a different group of planets, divided according to their masses. In each of the three highest-mass groups, with masses comparable to Saturn and Jupiter, the frequency of planets around Sun-like stars was found to be 1.6 percent. For intermediate-mass planets, with 10 to 30 times the mass of Earth, or roughly the size of Neptune and Uranus, the frequency is 6.5 percent. And the super-Earths, weighing in at only three to 10 times the mass of Earth, had a frequency of 11.8 percent. Credit: NASA/JPL-Caltech/UC Berkeley [via Centauri Dreams]
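
As a rough illustration of how such an extrapolation works (my own sketch, not the authors' actual analysis), one can fit a power law to the occurrence rates quoted in the caption and extend it down to Earth-mass planets; the bin-centre masses below are my assumptions.

import math

# Occurrence rates from the bar chart above:
# (approximate geometric bin-centre mass in Earth masses, fraction of Sun-like stars)
bins = [
    (5.5,   0.118),   # super-Earths, 3-10 Earth masses
    (17.0,  0.065),   # Neptune/Uranus class, 10-30 Earth masses
    (300.0, 0.016),   # one of the Saturn/Jupiter-class groups
]

# Least-squares fit of log(frequency) = a + b*log(mass), i.e. frequency ~ mass^b
xs = [math.log(m) for m, _ in bins]
ys = [math.log(f) for _, f in bins]
n = len(bins)
b = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / \
    (n * sum(x * x for x in xs) - sum(xs) ** 2)
a = (sum(ys) - b * sum(xs)) / n

# Extrapolate to a bin centred on one Earth mass
f_earth = math.exp(a + b * math.log(1.0))
print(f"power-law slope b ~ {b:.2f}")
print(f"extrapolated Earth-mass occurrence ~ {f_earth * 100:.0f}%")

With these assumed bin centres the fit gives a slope of roughly -0.5 and an Earth-mass occurrence in the 25 to 30 percent range, the same ballpark as the 23 percent quoted above.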

One of astronomy’s goals is to find eta-Earth (η-Earth), the fraction of Sun-like stars that have an Earth. This is a first estimate, and the real number could be one in eight instead of one in four. But it’s not one in 100, which is glorious news. What this means is that, as NASA develops new techniques over the next decade to find truly Earth-size planets, it won’t have to look too far. Greenhill’s expertise is in the ‘microlensing technique’, which looks at the bending of light from a source star by an intervening planet-star system and is particularly suited to finding small-mass planets. He estimates that between 32% and 100% of stars have planets two to 10 times the size of Earth in orbits ranging from 1–10 AU. That’s one heck of a lot of Earth-mass planets. I don’t care how small the probability of life is; some of them are bound to have water on them and will probably have life there.

Remark: It simply suggests that there could be far more Earth-like planets than our probabilistic estimates allow. In a previous article, I presented detailed information about temporal temperature zones and wind maps, which supports my earlier speculation that the planet is habitable. In that article you can see that the wind-flow maps of Gliese 581g are similar to wind maps of planet Earth at various locations. All of these calculations and pieces of evidence undercut the idea that the planet is uninhabitable. Now this study presents even more optimistic speculation as to how common Earth-like planets are. The Rare Earth hypothesis goes to hell.

[Source: Cosmos Magazine]
[Note: In the original article they referred to 80 million light years as ~757 trillion kilometres. It should instead be 80 light years.]


This is How Fake Stories Are Made?

Do you know how most news stories are made? I’m not so sure, but your answer would most likely be that authors do careful research, sifting through research papers with a keen, observant mind. It’s frustrating for me, however, that some supposedly credible news sources like space.com have gone astray; as for the Daily Mail, that’s nothing new. They already have a history of publishing misinformation and duds.

If you ever wish to read this story by Denise Chow of space.com, I warn you to be very careful. They have simply republished the misleading news from the Daily Mail, compounding the mystery and misinformation. I’ve already noted that in my prior post. io9 is even worse. See this article:

Officially known as Gliese 581g, we’ve dubbed the first colony on this newly-discovered planet Gloaming, a word that means “twilight.” Because the planet is tidally locked to its star, only one side sees sunlight while the other is in constant darkness. The sunny side would be incredibly hot, while the dark side would be frozen – but astronomers estimate temperatures would be cold but livable at the border between. Colonies would be built in the gloaming, where light and dark meet. The planet is also in orbit around a red dwarf star, whose light would be redder and much cooler than light from our yellow sun. Colonists living in Gloaming would be warmed by a sun that appeared much bigger in the sky than our own.
UPDATE ABOUT THE NAME GLOAMING:
I just spoke by phone with Steve Vogt, the astrophysicist at UC Santa Cruz who led the team that discovered this planet. Though he was fine with us calling the planet Gloaming, he said he preferred the name Zarmina (he added that the planet was “too pretty” to be called Gliese 581g). So I’ve decided that our future planetary colony should be called Gloaming, but in deference to his wishes the planet is going to be Zarmina from here on out.

Well, what information do you get from this article? Is there anything in it you didn’t already know? It doesn’t seem so to me, at the least. Just follow the steps below and you have a great story for curious readers:

1. Select an impressive title.

2. Introduce your article’s title in 50 words. Put in some tidbits that make it interesting, and beat around the bush.

This is exactly how most of the fictional stories are made.

ARTEMIS: A New Hope

In August 1960, NASA launched its first communications satellite, Echo 1. Fifty years later, NASA has achieved another first by placing the ARTEMIS-P1 spacecraft into a unique orbit behind the moon, but not actually orbiting the moon itself. This type of orbit, called an Earth-Moon libration orbit, relies on a precise balancing of the Sun, Earth, and Moon gravity so that a spacecraft can orbit about a virtual location rather than about a planet or moon. The diagrams below show the full ARTEMIS-P1 orbit as it flies in proximity to the moon.

Illustration of ARTEMIS-P1 libration orbits. Credit: NASA/Goddard

ARTEMIS-P1 is the first spacecraft to navigate to and perform stationkeeping operations around the Earth-Moon L1 and L2 Lagrangian points. There are five Lagrangian points associated with the Earth-Moon system. The two points nearest the moon are of great interest for lunar exploration. These points are called L1 (located between the Earth and Moon) and L2 (located on the far side of the Moon from Earth), each about 61,300 km (38,100 miles) above the lunar surface. It takes about 14 to 15 days to complete one revolution about either the L1 or L2 point. These distinctive kidney-shaped orbits are dynamically unstable and require weekly monitoring from ground personnel. Orbit corrections to maintain stability are regularly performed using onboard thrusters.
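
As a quick sanity check on that ~61,300 km figure (a back-of-envelope sketch of my own, not taken from the NASA material), the distance of the collinear Earth-Moon L1 and L2 points from the Moon can be approximated with the Hill-sphere formula r ≈ d·(m/3M)^(1/3):

# Approximate distance of the Earth-Moon L1/L2 points from the Moon,
# using the Hill-sphere (restricted three-body) approximation.
M_EARTH = 5.972e24         # kg
M_MOON = 7.342e22          # kg
EARTH_MOON_DIST = 384_400  # km, mean Earth-Moon distance

r_l1_l2 = EARTH_MOON_DIST * (M_MOON / (3 * M_EARTH)) ** (1.0 / 3.0)
print(f"approximate L1/L2 distance from the Moon: {r_l1_l2:,.0f} km")

This comes out near 61,500 km, within a few hundred kilometres of the figure quoted above; the true L1 and L2 distances differ slightly from each other and from this first-order estimate.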

After the ARTEMIS-P1 spacecraft has completed its first four revolutions in the L2 orbit, the ARTEMIS-P2 spacecraft will enter the L1 orbit. The two sister spacecraft will take magnetospheric observations from opposite sides of the moon for three months, then ARTEMIS-P1 will move to the L1 side, where they will both remain in orbit for an additional three months. Flying the two spacecraft on opposite sides, then the same side, of the moon provides for collection of new science data in the Sun-Earth-Moon environment. ARTEMIS will use simultaneous measurements of particles and electric and magnetic fields from two locations to provide the first three-dimensional perspective of how energetic particle acceleration occurs near the Moon’s orbit, in the distant magnetosphere, and in the solar wind. ARTEMIS will also collect unprecedented observations of the space environment behind the dark side of the Moon – the greatest known vacuum in the solar system, carved out by the solar wind. In late March 2011, both spacecraft will be maneuvered into elliptical lunar orbits where they will continue to observe magnetospheric dynamics, solar wind and the space environment over the course of several years.

Illustration of ARTEMIS-P1 libration orbits, side or ecliptic view. Credit: NASA/Goddard

ARTEMIS stands for “Acceleration, Reconnection, Turbulence and Electrodynamics of the Moon’s Interaction with the Sun”. The ARTEMIS mission uses two of the five in-orbit spacecraft from another NASA Heliophysics constellation of satellites (THEMIS) that were launched in 2007 and successfully completed their mission earlier this year. The ARTEMIS mission allowed NASA to repurpose two in-orbit spacecraft, extending their useful science mission and saving the tens of millions of taxpayer dollars it would have cost to build and launch new spacecraft. Other benefits of this first-ever libration orbit mission include the investigation of lunar regions that could provide a staging location for the assembly of telescopes, for human exploration of planets and asteroids, or even for a communication relay serving a future lunar outpost. The navigation and control of the spacecraft will also provide NASA engineers with important information on propellant usage, requirements on ground station resources, and the sensitivity of controlling these unique orbits.

[Source: NASA/Goddard]

Chronology Protection Conjecture: Where are They?

“Where are they?” It becomes necessary to ponder time travellers if you are going to discuss time travel. Well, if time travel is possible, where are the time travellers from the future? What if a time machine were handed over to you? Where in the time dimension would you like to go first? Of course, your answer would be ‘into the past’!! As you know, quantum physics allows time travel under the right circumstances. [See: Time Travel]

I have already reviewed a lot of research papers on time travel. Consider: if our civilization survives for the next six centuries and beyond, we would have reached the post-singularity stage and be tending towards a Type II civilization. Since every time-travelling trick requires highly risky and tricky technologies (traversable wormholes, Tipler cylinders, closed time-like curves, a macroscopic Casimir effect filled with high negative energy densities), it is fairly reasonable to assume that building a time machine is just an engineering problem. I bet that if we ensure our survival for the next ten centuries, we will have all the technologies that are viewed as challenges today. So I’ll assume that our future descendants have technology that allows them to travel back and forth in time. Now, an obvious question knocks at our brains: “Where are they?”
Well, an explanation can be offered in three ways:
1. Chronology Protection Conjecture (CPC): This was first proposed by Prof. Hawking in 1992. The CPC is essentially a conjectured mechanism that prevents macroscopic time travel. The idea of chronology protection gained considerable currency during the 1990s, when it became clear that traversable wormholes, which are not too objectionable in their own right, seem almost generically to lead to the creation of time machines. Why is the CPC a key issue? In Newtonian physics, and even in special relativity or flat-space quantum field theory, notions of chronology and causality are so fundamental that they are simply built into the theory ab initio. Violation of normal chronology (for instance, an effect preceding its cause) is so objectionable an occurrence that any such theory would immediately be rejected as unphysical. Unfortunately, in general relativity one can’t simply assert that chronology is preserved and causality respected without doing considerable additional work. The essence of the problem lies in the fact that the Einstein equations of general relativity are local equations, relating some aspects of the spacetime curvature at a point to the presence of stress-energy at that point. One also has local chronology protection, inherited from the fact that spacetime is locally Minkowskian (the Einstein Equivalence Principle), so in the small, general relativity respects all of the causality constraints of special relativity. There is a quantum issue at the Cauchy horizon which seems to prevent the formation of closed time-like curves, though one might hope to evade it with the Casimir effect. One side of the horizon contains closed space-like geodesics and the other side contains closed time-like geodesics. Waves travelling in Misner space are boosted each time they pass through the identification world line, and as this happens infinitely many times while approaching the horizon, the stress-energy tensor diverges there. Presumably, this prevents spacetime from developing the closed time-like curves that would otherwise be feasible. Thus, chronology is protected.
2. Multiverse Theory: This could also be proposed as a possible way to resolve the paradox. There are infinite universes covering every single possibility. So it is possible that time travellers from the future are indeed lurking in the past, enjoying a dinosaur ride, but in a different parallel universe. Thus the paradox is resolved.

3. No time machine can be engineered: This is probably the simplest solution to the paradox. The irony of time travel is that it always involves something far beyond our technology, or even beyond our ideas. The general requirements for a time machine (Tipler cylinders, black holes, warp drives, wormholes, Kerr-Newman black holes, BPS black holes and so on) don’t seem feasible by any means of technology. Perhaps that is why there are no time travellers intruding into the past to grab better resources and appall Homo sapiens.

Understanding the New View of Tectonic Plates

How Earth’s tectonic plates work has always been a mystery, but now that mystery seems close to being resolved. Scientists at Caltech have developed new computer algorithms that for the first time allow for the simultaneous modeling of the earth’s mantle flow, large-scale tectonic plate motions, and the behavior of individual fault zones, producing an unprecedented view of plate tectonics and the forces that drive it. A paper describing the whole-earth model and its underlying algorithms was published in the August 27 issue of the journal Science and was also featured on the cover. The work illustrates the interplay between making important advances in science and pushing the envelope of computational science. To create the new model, computational scientists at the University of Texas at Austin’s Institute for Computational Engineering and Sciences (ICES) pushed the envelope of a computational technique known as Adaptive Mesh Refinement (AMR).

Partial differential equations such as those describing mantle flow are solved by subdividing the region of interest (such as the mantle) into a computational grid. Ordinarily, the resolution is kept the same throughout the grid. However, many problems feature small-scale dynamics that are found only in limited regions. AMR methods adaptively create finer resolution only where it’s needed. This leads to huge reductions in the number of grid points, making possible simulations that were previously out of reach. The complexity of managing adaptivity among thousands of processors, however, has meant that current AMR algorithms have not scaled well on modern petascale supercomputers. Petascale computers are capable of one million billion operations per second. To overcome this long-standing problem, the group developed new algorithms that allow for adaptivity in a way that scales to the hundreds of thousands of processor cores of the largest supercomputers available today.
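
To give a feel for the idea behind AMR (a minimal 1D toy sketch of my own, nothing like the production algorithms described here), cells are recursively split only where a refinement criterion, in this case a sharp variation in some field, demands it:

def refine(x_left, x_right, f, max_depth=8, threshold=0.1):
    """Recursively split [x_left, x_right] wherever the field f varies sharply,
    returning the sorted list of cell edges of the adapted 1D mesh."""
    mid = 0.5 * (x_left + x_right)
    samples = (f(x_left), f(mid), f(x_right))
    if max_depth > 0 and max(samples) - min(samples) > threshold:
        left = refine(x_left, mid, f, max_depth - 1, threshold)
        right = refine(mid, x_right, f, max_depth - 1, threshold)
        return left[:-1] + right          # drop the duplicated shared edge
    return [x_left, x_right]

# Toy "viscosity" profile with a narrow, plate-boundary-like spike near x = 0.5
def viscosity(x):
    return 100.0 if abs(x - 0.5) < 0.01 else 1.0

edges = refine(0.0, 1.0, viscosity)
widths = [b - a for a, b in zip(edges, edges[1:])]
print(f"{len(widths)} cells, smallest = {min(widths):.4f}, largest = {max(widths):.4f}")

The mesh stays coarse far from the spike and refines only around x = 0.5, which is the same basic trick the real model uses to resolve roughly 1 km plate boundaries inside a whole-earth grid; the hard part the researchers solved is doing this across hundreds of thousands of processor cores.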

[Image details: Tectonic plate motion (arrows) and viscosity arising from the global mantle flow simulation. Plate boundaries, which can be seen as narrow red lines, are resolved using an adaptively refined mesh with 1 km local resolution. Shown are the Pacific and Australian tectonic plates and the New Hebrides and Tonga microplates.]

With the new algorithms, the scientists were able to simulate global mantle flow and how it manifests as plate tectonics and the motion of individual faults. The AMR algorithms reduced the size of the simulations by a factor of 5,000, permitting them to fit on fewer than 10,000 processors and run overnight on the National Science Foundation-funded Ranger supercomputer at the Texas Advanced Computing Center. A key to the model was the incorporation of data on a multitude of scales. Many natural processes display a multitude of phenomena on a wide range of scales, from small to large. For example, at the largest scale—that of the whole earth—the movement of the surface tectonic plates is a manifestation of a giant heat engine, driven by the convection of the mantle below. The boundaries between the plates, however, are composed of many hundreds to thousands of individual faults, which together constitute active fault zones. Gurnis said:

The individual fault zones play a critical role in how the whole planet works and if you can’t simulate the fault zones, you can’t simulate plate movement—and, in turn, you can’t simulate the dynamics of the whole planet.

In the new model, the researchers were able to resolve the largest fault zones, creating a mesh with a resolution of about one kilometer near the plate boundaries. Included in the simulation were seismological data as well as data pertaining to the temperature of the rocks, their density, and their viscosity—or how strong or weak the rocks are, which affects how easily they deform. That deformation is nonlinear—with simple changes producing unexpected and complex effects.
Normally, when you hit a baseball with a bat, the properties of the bat don’t change—it won’t turn to Silly Putty. In the earth, the properties do change, which creates an exciting computational problem. If the system is too nonlinear, the earth becomes too mushy; if it’s not nonlinear enough, plates won’t move. We need to hit the ‘sweet spot’.

[Image details: Cross-section showing the adaptively refined mesh, with a finest resolution of about 1 km, in the region from the New Hebrides to Tonga in the SW Pacific. The refinement occurs both around plate boundaries and dynamically in response to the nonlinear rheology.]
One surprising result from the model relates to the energy released from plates in earthquake zones: much of the energy dissipation occurs in the earth’s deep interior, something never seen when looking at smaller scales.

Poor Hawking, You Are Still Wrong!!

Hawking is getting weirder and weirder. This time the genius weirdo is explaining the origin of the universe with his physical laws alone. In his latest book, The Grand Design, an extract of which is published in Eureka magazine in The Times, Hawking said: “Because there is a law such as gravity, the Universe can and will create itself from nothing. Spontaneous creation is the reason there is something rather than nothing, why the Universe exists, why we exist.” He added: “It is not necessary to invoke God to light the blue touch paper and set the Universe going.”

Okay, admitted! Suppose Hawking is right; then he needs to explain everything around us, and why it exists at all. Hawking has already kicked up a fuss with controversial statements.

The Big Bang is said to have started from an explosion and symmetry breaking in an ultimate supersymmetric singularity, and it seems fairly likely that there was a Big Bang. The obvious question that could be asked to challenge or define the boundary between physics and metaphysics is: what came before the Big Bang? Physicists define the boundaries of physics by trying to describe them theoretically and then testing that description against observation. Our observed expanding Universe is very well described by flat space, with the critical density supplied mainly by dark matter and a cosmological constant, and should expand forever. If we follow this model backwards in time to when the Universe was very hot and dense, and dominated by radiation, then we have to understand the particle physics that happens at such high energy densities. The experimental understanding of particle physics starts to peter out beyond the energy scale of electroweak unification, and theoretical physicists have to reach for models of particle physics beyond the Standard Model: Grand Unified Theories, supersymmetry, string theory and quantum cosmology.

Theories can explain the characteristics of the universe, but not its ultimate, fundamental origin. Even if we discovered the grand unified theory, or say a theory of everything, a fundamental question would still go unexplained, perplexing our 1600 cc minds for infinite light years along the time dimension: where did such a singularity come from? Who created spacetime itself? What is upholding the whole quantum chunk of spacetime? Is it really within our scope to know how this universe was formed, what its laws are, and how to create extra dimensions of our own, or even a universe?

Consider a scenario to grasp this controversial case with better clarity. What if we created a five-dimensional universe of our own, programmed its laws, and intentionally created an ultimate singularity made of some 10^78 compactified electrons, protons and neutrons? Assume we have somehow completed this task by hook or by crook. Since we already have a TOE (Theory of Everything), we would want to set off the explosion with our own technology, inasmuch as it would be stupid to wait billions of years to see a big bang happen (or we might accelerate the rate of change of time by vast amounts). Now consider the evolution of this universe in the same way as ours has already happened (it would be slightly different, since this time we are dealing with a five-dimensional universe, not a classical four-dimensional one). Presume that intelligent beings evolve there with the passage of time, and that these five-dimensional intellectual entities are smart enough to comprehend their five-dimensional universe. Now suppose they have discovered a quantum theory of the origin of their universe and the physical laws of extra dimensions and parallel universes.
What if another, five-dimensional Hawking (not our four-dimensional Hawking) claimed that we don’t need a god, since we could explain the origin of our five-dimensional universe with the help of supergravity? Is he correct? Certainly not!!

That something can be explained without invoking anything else doesn’t mean the something else doesn’t exist. This is where Hawking still stands on a totally vague, satire-leaning argument. Hey Hawking, stop these unnecessary controversies!!

Short Article: Laser Weapons Won’t be Good for Space War

There is a common technical mistake made whenever we set out to imagine space war. In various games like Star Cruisers, it is vigorously suggested that to knock your opponent down, all you have to do is beam your laser weapon at the metallic spaceship and it’s all over. Is this technically correct? With all this frightfulness flying at your ship, you’d want some kind of defense besides just hoping they’ll miss. As mentioned before, advances in the effectiveness of weapon lethality and defensive protection are mainly focused on the targeting problem. That is, the weapons are generally already powerful enough for a one-hit kill, so the room for improvement lies in increasing the probability that the weapon actually hits the target, while the room for improvement on the defensive side is to decrease the probability of a hit. Weapons can be improved in two ways: increase the precision of each shot (precision of fire), or keep the same precision but increase the number of shots fired (volume of fire). Precision of fire is governed by
[a] the location of the target when the weapons fire arrives,
[b] the flight path of the weapons fire, given the characteristics of the shot and the environment through which the shot passes, and
[c] the weapon’s aiming precision.

Volume of fire is governed by

[d] the weapon’s rate of fire and
[e] the lethality of a given shot.
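
To make the precision-versus-volume trade-off concrete, here is a toy model (my own illustration; the single-shot hit probabilities are invented numbers) that treats shots as independent and assumes, as above, that one hit is a kill:

def kill_probability(p_hit_single, shots):
    """Chance that at least one of `shots` independent shots connects."""
    return 1.0 - (1.0 - p_hit_single) ** shots

# Precision of fire: one very accurate shot
print(f"1 shot at 60% accuracy : {kill_probability(0.60, 1):.2%}")
# Volume of fire: many sloppy shots
print(f"20 shots at 5% accuracy: {kill_probability(0.05, 20):.2%}")
# A defense that halves single-shot accuracy (evasion, jamming, reduced signature)
print(f"20 shots at 2.5%       : {kill_probability(0.025, 20):.2%}")

Halving the single-shot accuracy here drops the chance of a kill from about 64% to about 40%, not to half, which is why volume of fire partly compensates for degraded targeting.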

A defense can interfere with [a], the location of the target, by evasive maneuvers. There isn’t really a way to interfere with [b], the characteristics of a shot, short of inserting a saboteur into the crew of the firing ship; a defense can, however, interfere with the environment through which the shot passes by such things as jamming the weapon’s homing frequencies or clouds of anti-laser sand (which may work in the Traveller universe, but not in reality). There isn’t really a way to directly interfere with [c], the weapon’s aiming precision (again, short of a saboteur), though one can do so indirectly by decreasing the target’s signature, increasing the range, or jamming the firing ship’s targeting sensors to degrade their targeting solution. Finally, while one cannot do much about [d], the weapon’s rate of fire, [e], the lethality of a given shot, can be effectively reduced by rendering harmless the shots that actually hit. This is done by armor, point defense, and science-fictional force fields.

But are our weapons really that good? Generally, laser weapons or beam weapons like high-energy electron beams are suggested as the best candidates for space war, while kinetic weapons are not. Why? Consider a spaceship with a rest mass of, say, 20 tons, moving at a speed of twenty miles per second. Now a missile with a mass of two or three tons is fired from it at, say, thirty miles per second. Apply the law of conservation of momentum and tell me by what amount your spaceship gets deflected (ignore special relativity, since the velocity is negligible compared to c). It is obvious that the ship would be knocked off balance and wouldn’t be able to fire its weapons continuously.
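
Running those numbers (a minimal sketch using one reading of the figures above: the missile’s 30 miles/s measured in the ship’s initial rest frame, and the 20 tons taken to include the missile) shows how large the recoil is:

# Recoil of a ship firing a heavy kinetic round, by conservation of momentum.
# Work in the ship's initial rest frame; units are tons and miles/second.
SHIP_MASS = 20.0       # tons, before firing (including the missile)
MISSILE_MASS = 2.5     # tons (the "two or three tons" above)
MISSILE_SPEED = 30.0   # miles/s, relative to the ship's initial rest frame

remaining_mass = SHIP_MASS - MISSILE_MASS
# Total momentum stays zero in this frame, so the ship recoils backwards:
recoil_speed = MISSILE_MASS * MISSILE_SPEED / remaining_mass
print(f"recoil: ~{recoil_speed:.1f} miles/s per shot")

A kick of over four miles per second per shot would indeed wreck any firing solution, which is the point being made here against kinetic weapons.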

Like a beam of high-velocity electrons, a laser beam is also capable of producing a very high power density, which is perhaps why science fiction finds it so appealing, especially for space war. I also find it excellent compared with kinetic weapons, since it is precise and accurate in targeting the opponent and offers a one-hit kill: the beam travels at c, which is enormous compared to a kinetic weapon’s speed. It also avoids the recoil effects that were so large in the case of kinetic weapons. How effective laser weapons are would depend on the following factors:
1. Interaction of the laser beam with the spaceship’s material (here I’ll assume it to be metal in order to get better stealth)
2. Heat conduction and temperature rise
3. Melting, vaporization, and ablation.

The destructive power of a laser weapon depends on the thermo-optic interaction between the spaceship’s outer metal (or other material) and the beam. So it’s obvious that the ship’s surface should reflect back as much as possible for better stealth, or use a ceramic material with very high temperature resistance. So what if the opponent’s ship has a relatively thick layer of ceramics (as most space shuttles do)? The absorbed light propagates into the medium and its energy is gradually transferred to the lattice atoms in the form of heat. It follows the Beer-Lambert law here, which is (intensity at depth)/(initial intensity) = exp(-(absorption coefficient) × depth).
It is pretty clear from the above law that most of the laser’s power would be consumed just vaporizing a very thin outer layer (around 0.01 micrometres), which won’t do much against the crew. On top of that, a laser’s efficiency is very low, and a nearly hundred-percent-reflective armour would be excellent against laser weapons. I’ll make this clearer in upcoming articles.
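
As a rough illustration of that 0.01-micrometre figure (my own sketch; the absorption coefficient below is a typical order-of-magnitude value for metals at optical wavelengths, not a measured one), the Beer-Lambert law gives the depth over which the beam is absorbed:

import math

# Beer-Lambert attenuation: I(z) = I0 * exp(-alpha * z)
ALPHA = 1.0e8          # 1/m, typical order of magnitude for metals at visible wavelengths

def transmitted_fraction(depth_m, alpha=ALPHA):
    """Fraction of the beam intensity remaining at a given depth."""
    return math.exp(-alpha * depth_m)

skin_depth = 1.0 / ALPHA                      # depth where intensity falls to 1/e
print(f"1/e penetration depth: {skin_depth * 1e6:.2f} micrometres")
for depth_nm in (10, 50, 100):
    frac = transmitted_fraction(depth_nm * 1e-9)
    print(f"at {depth_nm:4d} nm: {frac:.1%} of the intensity remains")

Essentially all of the absorbed energy is deposited within the first few hundredths of a micrometre of the surface, which is why the paragraph above argues that the beam mostly ablates a thin outer layer; what the deposited heat then does by conduction and melting is the subject of points 2 and 3.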
