Should We Terraform Mars? A Debate

This is part of a debate organised by NASA, “Science Fiction Meets Science Fact”: what are the real possibilities, as well as the potential ramifications, of transforming Mars? The debaters, left to right: Greg Bear, author of such books as “Moving Mars” and “Darwin’s Radio”; David Grinspoon, planetary scientist at the Southwest Research Institute; James Kasting, geoscientist at Pennsylvania State University; Christopher McKay, planetary scientist at NASA Ames Research Center; Lisa Pratt, biogeochemist at Indiana University; Kim Stanley Robinson, author of the “Mars Trilogy” (“Red Mars,” “Green Mars” and “Blue Mars”); John Rummel, planetary protection officer for NASA; and moderator Donna Shirley, former manager of NASA’s Mars Exploration Program at the Jet Propulsion Laboratory.

Donna Shirley: Greg, what are the ethics of exploring Mars?

Greg Bear: You usually talk about ethics within your own social group. And if you define someone as being outside your social group, they’re also outside your ethical system, and that’s what’s caused so much trauma, as we seem to be unable to recognize people who look an awful lot like us as being human beings.

When we go to Mars, we’re actually dealing with a problem that’s outside the realm of ethics and more in the realm of enlightened self-interest. We have a number of reasons for preserving Mars as it is. If there’s life there, it’s evolved over the last several billion years, it’s got incredible solutions to incredible problems. If we just go there and willy-nilly ramp it up or tamp it down or try to remold it somehow, we’re going to lose that information. So that’s not to our best interest.

We were talking earlier about having a pharmaceutical expedition to Mars, not just that but a chemical expedition to Mars, people coming and looking for solutions to incredible problems that could occur here on Earth and finding them on Mars. That could generate unforeseen income.

If we talk about ethical issues on a larger scale of how are other beings in the universe going to regard how we treat Mars, that’s a question for Arthur C. Clarke to answer, I think. That’s been more his purview: the large, sometimes sympathetic eye staring at us and judging what we do.

We really have to look within our own goals and our own heart here. And that means we have to stick within our social group, which at this point includes the entire planet. If we decide that Mars is, in a sense, a fellow being, that the life on Mars, if we discover them – and I think that we will discover that Mars is alive – is worthy of protection, then we have to deal with our own variations in ethical judgment.

“I’ve heard a lot of people say, ‘Why should we go to Mars, because look at what human beings have done to Earth.'” -David Grinspoon
Image Credit: NASA

The question is, if it’s an economic reality that Mars is extraordinarily valuable, will we do what we did in North America and Africa and South America and just go there and wreak havoc? And we have to control our baser interests, which is, as many of us have found out recently, very hard to do in this country. So we have a lot of problems to deal with here, internal problems. Because not everyone will agree on an ethical decision and that’s the real problem with making ethical decisions.

Donna Shirley: David, you want to comment on the ethics of terraforming Mars?


David Grinspoon: Well, one comment I’ve heard about recently, partly in response to the fact that the president has recently proposed new human missions to Mars – of course, that’s not terraforming, but it is human activities on Mars – and I’ve heard a lot of people say, “Why should we go to Mars, because look at what human beings have done to Earth. Look at how badly we’re screwing it up. Look at the human role on Earth. Why should we take our presence and go screw up other places?”

It’s an interesting question, and it causes me to think about the ethics of the human role elsewhere. What are we doing in the solar system, what should we be doing? But, it’s very hard for me to give up on the idea. Maybe because I read too much science fiction when I was a kid, I do have, I have to admit, this utopian view of a long-term human future in space. I think that if we find life on Mars, the ethical question’s going to be much more complicated.

But in my view, I think we’re going to find that Mars does not have life. We may have fossils there. I think it’s the best place in the solar system to find fossils. Of course, I could be wrong about this and I’d love to be wrong about it, and that’s why we need to explore. If the methane observation is borne out, it would be, to me, the first sign that I really have to rethink this, that maybe there is something living there under the ice.

“If the methane observation is borne out, maybe there is something living there under the ice.” -David Grinspoon
Image Credit: NASA

But let’s assume for a second that Mars really is dead, and that we’ve explored Mars very carefully – this is not a determination we’ll be able to make without a lot more exploration – but assuming it is, then what about this question: should human beings go to Mars? Do we deserve to, given what we’ve done to Earth? To me, the analogy is of a vacant lot versus planting a garden. If Mars is really dead, then to me it’s like a vacant lot, where we have the opportunity to plant a garden. I think, in the long run, that we should.

We’ve heard a lot of different possible motivations – economic motivations, or curiosity – but I think ultimately the motivation should be out of love for life, and wanting there to be more life where there’s only death and desolation. And so I think that ethically, in the long run, if we really learn enough to say that Mars is dead, then the ethical imperative is to spread life and bring a dead world to life.

Donna Shirley: Jim, we can’t prove a negative, so how do we know if there’s life or not, if we keep looking and looking and looking? How long should we look? How would we make that decision?

James Kasting: I think Lisa put us on the right track initially. She’s studying subsurface life on Earth. If there’s life on Mars today, it’s subsurface. I think it’s deep subsurface, a kilometer or two down. So I think we do need humans on Mars, because we need them up there building big drilling rigs to drill down kilometers deep and do the type of exploration that Lisa and her group are doing here on Earth. I think that’s going to take not just decades, but probably a couple of centuries before we can really get a good feel for that.

Lake Vostok.
Image Credit: NASA

Donna Shirley: Well, I know, John, at Lake Vostok, one of the big issues is, if we drill into it, our dirty drilling rigs are going to contaminate whatever’s down there. So how do we drill without worrying about contaminating something if it is there?

John Rummel: Well, you accept a small probability of contamination so that you can allow operations, and you still try to prevent it. I mean, basically what we can do is try to prevent that which we don’t want to have happen. We can’t ever have a guarantee. The easiest way to prevent the contamination of Mars is to stay here in this room. Or someplace close by.

Greg Bear: That’s known as abstinence.

John Rummel: [laughs]. I also want to point out it’s not necessarily the case that the first thing you want to do on Mars, even if there’s no life, is to change it. We don’t know the advantages of the martian environment. It’s a little bit like the people who go to Arizona for their allergies and start planting crabgrass right off. They wonder why they get that. And it may be that Mars as it is has many benefits. I started working here at NASA Ames as a postdoc with Bob McElroy on controlled ecological life-support systems. There’s a lot we can do with martian environments inside before we move out to the environment of Mars and try to mess with it. So I would highly recommend that not only do we do a thorough job with robotic spacecraft on Mars, but we do a thorough job living inside and trying to figure out what kind of a puzzle Mars presents.

The ALH84001 meteorite.
Image Credit: NASA/ Johnson Space Center

Donna Shirley: Stan, you dealt with this issue in your book with the Reds versus the Greens. What are some of the ethics of making decisions about terraforming Mars?

Kim Stanley Robinson: Ah, the Reds versus the Greens. This is a question in environmental ethics that has been completely obscured by this possibility of life on Mars.

After the Viking mission, and for about a decade or so, up to the findings from the ALH meteorite, when suddenly martian bacteria were postulated again, we thought of Mars as being a dead rock. And yet there were still people who were very offended at the idea of us going there and changing it, even though it was nothing but rock. So this was an interesting kind of limit case in environmental ethics, because of this sense of what has standing. People of a certain class had standing, then all people had standing, then the higher mammals had standing – in each case it’s sort of an evolutionary process where, in an ethical sense, more and more parts of life gained standing and needed consideration and ethical treatment from us. They aren’t just there to be used.

When you get to rock, it seemed to me that there would be very few people wanting to preserve it. And yet, when I talked about my project while I was writing it, it was an instinctive thing, that Mars has its own – what environmental ethicists would call – “intrinsic worth,” even as a rock. It’s a pretty interesting position. And I had some sympathy for it, because I like rocky places myself. If somebody proposed irrigating and putting forests in Death Valley, I would think of this as a travesty. I have many favorite rockscapes, and a lot of people do.

So, back and forth between Red and Green, and one of the reasons I think that my book was so long was that it was just possible to imagine both sides of this argument for a very long time. And I never really did reconcile it in my own mind except that it seemed to me that Mars offered the solution itself. If you think of Mars as a dead rock and you think it has intrinsic worth, it should not be changed, then you look at the vertical scale of Mars and you think about terraforming, and there’s a 31-kilometer difference between the highest points on Mars and the lowest. I reckoned about 30 percent of the martian surface would stay well above an atmosphere that people could live in, in the lower elevations. So maybe you could have it both ways. I go back and forth on this teeter-totter. But of course now it’s a kind of an older teeter-totter because we have a different problem now.


Why Colonize Mars?

Our blue planet is suffering from cataclysmic processes, including global warming and pollution. Earth now harbors almost seven billion Homo sapiens, and in the near future the pressure will only grow. Beyond that, there are many other problems that may push us to establish colonies beyond Earth. Here Robert Zubrin, former chairman of the National Space Society, explains some prospects for Mars colonization.

By Robert Zubrin

Among extraterrestrial bodies in our solar system, Mars is singular in that it possesses all the raw materials required to support not only life, but a new branch of human civilization. This uniqueness is illustrated most clearly if we contrast Mars with the Earth’s Moon, the most frequently cited alternative location for extraterrestrial human colonization.

In contrast to the Moon, Mars is rich in carbon, nitrogen, hydrogen and oxygen, all in biologically readily accessible forms such as carbon dioxide gas, nitrogen gas, and water ice and permafrost. Carbon, nitrogen, and hydrogen are only present on the Moon in parts per million quantities, much like gold in seawater. Oxygen is abundant on the Moon, but only in tightly bound oxides such as silicon dioxide (SiO2), iron oxide (Fe2O3), magnesium oxide (MgO), and aluminum oxide (Al2O3), which require very high energy processes to reduce. Current knowledge indicates that if Mars were smooth and all its ice and permafrost melted into liquid water, the entire planet would be covered with an ocean over 100 meters deep. This contrasts strongly with the Moon, which is so dry that if concrete were found there, Lunar colonists would mine it to get the water out. Thus, if plants could be grown in greenhouses on the Moon (an unlikely proposition, as we’ve seen) most of their biomass material would have to be imported.

The Moon is also deficient in about half the metals of interest to industrial society (copper, for example), as well as many other elements of interest such as sulfur and phosphorus. Mars has every required element in abundance. Moreover, on Mars, as on Earth, hydrologic and volcanic processes have occurred that are likely to have consolidated various elements into local concentrations of high-grade mineral ore. Indeed, the geologic history of Mars has been compared to that of Africa, with very optimistic inferences as to its mineral wealth implied as a corollary. In contrast, the Moon has had virtually no history of water or volcanic action, with the result that it is basically composed of trash rocks with very little differentiation into ores that represent useful concentrations of anything interesting.

You can generate power on either the Moon or Mars with solar panels, and here the advantages of the Moon’s clearer skies and closer proximity to the Sun roughly balance the disadvantage of the large energy storage requirements created by the Moon’s 28-day light-dark cycle. But if you wish to manufacture solar panels, so as to create a self-expanding power base, Mars holds an enormous advantage, as only Mars possesses the large supplies of carbon and hydrogen needed to produce the pure silicon required for producing photovoltaic panels and other electronics. In addition, Mars has the potential for wind-generated power while the Moon clearly does not. But both solar and wind offer relatively modest power potential — tens or at most hundreds of kilowatts here or there. To create a vibrant civilization you need a richer power base, and this Mars has in both the short and medium term in the form of its geothermal power resources, which offer potential for large numbers of locally created electricity generating stations in the 10 MW (10,000 kilowatt) class. In the long term, Mars will enjoy a power-rich economy based upon exploitation of its large domestic resources of deuterium fuel for fusion reactors. Deuterium is five times more common on Mars than it is on Earth, and tens of thousands of times more common on Mars than on the Moon.

But the biggest problem with the Moon, as with all other airless planetary bodies and proposed artificial free-space colonies, is that sunlight is not available in a form useful for growing crops. A single acre of plants on Earth requires four megawatts of sunlight power; a square kilometer needs 1,000 MW. The entire world put together does not produce enough electrical power to illuminate the farms of the state of Rhode Island, that agricultural giant. Growing crops with electrically generated light is just economically hopeless. But you can’t use natural sunlight on the Moon or any other airless body in space unless you put walls on the greenhouse thick enough to shield out solar flares, a requirement that enormously increases the expense of creating cropland. Even if you did that, it wouldn’t do you any good on the Moon, because plants won’t grow in a light/dark cycle lasting 28 days.
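
As a sanity check on those sunlight-power figures, here is a quick back-of-the-envelope sketch; the ~1 kW per square metre peak insolation value is my assumption, not a number taken from the text.

```python
# Rough check of the sunlight-power figures quoted above.
# Assumes full midday insolation of ~1 kW per square metre (an assumption,
# not a number given in the text).

ACRE_M2 = 4046.86          # square metres per acre
INSOLATION_W_M2 = 1000.0   # ~1 kW/m^2, peak sunlight at Earth's surface

acre_power_MW = ACRE_M2 * INSOLATION_W_M2 / 1e6
km2_power_MW = 1e6 * INSOLATION_W_M2 / 1e6

print(f"One acre in full sunlight: ~{acre_power_MW:.1f} MW")   # ~4 MW
print(f"One square kilometre:      ~{km2_power_MW:.0f} MW")    # ~1000 MW
```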

But on Mars there is an atmosphere thick enough to protect crops grown on the surface from solar flares. Therefore, thin-walled inflatable plastic greenhouses protected by unpressurized UV-resistant hard-plastic shield domes can be used to rapidly create cropland on the surface. Even without the problems of solar flares and the month-long diurnal cycle, such simple greenhouses would be impractical on the Moon as they would create unbearably high temperatures. On Mars, in contrast, the strong greenhouse effect created by such domes would be precisely what is necessary to produce a temperate climate inside. Such domes up to 50 meters in diameter are light enough to be transported from Earth initially, and later on they can be manufactured on Mars out of indigenous materials. Because all the resources to make plastics exist on Mars, networks of such 50- to 100-meter domes could be rapidly manufactured and deployed, opening up large areas of the surface to both shirtsleeve human habitation and agriculture. That’s just the beginning, because it will eventually be possible for humans to substantially thicken Mars’ atmosphere by forcing the regolith to outgas its contents through a deliberate program of artificially induced global warming. Once that has been accomplished, the habitation domes could be virtually any size, as they would not have to sustain a pressure differential between their interior and exterior. In fact, once that has been done, it will be possible to raise specially bred crops outside the domes.

The point to be made is that unlike colonists on any known extraterrestrial body, Martian colonists will be able to live on the surface, not in tunnels, and move about freely and grow crops in the light of day. Mars is a place where humans can live and multiply to large numbers, supporting themselves with products of every description made out of indigenous materials. Mars is thus a place where an actual civilization, not just a mining or scientific outpost, can be developed. And significantly for interplanetary commerce, Mars and Earth are the only two locations in the solar system where humans will be able to grow crops for export.

Interplanetary Commerce

Mars is the best target for colonization in the solar system because it has by far the greatest potential for self-sufficiency. Nevertheless, even with optimistic extrapolation of robotic manufacturing techniques, Mars will not have the division of labor required to make it fully self-sufficient until its population numbers in the millions. Thus, for decades and perhaps longer, it will be necessary, and forever desirable, for Mars to be able to import specialized manufactured goods from Earth. These goods can be fairly limited in mass, as only small portions (by weight) of even very high-tech goods are actually complex. Nevertheless, these smaller sophisticated items will have to be paid for, and the high costs of Earth-launch and interplanetary transport will greatly increase their price. What can Mars possibly export back to Earth in return?

It is this question that has caused many to incorrectly deem Mars colonization intractable, or at least inferior in prospect to the Moon. For example, much has been made of the fact that the Moon has indigenous supplies of helium-3, an isotope not found on Earth and which could be of considerable value as a fuel for second generation thermonuclear fusion reactors. Mars has no known helium-3 resources. On the other hand, because of its complex geologic history, Mars may have concentrated mineral ores, with much greater concentrations of precious metal ores readily available than is currently the case on Earth — because the terrestrial ores have been heavily scavenged by humans for the past 5,000 years. If concentrated supplies of metals of equal or greater value than silver (such as germanium, hafnium, lanthanum, cerium, rhenium, samarium, gallium, gadolinium, gold, palladium, iridium, rubidium, platinum, rhodium, europium, and a host of others) were available on Mars, they could potentially be transported back to Earth for a substantial profit. Reusable Mars-surface based single-stage-to-orbit vehicles would haul cargoes to Mars orbit for transportation to Earth via either cheap expendable chemical stages manufactured on Mars or reusable cycling solar or magnetic sail-powered interplanetary spacecraft. The existence of such Martian precious metal ores, however, is still hypothetical.

But there is one commercial resource that is known to exist ubiquitously on Mars in large amount — deuterium. Deuterium, the heavy isotope of hydrogen, occurs as 166 out of every million hydrogen atoms on Earth, but comprises 833 out of every million hydrogen atoms on Mars. Deuterium is the key fuel not only for both first and second generation fusion reactors, but it is also an essential material needed by the nuclear power industry today. Even with cheap power, deuterium is very expensive; its current market value on Earth is about $10,000 per kilogram, roughly fifty times as valuable as silver or 70% as valuable as gold. This is in today’s pre-fusion economy. Once fusion reactors go into widespread use deuterium prices will increase. All the in-situ chemical processes required to produce the fuel, oxygen, and plastics necessary to run a Mars settlement require water electrolysis as an intermediate step. As a by product of these operations, millions, perhaps billions, of dollars worth of deuterium will be produced.
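
To get a feel for these numbers, here is a rough sketch of what the quoted abundance and price imply for one tonne of Martian water; treating every hydrogen atom as recoverable is my simplifying assumption.

```python
# Rough value of the deuterium contained in one metric ton of Martian water,
# using the 833 ppm D/H abundance and ~$10,000/kg price quoted above.
# Treating every hydrogen atom as recoverable is a simplifying assumption.

WATER_G = 1e6            # one metric ton of water, in grams
M_WATER = 18.015         # g/mol
M_D = 2.014              # g/mol, deuterium
D_PER_H_MARS = 833e-6    # deuterium atoms per hydrogen atom on Mars
PRICE_PER_KG = 10_000    # USD, the pre-fusion-economy price quoted in the text

mol_H = 2 * WATER_G / M_WATER          # two hydrogen atoms per water molecule
mol_D = mol_H * D_PER_H_MARS
mass_D_kg = mol_D * M_D / 1000

print(f"Deuterium per tonne of water: ~{mass_D_kg*1000:.0f} g")
print(f"Value at current prices:      ~${mass_D_kg * PRICE_PER_KG:,.0f}")
```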

Ideas may be another possible export for Martian colonists. Just as the labor shortage prevalent in colonial and nineteenth century America drove the creation of “Yankee ingenuity’s” flood of inventions, so the conditions of extreme labor shortage combined with a technological culture that shuns impractical legislative constraints against innovation will tend to drive Martian ingenuity to produce wave after wave of invention in energy production, automation and robotics, biotechnology, and other areas. These inventions, licensed on Earth, could finance Mars even as they revolutionize and advance terrestrial living standards as forcefully as nineteenth century American invention changed Europe and ultimately the rest of the world as well.

Inventions produced as a matter of necessity by a practical intellectual culture stressed by frontier conditions can make Mars rich, but invention and direct export to Earth are not the only ways that Martians will be able to make a fortune. The other route is via trade to the asteroid belt, the band of small, mineral-rich bodies lying between the orbits of Mars and Jupiter. There are about 5,000 asteroids known today, of which about 98% are in the “Main Belt” lying between Mars and Jupiter, with an average distance from the Sun of about 2.7 astronomical units, or AU. (The Earth is 1.0 AU from the Sun.) Of the remaining two percent known as the near-Earth asteroids, about 90% orbit closer to Mars than to the Earth. Collectively, these asteroids represent an enormous stockpile of mineral wealth in the form of platinum group and other valuable metals.

Miners operating among the asteroids will be unable to produce their necessary supplies locally. There will thus be a need to export food and other necessary goods from either Earth or Mars to the Main Belt. Mars has an overwhelming positional advantage as a location from which to conduct such trade.

Historical Analogies

The primary analogy I wish to draw is that Mars is to the new age of exploration as North America was to the last. The Earth’s Moon, close to the metropolitan planet but impoverished in resources, compares to Greenland. Other destinations, such as the Main Belt asteroids, may be rich in potential future exports to Earth but lack the preconditions for the creation of a fully developed indigenous society; these compare to the West Indies. Only Mars has the full set of resources required to develop a native civilization, and only Mars is a viable target for true colonization. Like America in its relationship to Britain and the West Indies, Mars has a positional advantage that will allow it to participate in a useful way to support extractive activities on behalf of Earth in the asteroid belt and elsewhere.

But despite the shortsighted calculations of eighteenth-century European statesmen and financiers, the true value of America never was as a logistical support base for West Indies sugar and spice trade, inland fur trade, or as a potential market for manufactured goods. The true value of America was as the future home for a new branch of human civilization, one that as a combined result of its humanistic antecedents and its frontier conditions was able to develop into the most powerful engine for human progress and economic growth the world had ever seen. The wealth of America was in fact that she could support people, and that the right kind of people chose to go to her. People create wealth. People are wealth and power. Every feature of Frontier American life that acted to create a practical can-do culture of innovating people will apply to Mars a hundred-fold.

Mars is a harsher place than any on Earth. But provided one can survive the regimen, it is the toughest schools that are the best. The Martians shall do well.

Key To Space Time Engineering: Huge Magnetic Field Created

Real magnetic fields approaching 100 tesla can be sustained in the laboratory only for fractions of a second, but scientists have now made electrons in graphene behave as if they were in a field of roughly 300 tesla — a pseudo-magnetic field produced by strain rather than by magnets. Whether such effects could ever provide a key to the kind of space-time engineering this blog likes to speculate about is far beyond anything demonstrated so far, but the result does open a new experimental window. Graphene, the extraordinary form of carbon that consists of a single layer of carbon atoms, has produced another in a long list of experimental surprises. In the current issue of the journal Science, a multi-institutional team of researchers headed by Michael Crommie, a faculty senior scientist in the Materials Sciences Division at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory and a professor of physics at the University of California at Berkeley, reports the creation of pseudo-magnetic fields far stronger than the strongest magnetic fields ever sustained in a laboratory – just by putting the right kind of strain onto a patch of graphene.

“We have shown experimentally that when graphene is stretched to form nanobubbles on a platinum substrate, electrons behave as if they were subject to magnetic fields in excess of 300 tesla, even though no magnetic field has actually been applied,” says Crommie. “This is a completely new physical effect that has no counterpart in any other condensed matter system.”

Crommie notes that “for over 100 years people have been sticking materials into magnetic fields to see how the electrons behave, but it’s impossible to sustain tremendously strong magnetic fields in a laboratory setting.” The current record is 85 tesla for a field that lasts only thousandths of a second. When stronger fields are created, the magnets blow themselves apart.

The ability to make electrons behave as if they were in magnetic fields of 300 tesla or more – just by stretching graphene – offers a new window on a source of important applications and fundamental scientific discoveries going back over a century. This is made possible by graphene’s electronic behavior, which is unlike any other material’s.

[Image Details: In this scanning tunneling microscopy image of a graphene nanobubble, the hexagonal two-dimensional graphene crystal is seen distorted and stretched along three main axes. The strain creates pseudo-magnetic fields far stronger than any magnetic field ever produced in the laboratory. ]

A carbon atom has four valence electrons; in graphene (and in graphite, a stack of graphene layers), three electrons bond in a plane with their neighbors to form a strong hexagonal pattern, like chicken-wire. The fourth electron sticks up out of the plane and is free to hop from one atom to the next. These pi-bond electrons act as if they have no mass at all, like photons, and they move at roughly a million meters per second – about one three-hundredth of the speed of light.

The idea that a deformation of graphene might lead to the appearance of a pseudo-magnetic field first arose even before graphene sheets had been isolated, in the context of carbon nanotubes (which are simply rolled-up graphene). In early 2010, theorist Francisco Guinea of the Institute of Materials Science of Madrid and his colleagues developed these ideas and predicted that if graphene could be stretched along its three main crystallographic directions, it would effectively act as though it were placed in a uniform magnetic field. This is because strain changes the bond lengths between atoms and affects the way electrons move between them. The pseudo-magnetic field would reveal itself through its effects on electron orbits.

In classical physics, electrons in a magnetic field travel in circles called cyclotron orbits. These were named following Ernest Lawrence’s invention of the cyclotron, because cyclotrons continuously accelerate charged particles (protons, in Lawrence’s case) in a curving path induced by a strong field.

Viewed quantum mechanically, however, cyclotron orbits become quantized and exhibit discrete energy levels. Called Landau levels, these correspond to energies where constructive interference occurs in an orbiting electron’s quantum wave function. The number of electrons occupying each Landau level depends on the strength of the field – the stronger the field, the more energy spacing between Landau levels, and the denser the electron states become at each level – which is a key feature of the predicted pseudo-magnetic fields in graphene.
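
For a rough sense of scale, the sketch below evaluates the standard Landau-level formula for massless Dirac electrons in graphene, E_n = v_F·sqrt(2eħB·n), at a 300-tesla pseudo-field; the Fermi velocity of about 10^6 m/s is the commonly quoted value and is assumed here rather than taken from the text.

```python
# Landau-level energies for massless Dirac electrons in graphene,
# E_n = v_F * sqrt(2 * e * hbar * B * n), evaluated for a 300 T
# (pseudo-)magnetic field. The Fermi velocity v_F ~ 1e6 m/s is the
# commonly quoted value, assumed here.

import math

HBAR = 1.054571e-34      # J*s
E_CHARGE = 1.602177e-19  # C
V_F = 1.0e6              # m/s, graphene Fermi velocity (assumed)
B = 300.0                # tesla

for n in range(1, 4):
    E_joule = V_F * math.sqrt(2 * E_CHARGE * HBAR * B * n)
    print(f"n = {n}: E_n ~ {E_joule / E_CHARGE * 1000:.0f} meV")

# n = 1 gives roughly 0.6 eV -- spacings large enough to be resolved
# easily in scanning tunneling spectroscopy.
```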

A serendipitous discovery


[Image Details: A patch of graphene at the surface of a platinum substrate exhibits four triangular nanobubbles at its edges and one in the interior. Scanning tunneling spectroscopy taken at intervals across one nanobubble (inset) shows local electron densities clustering in peaks at discrete Landau-level energies. Pseudo-magnetic fields are strongest at regions of greatest curvature.]

Describing their experimental discovery, Crommie says, “We had the benefit of a remarkable stroke of serendipity.”

Crommie’s research group had been using a scanning tunneling microscope to study graphene monolayers grown on a platinum substrate. A scanning tunneling microscope works by using a sharp needle probe that skims along the surface of a material to measure minute changes in electrical current, revealing the density of electron states at each point in the scan while building an image of the surface.

Crommie was meeting with a visiting theorist from Boston University, Antonio Castro Neto, about a completely different topic when a group member came into his office with the latest data. It showed nanobubbles, little pyramid-like protrusions, in a patch of graphene on the platinum surface and associated with the graphene nanobubbles there were distinct peaks in the density of electron states. Crommie says his visitor, Castro Neto, took one look and said, “That looks like the Landau levels predicted for strained graphene.”

Sure enough, close examination of the triangular bubbles revealed that their chicken-wire lattice had been stretched precisely along the three axes needed to induce the strain orientation that Guinea and his coworkers had predicted would give rise to pseudo-magnetic fields. The greater the curvature of the bubbles, the greater the strain, and the greater the strength of the pseudo-magnetic field. The increased density of electron states revealed by scanning tunneling spectroscopy corresponded to Landau levels, in some cases indicating giant pseudo-magnetic fields of 300 tesla or more.

“Getting the right strain resulted from a combination of factors,” Crommie says. “To grow graphene on the platinum we had exposed the platinum to ethylene” – a simple compound of carbon and hydrogen – “and at high temperature the carbon atoms formed a sheet of graphene whose orientation was determined by the platinum’s lattice structure.”

To get the highest resolution from the scanning tunneling microscope, the system was then cooled to a few degrees above absolute zero. Both the graphene and the platinum contracted – but the platinum shrank more, with the result that excess graphene pushed up into bubbles, measuring four to 10 nanometers (billionths of a meter) across and from a third to more than two nanometers high. To confirm that the experimental observations were consistent with theoretical predictions, Castro Neto worked with Guinea to model a nanobubble typical of those found by the Crommie group. The resulting theoretical picture was a near-match to what the experimenters had observed: a strain-induced pseudo-magnetic field some 200 to 400 tesla strong in the regions of greatest strain, for nanobubbles of the correct size.

[Image Details: The colors of a theoretical model of a nanobubble (left) show that the pseudo-magnetic field is greatest where curvature, and thus strain, is greatest. In a graph of experimental observations (right), colors indicate height, not field strength, but measured field effects likewise correspond to regions of greatest strain and closely match the theoretical model.]

“Controlling where electrons live and how they move is an essential feature of all electronic devices,” says Crommie. “New types of control allow us to create new devices, and so our demonstration of strain engineering in graphene provides an entirely new way for mechanically controlling electronic structure in graphene. The effect is so strong that we could do it at room temperature.”

The opportunities for basic science with strain engineering are also huge. For example, in strong pseudo-magnetic fields electrons orbit in tight circles that bump up against one another, potentially leading to novel electron-electron interactions. Says Crommie, “this is the kind of physics that physicists love to explore.”

“Strain-induced pseudo-magnetic fields greater than 300 tesla in graphene nanobubbles,” by Niv Levy, Sarah Burke, Kacey Meaker, Melissa Panlasigui, Alex Zettl, Francisco Guinea, Antonio Castro Neto, and Michael Crommie, appears in the July 30 issue of Science. The work was supported by the Department of Energy’s Office of Science and by the Office of Naval Research. I’ve contacted Crommie for more details of the research and hope to get a response from him soon.

[Source: News Center]

New Silicon Nanowires Could Make Photovoltaic Devices More Efficient

The future energy crisis is not a single problem; several challenges lie ahead that have not yet been solved. This article will not cover all of them, but it suggests one contribution to a solution, based on recent research. Much of our energy problem would ease if we could make solar cells more efficient. What we ultimately need as an energy source is electricity, no matter how we get it – from nuclear plants, hydropower, or solar cells – and solar cells are a potential source of clean, renewable energy. Photovoltaic (PV) devices are excellent candidates for our future energy supply.

Although early photovoltaic (PV) cells and modules were used in space and other off-grid applications where their value is high, currently about 70% of PV is grid- connected which imposes major cost pressures from conventional sources of electricity. Yet the potential benefits of its large-scale use are enormous and PV now appears to be meeting the challenge with annual growth rates above 30% for the past five years.

More than 90% of PV is currently made of Si modules assembled from small 4-12 inch crystalline or multicrystalline wafers which, like most electronics, can be individually tested before assembly into modules. However, the newer thin-film technologies are monolithically integrated devices approximately 1 m² in size which cannot have even occasional shunts or weak diodes without ruining the manufacturing yield. Thus, these devices require the deposition of many thin semiconducting layers on glass, stainless steel or polymer, and all layers must function well a square meter at a time or the device fails. This is the challenge of PV technology: high efficiency, high uniformity, and high yield over large areas, to form devices that can operate through repeated temperature cycles from −40 °C to 100 °C with a provable twenty-year lifetime and a cost of less than a penny per square centimeter.

[Image Details: Typical construction of solar cell]

Solar cells work and they last. The first cell made at Bell Labs in 1954 is still functioning. Solar cells continue to play a major role in the success of space exploration—witness the Mars rovers. Today’s commercial solar panels, whether of crystalline Si, thin amorphous, or polycrystalline films, are typically guaranteed for 20 years—unheard-of reliability for a consumer product. However, PV still accounts for less than 10⁻⁵ of total energy usage worldwide. In the US, electricity produced by PV costs upwards of $0.25/kW-hr, whereas the cost of electricity production by coal is less than $0.04/kW-hr.

It seems fair to ask what limits the performance of solar modules, and whether there is hope of ever reaching cost-competitive generation of PV electricity.

The photogeneration of a tiny amount of current was first observed by Adams and Day in 1877 in selenium. However, it was not until 1954 that Chapin, Fuller and Pearson at Bell Labs obtained significant power generation from a Si cell. Their first cells used a thick lithium-doped n-layer on p-Si, but efficiencies rose well above 5% with a very thin phosphorus-doped n-Si layer at the top.

The traditional Si solar cell is a homojunction device. The sketch in the first image indicates the typical construction of the semiconductor part of a Si cell. It might have a p-type base with an acceptor (typically boron or aluminum) doping level of N_A = 1 × 10¹⁵ cm⁻³ and a diffused n-type window/emitter layer with N_D = 1 × 10²⁰ cm⁻³ (typically phosphorus). The Fermi level of the n-type (p-type) side will be near the conduction (valence) band edge, so that donor-released electrons will diffuse into the p-type side to occupy lower energy states there, until the exposed space charge (ionized donors in the n-type region, and ionized acceptors in the p-type) produces a field large enough to prevent further diffusion. Often a very heavily doped region is used at the back contact to produce a back surface field (BSF) that aids in hole collection and rejects electrons.

In the ideal case of constant doping densities in the two regions, the depletion width, W, is readily calculated from the Poisson equation, and lies mostly in the lightly doped p-type region. The electric field has its maximum at the metallurgical interface between the n- and p-type regions and typically reaches 10⁴ V/cm or more. Such fields are extremely effective at separating the photogenerated electron-hole pairs. Silicon with its indirect gap has relatively weak light absorption, requiring about 10-20 μm of material to absorb most of the above-band-gap, near-infrared and red light. (Direct-gap materials such as GaAs, CdTe, Cu(InGa)Se2 and even a-Si:H need only ~0.5 μm or less for full optical absorption.) The weak absorption in crystalline Si means that a significant fraction of the above-band-gap photons will generate carriers in the neutral region, where the minority carrier lifetime must be very long to allow for long diffusion lengths. By contrast, carrier generation in the direct-gap semiconductors can be entirely in the depletion region, where collection is through field-assisted drift.
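
A minimal sketch of that estimate, using the textbook one-sided abrupt-junction formulas and the doping levels quoted above; the ~0.9 V built-in potential is an assumed typical value, not a number from the text.

```python
# One-sided abrupt-junction estimate of depletion width and peak field for
# the doping levels mentioned above (N_D >> N_A, so the depletion region
# lies almost entirely in the p-type base). The built-in voltage of ~0.9 V
# is an assumed typical value.

import math

Q = 1.602e-19            # C
EPS0 = 8.854e-12         # F/m
EPS_SI = 11.7 * EPS0     # permittivity of silicon
NA = 1e15 * 1e6          # acceptors, converted from cm^-3 to m^-3
V_BI = 0.9               # V, assumed built-in potential

W = math.sqrt(2 * EPS_SI * V_BI / (Q * NA))   # depletion width, metres
E_MAX = Q * NA * W / EPS_SI                   # peak field at the junction, V/m

print(f"Depletion width: ~{W*1e6:.2f} micrometres")
print(f"Peak field:      ~{E_MAX/100:.1e} V/cm")   # ~1e4 V/cm, as quoted
```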

A high quality Si cell might have a hole lifetime in the heavily doped n-type region of τ_p = 1 μs and corresponding diffusion length of L_p = 12 μm, whereas, in the more lightly doped p-type region, the minority carriers might have τ_n = 350 μs and L_n = 1100 μm. Often few carriers are collected from the heavily doped n-type region, so strongly-absorbed blue light does not generate much photocurrent. Usually the n-type emitter layer is therefore kept very thin. The long diffusion length of electrons in the p-type region is a consequence of the long electron lifetime due to low doping and of the higher mobility of electrons compared with holes. This is typical of most semiconductors, so that the most common solar cell configuration is an “n-on-p” with the p-type semiconductor serving as the “absorber.”

[Image Details: Triple junction cell]

Current generated by an ideal, single-junction solar cell is the integral of the solar spectrum above the semiconductor band gap and the voltage is approximately 2/3 of the band gap. Note that any excess photon energy above the band edge will be lost as the kinetic energy of the electron and hole pair relaxes to thermal motion in picoseconds and simply heats the cell. Thus single-junction solar cell efficiency is limited to about 30% for band gaps near 1.5 eV. Si cells have reached 24%.

This limit can be exceeded by multijunction devices, since the high photon energies are absorbed in wider-band-gap component cells. In fact the III-V materials, benefiting like Si from research for electronics, have been very successful at achieving very high efficiency, reaching 35% with the monolithically stacked three-junction, two-terminal structure GaInP/GaAs/Ge. These tandem devices must be critically engineered to have exactly matched current generation in each of the junctions and are therefore sensitive to changes in the solar spectrum during the day. A sketch of a triple-junction, two-terminal amorphous silicon cell is shown in the second image.

Polycrystalline and amorphous thin-film cells use inexpensive glass, metal foil or polymer substrates to reduce cost. The polycrystalline thin film structures utilize direct-gap semiconductors for high absorption while amorphous Si capitalizes on the disorder to enhance absorption and hydrogen to passivate dangling bonds. It is quite amazing that these very defective thin-film materials can still yield high carrier collection efficiencies. Partly this comes from the field-assisted collection and partly from clever passivation of defects and manipulation of grain boundaries. In some cases we are just lucky that nature provides benign or even helpful grain boundaries in materials such as CdTe and Cu(InGa)Se2, although we seem not so fortunate with GaAs grain boundaries. It is now commonly accepted that, not only are grain boundaries effectively passivated during the thin-film growth process or by post-growth treatments, but also that grain boundaries can actually serve as collection channels for carriers. In fact it is not uncommon to find that the polycrystalline thin-film devices outperform their single-crystal counterpart.

In the past two decades there has been remarkable progress in the performance of small, laboratory cells. The common Si device has benefited from light trapping techniques, back-surface fields, and innovative contact designs; III-V multijunctions from high quality epitaxial techniques; a-Si:H from thin, multiple-junction designs that minimize the effects of dangling bonds (the Staebler-Wronski effect); and polycrystalline thin films from innovations in low-cost growth methods and post-growth treatments.

However, these types of cells can’t be used for commercial production of electricity, since such III-V heterostructures are not cost-efficient. As the research paper proposes, silicon nanowires could be a direct replacement for these exotic structures and, using the same quantum-confinement principles, may prove cost-efficient as well. Such nanowires are direct-band-gap materials whose gap can be tuned anywhere between roughly 2 and 5 eV, providing wavelength selectivity between about 200 and 600 nm (E = hc/λ).

The paper considers nanowires oriented along the [100] direction, in a square array, with the lattice relaxed to an equilibrium unit cell having an Si-Si bond length of 2.324 Å and an Si-H bond length of 1.5 Å. Because the nanowires are direct-band-gap semiconductors, they are excellent optical absorbers. All the nanowires examined in the paper show features in the absorption spectrum that correspond to excitonic processes. The lowest excitonic peaks for the nanowires occur at 5.25 eV (232 nm), 3.7 eV (335 nm), and 2.3 eV (539 nm), in increasing order of size. The corresponding optical wavelengths are shown in a table adapted from the paper. Absorption is tunable from the visible region to the near-UV portion of the solar spectrum.
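
Converting those peak energies to wavelengths is a one-line application of E = hc/λ; a small sketch:

```python
# Converting the excitonic peak energies quoted above into optical
# wavelengths with lambda = h*c / E (the familiar ~1240 eV*nm rule).

H_C_EV_NM = 1239.84   # h*c in eV*nm

for energy_eV in (5.25, 3.7, 2.3):
    print(f"{energy_eV:>5.2f} eV  ->  {H_C_EV_NM / energy_eV:.0f} nm")

# ~236 nm, ~335 nm and ~539 nm, spanning the near-UV to visible range,
# consistent with the figures quoted from the paper.
```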


Such silicon nanowires look excellent and could cut solar cell costs significantly. See the research analysis for details.

These laboratory successes are now being translated into success in the manufacturing of large PV panel sizes in large quantities and with high yield. The worldwide PV market has been growing at 30% to 40% annually for the past five years due partly to market incentives in Japan and Germany, and more recently in some U.S. states.

Recent analyses of the energy payback time for solar systems show that today’s systems pay back the energy used in manufacturing in about 3.5 years for silicon and 2.5 years for thin films.

The decline of manufacturing costs nicely follows an 80% experience curve: costs drop 20% for each doubling of cumulative production. The newly updated PV Roadmap (http://www.seia.org) envisions PV electricity costing $0.06/kW-hr by 2015 and half of new U.S. electricity generation to be produced by PV by 2025, if some modest and temporary nationwide incentives are enacted. Given the rate of progress in the laboratory and innovations on the production line, this ambitious goal might just be achievable. PV can then begin to play a significant role in reducing greenhouse gas emissions and in improving energy security. Some analysts see PV as the only energy resource that has the potential to supply enough clean power for a sustainable energy future for the world.
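
A short sketch of what such an experience curve implies; the starting cost and the production multiples are illustrative assumptions, not figures from the roadmap.

```python
# The 80% experience curve described above: cost falls 20% for each
# doubling of cumulative production. Starting cost and production
# multiples are illustrative assumptions.

import math

def cost_after(cumulative_ratio, start_cost=1.0, learning=0.80):
    """Relative cost after cumulative production grows by `cumulative_ratio`."""
    doublings = math.log2(cumulative_ratio)
    return start_cost * learning ** doublings

for ratio in (2, 4, 10, 100):
    print(f"{ratio:>4}x cumulative production -> "
          f"{cost_after(ratio)*100:.0f}% of today's cost")

# 10x cumulative production brings costs to roughly half; 100x to about
# a quarter -- the kind of decline the PV Roadmap projection relies on.
```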

[Ref: APS and Optical Absorption Characteristics of Silicon Nanowires for Photovoltaic Applications by Vidur Prakash]

Extreme Tech: Genetic Engineering and the Gene Programmer [Part I]

As I promised yesterday, I'm going to suggest what may be the greatest technology we could achieve by the end of this century. As the title says, it is related to what you might term genetic engineering. DNA, the carrier of heritable characteristics, is one of the most complex things we know of, and it is programmed as well. Every property or characteristic of a creature arises from a different sequence of genes. For example, if the genes directly responsible for the wings of an albatross (along with the others needed to make them work) were somehow given to you, you could perhaps fly high in the sky without a jet pack or a monoplane, and without carrying a heavy fuel tank.

The genes themselves manage how to handle such pesky details. If a gene were programmed to keep your body growing and let cell multiplication continue without bound, you would become a kind of immortal giant (this is essentially how genes drive the continuous growth of plants). Have you ever seen a rhino? It is protected by skin up to 5 cm thick, and again it is genes that programmed the rhino's body to develop such thick skin.

Why not consider those tiny creepy-crawly creatures? I'm talking about insects and arachnids (spiders and their kin). Relative to their size, I can't think of anything with better capabilities than insects. According to Wikipedia, rhinoceros beetles are among the strongest animals on the planet in relation to their size and can lift up to 850 times their own weight; they are even said to be able to survive nuclear warfare.[1]

In a laboratory experiment, Rob Knell from Queen Mary, University of London and Leigh Simmons from the University of Western Australia found that the strongest Onthophagus taurus could pull 1,141 times its own body weight. That's equivalent to a person pulling close to 180,000 pounds (the same as six full double-decker buses). Isn't it mind-boggling?
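
A quick check of that comparison; the 72 kg person and the ~13.5-tonne double-decker bus are my assumed figures for the arithmetic.

```python
# Scaling the dung beetle's 1,141x body-weight pull to a human, as in the
# comparison above. The 72 kg body mass and ~13.5-tonne double-decker bus
# are assumptions for the check.

PULL_RATIO = 1141
HUMAN_KG = 72
BUS_KG = 13_500

pulled_kg = PULL_RATIO * HUMAN_KG
print(f"Equivalent pull: ~{pulled_kg:,} kg (~{pulled_kg*2.205:,.0f} lb)")
print(f"That is roughly {pulled_kg / BUS_KG:.1f} double-decker buses")
```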

Just take an insect and try to scratch its outer shell: you'll find how hard it is, and its claws are stronger still. I find their claws stronger than copper or aluminium wires of the same dimensions, which may astound you. And I've set aside here the well-known case of spider silk, which is stronger than steel by weight (carbon nanotubes could beat it).


Finally, we can conclude that it is genes that program every creature, whether insect or extremophile virus, and that manage to maximize survival in every sort of environment, no matter how harsh. The upshot is that by manipulating genetic code we could produce things not attainable even with nanotechnology. So what about a gene-programming machine? If we could design one (I'll call it a "Gene Programmer," and creatures produced by it "Bioprograms"), we could create all sorts of bio-robots that seem impossible with today's technology. Just mingle the genes of Onthophagus taurus and Supersaurus (about 140 feet long, 20 m high and 120 tonnes in weight) and what you would get is a giant even more powerful than Godzilla; as far as I can estimate (and it is only an estimate), it would have the power of a hundred Godzillas (worth remembering that the beetle is far smaller than Godzilla). Or, if you are a good programmer, you could do a better job at your own will. This does not seem far-fetched to me, and recent achievements suggest we may do even better in the near future. Such creatures could be used to terraform planets or even to aid space colonization. Look forward to the second part, in which I'll review the implications of the Gene Programmer at different levels.

‘Survivor’ Black Holes May Be Mid-Sized: NASA News

New evidence from NASA’s Chandra X-ray Observatory and ESA’s XMM-Newton strengthens the case that two mid-sized black holes exist close to the center of a nearby starburst galaxy. These “survivor” black holes avoided falling into the center of the galaxy and could be examples of the seeds required for the growth of supermassive black holes in galaxies, including the one in the Milky Way.

For several decades, scientists have had strong evidence for two distinct classes of black hole: the stellar-mass variety with masses about ten times that of the Sun, and the supermassive ones, located at the center of galaxies, that range from hundreds of thousands to billions of solar masses.

But a mystery has remained: what about black holes that are in between? Evidence for these objects has remained controversial, and until now there were no strong claims of more than one such black hole in a single galaxy. Recently, a team of researchers has found signatures in X-ray data of two mid-sized black holes in the starburst galaxy M82 located 12 million light years from Earth.

“This is the first time that good evidence for two mid-sized black holes has been found in one galaxy,” said Hua Feng of the Tsinghua University in China, who led two papers describing the results. “Their location near the center of the galaxy might provide clues about the origin of the Universe’s largest black holes — supermassive black holes found in the centers of most galaxies.” 


Composite image of the nearby starburst galaxy M82. Image credit: X-ray: NASA/ CXC/Tsinghua Univ./H. Feng et al.

One possible mechanism for the formation of supermassive black holes involves a chain reaction of collisions of stars in compact star clusters that results in the buildup of extremely massive stars, which then collapse to form intermediate-mass black holes. The star clusters then sink to the center of the galaxy, where the intermediate-mass black holes merge to form a supermassive black hole.

In this picture, clusters that were not massive enough or close enough to the center of the galaxy to fall in would survive, as would any black holes they contain.

“We can’t say whether this process actually occurred in M82, but we do know that both of these possible mid-sized black holes are located in or near star clusters,” said Phil Kaaret from the University of Iowa, who co-authored both papers. “Also, M82 is the nearest place to us where the conditions are similar to those in the early Universe, with lots of stars forming.”

The evidence for these two “survivor” black holes comes from how their X-ray emission varies over time and analysis of their X-ray brightness and spectra, i.e., the distribution of X-rays with energy.

Chandra and XMM-Newton data show that the X-ray emission for one of these objects changes in a distinctive manner similar to stellar-mass black holes found in the Milky Way. Using this information and theoretical models, the team estimated this black hole’s mass is between 12,000 and 43,000 times the mass of the Sun. This mass is large enough for the black hole to generate copious X-rays by pulling gas directly from its surroundings, rather than from a binary companion, like with stellar-mass black holes.

The black hole is located at a projected distance of 290 light years from the center of M82. The authors estimate that, at this close distance, if the black hole was born at the same time as the galaxy and its mass was more than about 30,000 solar masses it would have been pulled into the center of the galaxy. That is, it may have just escaped falling into the supermassive black hole that is presumably located in the center of M82.

The second object, located 600 light years in projection away from the center of M82, was observed by both Chandra and XMM-Newton. During X-ray outbursts, periodic and random variations normally present in the X-ray emission disappear, a strong indication that a disk of hot gas dominates the X-ray emission. A detailed fit of the X-ray data indicates that the disk extends all the way to the innermost stable orbit around the black hole. Similar behavior has been seen from stellar-mass black holes in our Galaxy, but this is the first likely detection in a candidate intermediate-mass black hole.

The radius of the innermost stable orbit depends only on the mass and spin of the black hole. The best model for the X-ray emission implies a rapidly spinning black hole with mass in the range 200 to 800 times the mass of the Sun. The mass agrees with theoretical estimates for a black hole created in a star cluster by runaway collisions of stars.
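
As a rough illustration of why the innermost stable orbit constrains the mass, here is the non-spinning (Schwarzschild) ISCO radius, r = 6GM/c², evaluated across the quoted mass range; a rapidly spinning black hole has a smaller ISCO, approaching GM/c² for maximal prograde spin, which is why spin matters when fitting the X-ray disk.

```python
# Innermost stable circular orbit (ISCO) for a non-spinning black hole,
# r_isco = 6*G*M/c^2, evaluated across the 200-800 solar-mass range quoted
# above. A rapidly spinning (prograde) black hole has a smaller ISCO.

G = 6.674e-11      # m^3 kg^-1 s^-2
C = 2.998e8        # m/s
M_SUN = 1.989e30   # kg

for m_solar in (200, 500, 800):
    r_isco_km = 6 * G * m_solar * M_SUN / C**2 / 1000
    print(f"M = {m_solar:>3} M_sun: ISCO ~ {r_isco_km:,.0f} km (non-spinning)")
```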

“This result is one of the strongest pieces of evidence to date for the existence of an intermediate-mass black hole,” said Feng. “This looks just like well-studied examples of stellar-mass black holes, except for being more than 20 times as massive.”

Uranium Is Not a Future Energy Source [Part 3]

What will happen when we have no coal, no crude oil and almost no fossil fuel? Would our technological civilization die? Many advocates, including Brian Wang of Next Big Future, suggest uranium as a future energy source. However, I see several problems with that case. I'm not going to propose an alternative energy source right now, as B.W. asks in his article [if nuclear fission is not the energy source of the future then weird science needs to compare and present what the alternative is that he supports], but I will do so later with a full analysis. I described some of the objections in my previous article, which is here. B.W. has suggested that uranium extraction could be economical if we use seawater as a uranium resource, but that has so far proved uneconomical and yields very little production. Another issue is whether such reactors could power the world for a long time, since they are being proposed as a future source. Below is material from a Scientific American report which suggests that uranium is not the future energy source.

Most of the 2.8 trillion kilowatt-hours of electricity generated worldwide from nuclear power every year is produced in light-water reactors (LWRs) using low-enriched uranium (LEU) fuel. About 10 metric tons of natural uranium go into producing a metric ton of LEU, which can then be used to generate about 400 million kilowatt-hours of electricity, so present-day reactors require about 70,000 metric tons of natural uranium a year.

According to the NEA, identified uranium resources total 5.5 million metric tons, and an additional 10.5 million metric tons remain undiscovered—a roughly 230-year supply at today’s consumption rate in total. Further exploration and improvements in extraction technology are likely to at least double this estimate over time.
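
These figures are easy to reproduce; a small sketch using only the numbers quoted above:

```python
# Reproducing the consumption and resource-lifetime figures quoted above.

GLOBAL_NUCLEAR_KWH = 2.8e12        # kWh of nuclear electricity per year
KWH_PER_TON_LEU = 400e6            # kWh generated per metric ton of LEU
NAT_U_PER_TON_LEU = 10             # tons of natural uranium per ton of LEU
IDENTIFIED_T = 5.5e6               # identified resources, metric tons
UNDISCOVERED_T = 10.5e6            # estimated undiscovered resources, tons

leu_per_year = GLOBAL_NUCLEAR_KWH / KWH_PER_TON_LEU        # ~7,000 t LEU
nat_u_per_year = leu_per_year * NAT_U_PER_TON_LEU          # ~70,000 t
years_of_supply = (IDENTIFIED_T + UNDISCOVERED_T) / nat_u_per_year

print(f"Natural uranium needed per year: ~{nat_u_per_year:,.0f} t")
print(f"Supply at today's consumption:   ~{years_of_supply:.0f} years")
```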

Using more enrichment work could reduce the uranium needs of LWRs by as much as 30 percent per metric ton of LEU. And separating plutonium and uranium from spent LEU and using them to make fresh fuel could reduce requirements by another 30 percent. Taking both steps would cut the uranium requirements of an LWR in half.

The report assumes the current rate of energy consumption, while it is obvious that we will require far more energy than estimated here. The future rate of energy consumption will depend considerably on the following basics:

[Image: World population growth curve]

  • Population growth is, of course, a central issue in studying how to meet future energy requirements. Over the seven hundred years I have suggested, how many people will inhabit Earth? It is projected that the total population by the end of this century will be over 11 billion. According to Wikipedia (see the table below):
World population estimates milestones:

| Population (billions) | 1    | 2    | 3    | 4    | 5    | 6    | 7    | 8    | 9    |
| Year                  | 1804 | 1927 | 1960 | 1974 | 1987 | 1999 | 2012 | 2025 | 2040 |
| Years elapsed         | –    | 123  | 33   | 14   | 13   | 12   | 13   | 13   | 15   |

The population of the world reached one billion in 1804, two billion in 1927, three billion in 1960, four billion in 1974, five billion in 1987, and six billion in 1999. It is projected to reach seven billion in 2011 or 2012, eight billion in 2025, and nine billion in 2040 or 2050. Now I ask B.W.: what will the estimates be over the 700 years of future under consideration? (Population growth is a separate problem in its own right, but it significantly affects the energy demand required. Imagine a population of one million; the whole energy problem would be solved.)

In the IEO2009 reference case—which reflects a scenario in which current laws and policies remain unchanged throughout the projection period—world marketed energy consumption is projected to grow by 44 percent over the 2006 to 2030 period, with total energy demand in the non-OECD countries increasing by 73 percent, compared with 15 percent in the OECD countries. Total world energy use rises from 472 quadrillion British thermal units (Btu) in 2006 to 552 quadrillion Btu in 2015 and then to 678 quadrillion Btu in 2030 (Figure 1). The current worldwide economic downturn dampens world demand for energy in the near term, as manufacturing and consumer demand for goods and services slows. In the longer term, with economic recovery anticipated after 2010, most nations return to trend growth in income and energy demand.

The most rapid growth in energy demand from 2006 to 2030 is projected for nations outside the Organization for Economic Cooperation and Development (non-OECD nations). Total non-OECD energy consumption increases by 73 percent in the IEO2009 reference case projection, as compared with a 15-percent increase in energy use among the OECD countries. Strong long-term GDP growth in the emerging economies of the non-OECD countries drives the fast-paced growth in energy demand. In all the non-OECD regions combined, economic activity—measured by GDP in purchasing power parity terms—increases by 4.9 percent per year on average, as compared with an average of 2.2 percent per year for the OECD countries.[ref]
[Figure 1: World Marketed Energy Consumption, 2006-2030 (quadrillion Btu)]
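
For context, the quoted 44 percent growth over 2006-2030 corresponds to a fairly modest average annual rate; a quick sketch:

```python
# The 44% growth in world marketed energy use from 2006 to 2030 quoted in
# the IEO2009 reference case, expressed as an average annual growth rate.

BTU_2006 = 472   # quadrillion Btu
BTU_2030 = 678   # quadrillion Btu
YEARS = 2030 - 2006

total_growth = BTU_2030 / BTU_2006 - 1
annual_rate = (BTU_2030 / BTU_2006) ** (1 / YEARS) - 1

print(f"Total growth 2006-2030: {total_growth:.0%}")     # ~44%
print(f"Average annual growth:  {annual_rate:.1%} per year")
```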

I will publish more in my next posts, and I will also consider other opinions, as B.W. suggested in an email.

To Be Continued…
