Should We Terraform Mars? A Debate

This is part of a debate organized by NASA: “Science Fiction Meets Science Fact. What are the real possibilities, as well as the potential ramifications, of transforming Mars?” Terraform debaters, left to right: Greg Bear, author of such books as “Moving Mars” and “Darwin’s Radio”; David Grinspoon, planetary scientist at the Southwest Research Institute; James Kasting, geoscientist at Pennsylvania State University; Christopher McKay, planetary scientist at NASA Ames Research Center; Lisa Pratt, biogeochemist at Indiana University; Kim Stanley Robinson, author of the “Mars Trilogy” (“Red Mars,” “Green Mars” and “Blue Mars”); John Rummel, planetary protection officer for NASA; moderator Donna Shirley, former manager of NASA’s Mars Exploration Program at the Jet Propulsion Laboratory.

Donna Shirley: Greg, what are the ethics of exploring Mars?

Greg Bear: You usually talk about ethics within your own social group. And if you define someone as being outside your social group, they’re also outside your ethical system, and that’s what’s caused so much trauma, as we seem to be unable to recognize people who look an awful lot like us as being human beings.

When we go to Mars, we’re actually dealing with a problem that’s outside the realm of ethics and more in the realm of enlightened self-interest. We have a number of reasons for preserving Mars as it is. If there’s life there, it’s evolved over the last several billion years, it’s got incredible solutions to incredible problems. If we just go there and willy-nilly ramp it up or tamp it down or try to remold it somehow, we’re going to lose that information. So that’s not to our best interest.

We were talking earlier about having a pharmaceutical expedition to Mars, not just that but a chemical expedition to Mars, people coming and looking for solutions to incredible problems that could occur here on Earth and finding them on Mars. That could generate income unforeseen.

If we talk about ethical issues on a larger scale of how are other beings in the universe going to regard how we treat Mars, that’s a question for Arthur C. Clarke to answer, I think. That’s been more his purview: the large, sometimes sympathetic eye staring at us and judging what we do.

We really have to look within our own goals and our own heart here. And that means we have to stick within our social group, which at this point includes the entire planet. If we decide that Mars is, in a sense, a fellow being, that the life on Mars, if we discover them – and I think that we will discover that Mars is alive – is worthy of protection, then we have to deal with our own variations in ethical judgment.

“I’ve heard a lot of people say, ‘Why should we go to Mars, because look at what human beings have done to Earth.'” -David Grinspoon
Image Credit: NASA

The question is, if it’s an economic reality that Mars is extraordinarily valuable, will we do what we did in North America and Africa and South America and just go there and wreak havoc? And we have to control our baser interests, which is, as many of us have found out recently, very hard to do in this country. So we have a lot of problems to deal with here, internal problems. Because not everyone will agree on an ethical decision and that’s the real problem with making ethical decisions.

Donna Shirley: David, you want to comment on the ethics of terraforming Mars?

David Grinspoon:
Well, one comment I’ve heard about recently, partly in response to the fact that the president has recently proposed new human missions to Mars – of course, that’s not terraforming, but it is human activities on Mars – and I’ve heard a lot of people say, “Why should we go to Mars, because look at what human beings have done to Earth. Look at how badly we’re screwing it up. Look at the human role on Earth. Why should we take our presence and go screw up other places?”

It’s an interesting question, and it causes me to think about the ethics of the human role elsewhere. What are we doing in the solar system, what should we be doing? But, it’s very hard for me to give up on the idea. Maybe because I read too much science fiction when I was a kid, I do have, I have to admit, this utopian view of a long-term human future in space. I think that if we find life on Mars, the ethical question’s going to be much more complicated.

But in my view, I think we’re going to find that Mars does not have life. We may have fossils there. I think it’s the best place in the solar system to find fossils. Of course, I could be wrong about this and I’d love to be wrong about it, and that’s why we need to explore. If the methane observation is borne out, it would be, to me, the first sign that I really have to rethink this, that maybe there is something living there under the ice.

“If the methane observation is borne out, maybe there is something living there under the ice.” -David Grinspoon
Image Credit: NASA

But let’s assume for a second that Mars really is dead, and we’ve explored Mars very carefully – and this is not a determination we’ll be able to make without a lot more exploration – but assuming it was, then what about this question: should human beings go to Mars? Do we deserve to, given what we’ve done to Earth? And to me, the analogy is of a vacant lot versus planting a garden. If Mars is really dead, then to me it’s like a vacant lot, where we have the opportunity to plant a garden. I think, in the long run, that we should.

We’ve heard a lot of different possible motivations – economic motivations, or curiosity – but I think ultimately the motivation should be out of love for life, and wanting there to be more life where there’s only death and desolation. And so I think that ethically, in the long run, if we really learn enough to say that Mars is dead, then the ethical imperative is to spread life and bring a dead world to life.

Donna Shirley: Jim, we can’t prove a negative, so how do we know if there’s life or not if we keep looking and looking and looking? How long should we look? How would we make that decision?

James Kasting: I think Lisa put us on the right track initially. She’s studying subsurface life on Earth. If there’s life on Mars today, it’s subsurface. I think it’s deep subsurface, a kilometer or two down. So I think we do need humans on Mars, because we need them up there building big drilling rigs to drill down kilometers deep and do the type of exploration that Lisa and her group are doing here on Earth. I think that’s going to take not just decades, but probably a couple of centuries before we can really get a good feel for that.

Lake Vostok.
Image Credit: NASA

Donna Shirley: Well, I know, John, at Lake Vostok, one of the big issues is, if we drill into it, our dirty drilling rigs are going to contaminate whatever’s down there. So how do we drill without worrying about contaminating something if it is there?

John Rummel: Well, you accept a small probability of contamination so that you can allow operations, while still trying to prevent it. I mean, basically what we can do is try to prevent that which we don’t want to have happen. We can’t ever have a guarantee. The easiest way to prevent the contamination of Mars is to stay here in this room. Or someplace close by.

Greg Bear: That’s known as abstinence.

John Rummel: [laughs]. I also want to point out it’s not necessarily the case that the first thing you want to do on Mars, even if there’s no life, is to change it. We don’t know the advantages of the martian environment. It’s a little bit like the people who go to Arizona for their allergies and start planting crabgrass right off, and then wonder why their allergies come back. It may be that Mars as it is has many benefits. I started working here at NASA Ames as a postdoc with Bob McElroy on controlled ecological life-support systems. There’s a lot we can do with martian environments inside before we move out to the environment of Mars and try to mess with it. So I would highly recommend that not only do we do a thorough job with robotic spacecraft on Mars, but that we do a thorough job living inside and trying to figure out what kind of a puzzle Mars presents.

The ALH Meteorite.
Image Credit: NASA/ Johnson Space Center

Donna Shirley: Stan, you dealt with this issue in your book with the Reds versus the Greens. What are some of the ethics of making decisions about terraforming Mars?

Kim Stanley Robinson: Ah, the Reds versus the Greens. This is a question in environmental ethics that has been completely obscured by this possibility of life on Mars.

After the Viking mission, and for about a decade or so, up to the findings of the ALH meteorite, when suddenly martian bacteria were postulated again, we thought of Mars as a dead rock. And yet there were still people who were very offended at the idea of us going there and changing it, even though it was nothing but rock. So this was an interesting kind of limit case in environmental ethics, because of this sense of what has standing. People of a certain class had standing, then all people had standing, then the higher mammals had standing – in each case it’s sort of an evolutionary process where, in an ethical sense, more and more parts of life gained standing and needed consideration and ethical treatment from us. They aren’t just there to be used.

When you get to rock, it seemed to me that there would be very few people wanting to preserve it. And yet, when I talked about my project, when I was writing it, it was an instinctive thing: that Mars has its own, what environmental ethicists would call, “intrinsic worth,” even as a rock. It’s a pretty interesting position. And I had some sympathy for it, because I like rocky places myself. If somebody proposed irrigating and planting forests in Death Valley, I would think of this as a travesty. I have many favorite rockscapes, and a lot of people do.

So, back and forth between Red and Green – and one of the reasons I think my book was so long was that it was just possible to imagine both sides of this argument for a very long time. And I never really did reconcile it in my own mind, except that it seemed to me that Mars offered the solution itself. If you think of Mars as a dead rock with intrinsic worth, that it should not be changed, then you look at the vertical scale of Mars and you think about terraforming: there’s a 31-kilometer difference between the highest points on Mars and the lowest. I reckoned about 30 percent of the martian surface would stay well above an atmosphere that people could live in at the lower elevations. So maybe you could have it both ways. I go back and forth on this teeter-totter. But of course now it’s kind of an older teeter-totter, because we have a different problem now.
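Robinson's "have it both ways" intuition can be sketched with a simple exponential-atmosphere model. The ~11 km scale height for Mars is a standard approximate figure, not something stated in the debate, so treat this as an illustrative back-of-the-envelope check rather than a terraforming calculation.

```python
import math

# Illustrative sketch, assuming pressure falls off as exp(-h/H).
SCALE_HEIGHT_KM = 11.1   # approximate Mars atmospheric scale height (assumption)
RELIEF_KM = 31.0         # highest-to-lowest elevation span quoted in the text

# Ratio of air pressure at the lowest terrain vs. the highest summits:
pressure_ratio = math.exp(RELIEF_KM / SCALE_HEIGHT_KM)
print(f"Lowlands see roughly {pressure_ratio:.0f}x the pressure of the summits")
```

With more than an order of magnitude between the two, thickening the atmosphere could leave the high rockscapes close to their present state while the lowlands become livable, which is the compromise Robinson describes.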


Why To Colonize Mars?

Our blue planet is suffering from cataclysmic processes, including global warming and pollution. Earth now harbors almost seven billion Homo sapiens, and in the near future the problem will be even greater. Beyond that, there are many pressures that push us to establish colonies beyond Earth. Here Robert Zubrin, former chairman of the National Space Society, explains some prospects for Mars colonization.

By Robert Zubrin

Among extraterrestrial bodies in our solar system, Mars is singular in that it possesses all the raw materials required to support not only life, but a new branch of human civilization. This uniqueness is illustrated most clearly if we contrast Mars with the Earth’s Moon, the most frequently cited alternative location for extraterrestrial human colonization.

In contrast to the Moon, Mars is rich in carbon, nitrogen, hydrogen and oxygen, all in biologically readily accessible forms such as carbon dioxide gas, nitrogen gas, and water ice and permafrost. Carbon, nitrogen, and hydrogen are only present on the Moon in parts per million quantities, much like gold in seawater. Oxygen is abundant on the Moon, but only in tightly bound oxides such as silicon dioxide (SiO2), iron oxide (Fe2O3), magnesium oxide (MgO), and aluminum oxide (Al2O3), which require very high energy processes to reduce. Current knowledge indicates that if Mars were smooth and all its ice and permafrost melted into liquid water, the entire planet would be covered with an ocean over 100 meters deep. This contrasts strongly with the Moon, which is so dry that if concrete were found there, Lunar colonists would mine it to get the water out. Thus, if plants could be grown in greenhouses on the Moon (an unlikely proposition, as we’ve seen) most of their biomass material would have to be imported.

The Moon is also deficient in about half the metals of interest to industrial society (copper, for example), as well as many other elements of interest such as sulfur and phosphorus. Mars has every required element in abundance. Moreover, on Mars, as on Earth, hydrologic and volcanic processes have occurred that are likely to have consolidated various elements into local concentrations of high-grade mineral ore. Indeed, the geologic history of Mars has been compared to that of Africa, with very optimistic inferences as to its mineral wealth implied as a corollary. In contrast, the Moon has had virtually no history of water or volcanic action, with the result that it is basically composed of trash rocks with very little differentiation into ores that represent useful concentrations of anything interesting.

You can generate power on either the Moon or Mars with solar panels, and here the advantages of the Moon’s clearer skies and closer proximity to the Sun roughly balance the disadvantage of large energy storage requirements created by the Moon’s 28-day light-dark cycle. But if you wish to manufacture solar panels, so as to create a self-expanding power base, Mars holds an enormous advantage, as only Mars possesses the large supplies of carbon and hydrogen needed to produce the pure silicon required for producing photovoltaic panels and other electronics. In addition, Mars has the potential for wind-generated power while the Moon clearly does not. But both solar and wind offer relatively modest power potential — tens or at most hundreds of kilowatts here or there. To create a vibrant civilization you need a richer power base, and this Mars has in both the short and medium term in the form of its geothermal power resources, which offer potential for large numbers of locally created electricity generating stations in the 10 MW (10,000 kilowatt) class. In the long term, Mars will enjoy a power-rich economy based upon exploitation of its large domestic resources of deuterium fuel for fusion reactors. Deuterium is five times more common on Mars than it is on Earth, and tens of thousands of times more common on Mars than on the Moon.

But the biggest problem with the Moon, as with all other airless planetary bodies and proposed artificial free-space colonies, is that sunlight is not available in a form useful for growing crops. A single acre of plants on Earth requires four megawatts of sunlight power; a square kilometer needs 1,000 MW. The entire world put together does not produce enough electrical power to illuminate the farms of the state of Rhode Island, that agricultural giant. Growing crops with electrically generated light is just economically hopeless. But you can’t use natural sunlight on the Moon or any other airless body in space unless you put walls on the greenhouse thick enough to shield out solar flares, a requirement that enormously increases the expense of creating cropland. Even if you did that, it wouldn’t do you any good on the Moon, because plants won’t grow in a light/dark cycle lasting 28 days.
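Zubrin's figures follow directly from the solar flux at Earth's surface. A quick sanity check, assuming a standard peak insolation of roughly 1,000 W per square meter (an approximation not given in the text):

```python
# Sanity check of the crop-illumination figures in the text,
# assuming peak surface sunlight of ~1,000 W/m^2 (standard estimate).
SOLAR_FLUX_W_PER_M2 = 1000.0
ACRE_M2 = 4046.86          # one acre in square meters
KM2_M2 = 1.0e6             # one square kilometer in square meters

acre_mw = SOLAR_FLUX_W_PER_M2 * ACRE_M2 / 1e6   # megawatts falling on one acre
km2_mw = SOLAR_FLUX_W_PER_M2 * KM2_M2 / 1e6     # megawatts per square kilometer

print(f"One acre receives about {acre_mw:.1f} MW of sunlight")
print(f"One square kilometer receives about {km2_mw:.0f} MW")
```

Both numbers land on Zubrin's "four megawatts" per acre and 1,000 MW per square kilometer, which is why replacing free sunlight with electrically generated light is a non-starter.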

But on Mars there is an atmosphere thick enough to protect crops grown on the surface from solar flares. Therefore, thin-walled inflatable plastic greenhouses protected by unpressurized UV-resistant hard-plastic shield domes can be used to rapidly create cropland on the surface. Even without the problems of solar flares and the month-long diurnal cycle, such simple greenhouses would be impractical on the Moon because they would create unbearably high temperatures. On Mars, in contrast, the strong greenhouse effect created by such domes would be precisely what is necessary to produce a temperate climate inside. Such domes up to 50 meters in diameter are light enough to be transported from Earth initially, and later on they can be manufactured on Mars out of indigenous materials. Because all the resources to make plastics exist on Mars, networks of such 50- to 100-meter domes could be rapidly manufactured and deployed, opening up large areas of the surface to both shirtsleeve human habitation and agriculture. That’s just the beginning, because it will eventually be possible for humans to substantially thicken Mars’ atmosphere by forcing the regolith to outgas its contents through a deliberate program of artificially induced global warming. Once that has been accomplished, the habitation domes could be virtually any size, as they would not have to sustain a pressure differential between their interior and exterior. In fact, once that has been done, it will be possible to raise specially bred crops outside the domes.

The point to be made is that unlike colonists on any known extraterrestrial body, Martian colonists will be able to live on the surface, not in tunnels, and move about freely and grow crops in the light of day. Mars is a place where humans can live and multiply to large numbers, supporting themselves with products of every description made out of indigenous materials. Mars is thus a place where an actual civilization, not just a mining or scientific outpost, can be developed. And significantly for interplanetary commerce, Mars and Earth are the only two locations in the solar system where humans will be able to grow crops for export.

Interplanetary Commerce

Mars is the best target for colonization in the solar system because it has by far the greatest potential for self-sufficiency. Nevertheless, even with optimistic extrapolation of robotic manufacturing techniques, Mars will not have the division of labor required to make it fully self-sufficient until its population numbers in the millions. Thus, for decades and perhaps longer, it will be necessary, and forever desirable, for Mars to be able to import specialized manufactured goods from Earth. These goods can be fairly limited in mass, as only small portions (by weight) of even very high-tech goods are actually complex. Nevertheless, these smaller sophisticated items will have to be paid for, and the high costs of Earth-launch and interplanetary transport will greatly increase their price. What can Mars possibly export back to Earth in return?

It is this question that has caused many to incorrectly deem Mars colonization intractable, or at least inferior in prospect to the Moon. For example, much has been made of the fact that the Moon has indigenous supplies of helium-3, an isotope virtually absent on Earth and which could be of considerable value as a fuel for second generation thermonuclear fusion reactors. Mars has no known helium-3 resources. On the other hand, because of its complex geologic history, Mars may have concentrated mineral ores, with much greater concentrations of precious metal ores readily available than is currently the case on Earth — because the terrestrial ores have been heavily scavenged by humans for the past 5,000 years. If concentrated supplies of metals of equal or greater value than silver (such as germanium, hafnium, lanthanum, cerium, rhenium, samarium, gallium, gadolinium, gold, palladium, iridium, rubidium, platinum, rhodium, europium, and a host of others) were available on Mars, they could potentially be transported back to Earth for a substantial profit. Reusable Mars-surface based single-stage-to-orbit vehicles would haul cargoes to Mars orbit for transportation to Earth via either cheap expendable chemical stages manufactured on Mars or reusable cycling solar or magnetic sail-powered interplanetary spacecraft. The existence of such Martian precious metal ores, however, is still hypothetical.

But there is one commercial resource that is known to exist ubiquitously on Mars in large amount — deuterium. Deuterium, the heavy isotope of hydrogen, occurs as 166 out of every million hydrogen atoms on Earth, but comprises 833 out of every million hydrogen atoms on Mars. Deuterium is the key fuel not only for both first and second generation fusion reactors, but it is also an essential material needed by the nuclear power industry today. Even with cheap power, deuterium is very expensive; its current market value on Earth is about $10,000 per kilogram, roughly fifty times as valuable as silver or 70% as valuable as gold. This is in today’s pre-fusion economy. Once fusion reactors go into widespread use, deuterium prices will increase. All the in-situ chemical processes required to produce the fuel, oxygen, and plastics necessary to run a Mars settlement require water electrolysis as an intermediate step. As a by-product of these operations, millions, perhaps billions, of dollars worth of deuterium will be produced.
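The abundance and price figures above imply the "five times more common" claim made earlier in the essay, and they make the by-product economics easy to quantify. A small sketch, using only numbers quoted in the text:

```python
# Deuterium abundances quoted in the text, as atoms of deuterium
# per million hydrogen atoms.
D_PER_MILLION_EARTH = 166.0
D_PER_MILLION_MARS = 833.0

enrichment = D_PER_MILLION_MARS / D_PER_MILLION_EARTH
print(f"Martian hydrogen is ~{enrichment:.1f}x enriched in deuterium")

# At the quoted ~$10,000/kg, every tonne of deuterium separated as a
# by-product of water electrolysis is worth about $10 million.
PRICE_PER_KG = 10_000
print(f"Value per tonne: ${PRICE_PER_KG * 1000:,}")
```

So even modest electrolysis throughput for fuel and plastics production would yield a meaningful export stream as a side effect.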

Ideas may be another possible export for Martian colonists. Just as the labor shortage prevalent in colonial and nineteenth century America drove the creation of “Yankee ingenuity’s” flood of inventions, so the conditions of extreme labor shortage combined with a technological culture that shuns impractical legislative constraints against innovation will tend to drive Martian ingenuity to produce wave after wave of invention in energy production, automation and robotics, biotechnology, and other areas. These inventions, licensed on Earth, could finance Mars even as they revolutionize and advance terrestrial living standards as forcefully as nineteenth century American invention changed Europe and ultimately the rest of the world as well.

Inventions produced as a matter of necessity by a practical intellectual culture stressed by frontier conditions can make Mars rich, but invention and direct export to Earth are not the only ways that Martians will be able to make a fortune. The other route is via trade to the asteroid belt, the band of small, mineral-rich bodies lying between the orbits of Mars and Jupiter. There are about 5,000 asteroids known today, of which about 98% are in the “Main Belt” lying between Mars and Jupiter, with an average distance from the Sun of about 2.7 astronomical units, or AU. (The Earth is 1.0 AU from the Sun.) Of the remaining two percent known as the near-Earth asteroids, about 90% orbit closer to Mars than to the Earth. Collectively, these asteroids represent an enormous stockpile of mineral wealth in the form of platinum group and other valuable metals.

Miners operating among the asteroids will be unable to produce their necessary supplies locally. There will thus be a need to export food and other necessary goods from either Earth or Mars to the Main Belt. Mars has an overwhelming positional advantage as a location from which to conduct such trade.

Historical Analogies

The primary analogy I wish to draw is that Mars is to the new age of exploration as North America was to the last. The Earth’s Moon, close to the metropolitan planet but impoverished in resources, compares to Greenland. Other destinations, such as the Main Belt asteroids, may be rich in potential future exports to Earth but lack the preconditions for the creation of a fully developed indigenous society; these compare to the West Indies. Only Mars has the full set of resources required to develop a native civilization, and only Mars is a viable target for true colonization. Like America in its relationship to Britain and the West Indies, Mars has a positional advantage that will allow it to participate in a useful way to support extractive activities on behalf of Earth in the asteroid belt and elsewhere.

But despite the shortsighted calculations of eighteenth-century European statesmen and financiers, the true value of America never was as a logistical support base for West Indies sugar and spice trade, inland fur trade, or as a potential market for manufactured goods. The true value of America was as the future home for a new branch of human civilization, one that as a combined result of its humanistic antecedents and its frontier conditions was able to develop into the most powerful engine for human progress and economic growth the world had ever seen. The wealth of America was in fact that she could support people, and that the right kind of people chose to go to her. People create wealth. People are wealth and power. Every feature of Frontier American life that acted to create a practical can-do culture of innovating people will apply to Mars a hundred-fold.

Mars is a harsher place than any on Earth. But provided one can survive the regimen, it is the toughest schools that are the best. The Martians shall do well.

Key To Space Time Engineering: Huge Magnetic Field Created

Magnetic fields above 150 tesla have never been sustained in the laboratory, but scientists have now induced effective fields of 300 tesla in graphene. Fields on this scale are not yet something we can harness for space-time engineering, but they may provide a key to it in the future. Graphene, the extraordinary form of carbon that consists of a single layer of carbon atoms, has produced another in a long list of experimental surprises. In the current issue of the journal Science, a multi-institutional team of researchers headed by Michael Crommie, a faculty senior scientist in the Materials Sciences Division at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory and a professor of physics at the University of California at Berkeley, reports the creation of pseudo-magnetic fields far stronger than the strongest magnetic fields ever sustained in a laboratory – just by putting the right kind of strain onto a patch of graphene.

“We have shown experimentally that when graphene is stretched to form nanobubbles on a platinum substrate, electrons behave as if they were subject to magnetic fields in excess of 300 tesla, even though no magnetic field has actually been applied,” says Crommie. “This is a completely new physical effect that has no counterpart in any other condensed matter system.”

Crommie notes that “for over 100 years people have been sticking materials into magnetic fields to see how the electrons behave, but it’s impossible to sustain tremendously strong magnetic fields in a laboratory setting.” The current record is 85 tesla for a field that lasts only thousandths of a second. When stronger fields are created, the magnets blow themselves apart.

The ability to make electrons behave as if they were in magnetic fields of 300 tesla or more – just by stretching graphene – offers a new window on a source of important applications and fundamental scientific discoveries going back over a century. This is made possible by graphene’s electronic behavior, which is unlike any other material’s.

[Image Details: In this scanning tunneling microscopy image of a graphene nanobubble, the hexagonal two-dimensional graphene crystal is seen distorted and stretched along three main axes. The strain creates pseudo-magnetic fields far stronger than any magnetic field ever produced in the laboratory. ]

A carbon atom has four valence electrons; in graphene (and in graphite, a stack of graphene layers), three electrons bond in a plane with their neighbors to form a strong hexagonal pattern, like chicken wire. The fourth electron sticks up out of the plane and is free to hop from one atom to the next. These pi-bond electrons act as if they have no mass at all, like photons, and they can move at almost one percent of the speed of light.

The idea that a deformation of graphene might lead to the appearance of a pseudo-magnetic field first arose even before graphene sheets had been isolated, in the context of carbon nanotubes (which are simply rolled-up graphene). In early 2010, theorist Francisco Guinea of the Institute of Materials Science of Madrid and his colleagues developed these ideas and predicted that if graphene could be stretched along its three main crystallographic directions, it would effectively act as though it were placed in a uniform magnetic field. This is because strain changes the bond lengths between atoms and affects the way electrons move between them. The pseudo-magnetic field would reveal itself through its effects on electron orbits.

In classical physics, electrons in a magnetic field travel in circles called cyclotron orbits. These were named following Ernest Lawrence’s invention of the cyclotron, because cyclotrons continuously accelerate charged particles (protons, in Lawrence’s case) in a curving path induced by a strong field.

Viewed quantum mechanically, however, cyclotron orbits become quantized and exhibit discrete energy levels. Called Landau levels, these correspond to energies where constructive interference occurs in an orbiting electron’s quantum wave function. The number of electrons occupying each Landau level depends on the strength of the field – the stronger the field, the greater the energy spacing between Landau levels, and the denser the electron states become at each level – which is a key feature of the predicted pseudo-magnetic fields in graphene.
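For graphene's massless Dirac electrons, the Landau levels follow the standard textbook relation E_n = sgn(n) · v_F · sqrt(2ħe·B·|n|). This formula and the Fermi velocity value below are standard results from the graphene literature, not numbers taken from the article, so treat the sketch as an order-of-magnitude illustration:

```python
import math

# Landau levels for massless Dirac electrons in graphene:
#   E_n = sign(n) * v_F * sqrt(2 * hbar * e * B * |n|)
# (standard textbook result; values below are assumptions).
HBAR = 1.0546e-34      # reduced Planck constant, J*s
E_CHARGE = 1.602e-19   # elementary charge, C
V_FERMI = 1.0e6        # approximate Fermi velocity in graphene, m/s

def landau_level_ev(n: int, b_tesla: float) -> float:
    """Energy of the n-th Landau level, in electronvolts."""
    sign = 1 if n >= 0 else -1
    e_joules = sign * V_FERMI * math.sqrt(2 * HBAR * E_CHARGE * b_tesla * abs(n))
    return e_joules / E_CHARGE

# At a 300 T pseudo-field the first level sits roughly 0.6 eV above
# the Dirac point -- far larger than room-temperature kT (~0.025 eV).
print(f"E_1 at 300 T: {landau_level_ev(1, 300.0):.2f} eV")
```

The enormous level spacing at 300 tesla is what lets the effect survive thermal smearing, consistent with Crommie's later remark that the strain engineering works at room temperature.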

A serendipitous discovery


[Image Details: A patch of graphene at the surface of a platinum substrate exhibits four triangular nanobubbles at its edges and one in the interior. Scanning tunneling spectroscopy taken at intervals across one nanobubble (inset) shows local electron densities clustering in peaks at discrete Landau-level energies. Pseudo-magnetic fields are strongest at regions of greatest curvature.]

Describing their experimental discovery, Crommie says, “We had the benefit of a remarkable stroke of serendipity.”

Crommie’s research group had been using a scanning tunneling microscope to study graphene monolayers grown on a platinum substrate. A scanning tunneling microscope works by using a sharp needle probe that skims along the surface of a material to measure minute changes in electrical current, revealing the density of electron states at each point in the scan while building an image of the surface.

Crommie was meeting with a visiting theorist from Boston University, Antonio Castro Neto, about a completely different topic when a group member came into his office with the latest data. It showed nanobubbles, little pyramid-like protrusions, in a patch of graphene on the platinum surface and associated with the graphene nanobubbles there were distinct peaks in the density of electron states. Crommie says his visitor, Castro Neto, took one look and said, “That looks like the Landau levels predicted for strained graphene.”

Sure enough, close examination of the triangular bubbles revealed that their chicken-wire lattice had been stretched precisely along the three axes needed to induce the strain orientation that Guinea and his coworkers had predicted would give rise to pseudo-magnetic fields. The greater the curvature of the bubbles, the greater the strain, and the greater the strength of the pseudo-magnetic field. The increased density of electron states revealed by scanning tunneling spectroscopy corresponded to Landau levels, in some cases indicating giant pseudo-magnetic fields of 300 tesla or more.

“Getting the right strain resulted from a combination of factors,” Crommie says. “To grow graphene on the platinum we had exposed the platinum to ethylene” – a simple compound of carbon and hydrogen – “and at high temperature the carbon atoms formed a sheet of graphene whose orientation was determined by the platinum’s lattice structure.”

To get the highest resolution from the scanning tunneling microscope, the system was then cooled to a few degrees above absolute zero. Both the graphene and the platinum contracted – but the platinum shrank more, with the result that excess graphene pushed up into bubbles, measuring four to 10 nanometers (billionths of a meter) across and from a third to more than two nanometers high. To confirm that the experimental observations were consistent with theoretical predictions, Castro Neto worked with Guinea to model a nanobubble typical of those found by the Crommie group. The resulting theoretical picture was a near-match to what the experimenters had observed: a strain-induced pseudo-magnetic field some 200 to 400 tesla strong in the regions of greatest strain, for nanobubbles of the correct size.

[Image Details: The colors of a theoretical model of a nanobubble (left) show that the pseudo-magnetic field is greatest where curvature, and thus strain, is greatest. In a graph of experimental observations (right), colors indicate height, not field strength, but measured field effects likewise correspond to regions of greatest strain and closely match the theoretical model.]

“Controlling where electrons live and how they move is an essential feature of all electronic devices,” says Crommie. “New types of control allow us to create new devices, and so our demonstration of strain engineering in graphene provides an entirely new way for mechanically controlling electronic structure in graphene. The effect is so strong that we could do it at room temperature.”

The opportunities for basic science with strain engineering are also huge. For example, in strong pseudo-magnetic fields electrons orbit in tight circles that bump up against one another, potentially leading to novel electron-electron interactions. Says Crommie, “this is the kind of physics that physicists love to explore.”

“Strain-induced pseudo-magnetic fields greater than 300 tesla in graphene nanobubbles,” by Niv Levy, Sarah Burke, Kacey Meaker, Melissa Panlasigui, Alex Zettl, Francisco Guinea, Antonio Castro Neto, and Michael Crommie, appears in the July 30 issue of Science. The work was supported by the Department of Energy’s Office of Science and by the Office of Naval Research. I’ve contacted Crommie for more details about the research and hope to receive a response from him soon.

[Source: News Center]

New Silicon Nanowires Could Make Photovoltaic Devices More Efficient

The future energy crisis is not a single problem; it bundles several challenges that remain unsolved. This article will not survey all of them, but it does suggest one solution to the future energy crisis based on recent research. Much of our energy problem would be eased if we could make solar cells more efficient. What we ultimately need is electricity, no matter how we get it, whether from nuclear plants, hydropower, or solar cells. Solar cells could address our problems particularly well, since they are a source of clean, renewable energy. Photovoltaic cells (PV devices) are excellent candidates for our future energy supply.

Although early photovoltaic (PV) cells and modules were used in space and other off-grid applications where their value is high, currently about 70% of PV is grid-connected, which imposes major cost pressures from conventional sources of electricity. Yet the potential benefits of its large-scale use are enormous and PV now appears to be meeting the challenge with annual growth rates above 30% for the past five years.

More than 90% of PV is currently made of Si modules assembled from small 4-12 inch crystalline or multicrystalline wafers which, like most electronics, can be individually tested before assembly into modules. However, the newer thin-film technologies are monolithically integrated devices approximately 1 m² in size which cannot have even occasional shunts or weak diodes without ruining the manufacturing yield. Thus, these devices require the deposition of many thin semiconducting layers on glass, stainless steel or polymer, and all layers must function well a square meter at a time or the device fails. This is the challenge of PV technology: high efficiency, high uniformity, and high yield over large areas to form devices that can operate with repeated temperature cycles from −40 °C to 100 °C with a provable twenty-year lifetime and a cost of less than a penny per square centimeter.

[Image Details: Typical construction of solar cell]

Solar cells work and they last. The first cell made at Bell Labs in 1954 is still functioning. Solar cells continue to play a major role in the success of space exploration—witness the Mars rovers. Today’s commercial solar panels, whether of crystalline Si, thin amorphous, or polycrystalline films, are typically guaranteed for 20 years—unheard of reliability for a consumer product. However, PV still accounts for less than 10⁻⁵ of total energy usage world-wide. In the US, electricity produced by PV costs upwards of $0.25/kW-hr whereas the cost of electricity production by coal is less than $0.04/kW-hr.

It seems fair to ask: what limits the performance of solar modules, and is there hope of ever reaching cost-competitive generation of PV electricity?

The photogeneration of a tiny amount of current was first observed by Adams and Day in 1877 in selenium. However, it was not until 1954 that Chapin, Fuller and Pearson at Bell Labs obtained significant power generation from a Si cell. Their first cells used a thick lithium-doped n-layer on p-Si, but efficiencies rose well above 5% with a very thin phosphorous-doped n-Si layer at the top.

The traditional Si solar cell is a homojunction device. The sketch in the first image indicates the typical construction of the semiconductor part of a Si cell. It might have a p-type base with an acceptor (typically boron or aluminum) doping level of NA = 1 × 10¹⁵ cm⁻³ and a diffused n-type window/emitter layer with ND = 1 × 10²⁰ cm⁻³ (typically phosphorus). The Fermi level of the n-type (p-type) side will be near the conduction (valence) band edge so that donor-released electrons will diffuse into the p-type side to occupy lower energy states there, until the exposed space charge (ionized donors in the n-type region, and ionized acceptors in the p-type) produces a field large enough to prevent further diffusion. Often a very heavily doped region is used at the back contact to produce a back surface field (BSF) that aids in hole collection and rejects electrons.

In the ideal case of constant doping densities in the two regions, the depletion width, W, is readily calculated from the Poisson equation, and lies mostly in the lightly doped p-type region. The electric field has its maximum at the metallurgical interface between the n- and p-type regions and typically reaches 10⁴ V/cm or more. Such fields are extremely effective at separating the photogenerated electron-hole pairs. Silicon with its indirect gap has relatively weak light absorption, requiring about 10–20 μm of material to absorb most of the above-band-gap, near-infrared and red light. (Direct-gap materials such as GaAs, CdTe, Cu(InGa)Se2 and even a-Si:H need only ~0.5 μm or less for full optical absorption.) The weak absorption in crystalline Si means that a significant fraction of the above-band-gap photons will generate carriers in the neutral region where the minority carrier lifetime must be very long to allow for long diffusion lengths. By contrast, carrier generation in the direct-gap semiconductors can be entirely in the depletion region where collection is through field-assisted drift.
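As a rough check on these numbers, the one-sided abrupt-junction solution of the Poisson equation gives W = √(2ε·V_bi/(q·NA)) and a peak field E_max = 2·V_bi/W. A sketch using the doping levels quoted earlier (the permittivity and intrinsic carrier density are standard textbook values, not from this article):

```python
import math

Q = 1.602176634e-19                # C
EPS_SI = 11.7 * 8.8541878128e-12   # F/m, silicon permittivity
KT = 0.02585                       # eV at 300 K
NI = 1.0e16                        # m^-3, intrinsic density of Si (~1e10 cm^-3)

def depletion(n_a, n_d):
    """One-sided abrupt junction: built-in voltage, depletion width, peak field."""
    v_bi = KT * math.log(n_a * n_d / NI**2)
    w = math.sqrt(2 * EPS_SI * v_bi / (Q * n_a))  # lies in the lightly doped side
    e_max = 2 * v_bi / w                          # V/m, at the metallurgical junction
    return v_bi, w, e_max

# 1e15 and 1e20 cm^-3 expressed in m^-3:
v_bi, w, e_max = depletion(n_a=1e21, n_d=1e26)
print(f"V_bi = {v_bi:.2f} V, W = {w*1e6:.2f} um, E_max = {e_max/100:.1e} V/cm")
```

The result, a depletion width of about a micron and a peak field just above 10⁴ V/cm, matches the figure quoted in the text.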

A high quality Si cell might have a hole lifetime in the heavily doped n-type region of τp = 1 μs and a corresponding diffusion length of Lp = 12 μm, whereas in the more lightly doped p-type region the minority carriers might have τn = 350 μs and Ln = 1100 μm. Often few carriers are collected from the heavily doped, n-type region, so strongly-absorbed blue light does not generate much photocurrent. Usually the n-type emitter layer is therefore kept very thin. The long diffusion length of electrons in the p-type region is a consequence of the long electron lifetime due to low doping and of the higher mobility of electrons compared with holes. This is typical of most semiconductors, so that the most common solar cell configuration is an “n-on-p” with the p-type semiconductor serving as the “absorber.”
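The quoted diffusion lengths follow from L = √(D·τ) with the Einstein relation D = μ·kT/q. A quick check, assuming typical silicon mobilities (the mobility values are my assumption, not given in the text):

```python
import math

KT_Q = 0.02585  # V at 300 K

def diffusion_length_um(mobility_cm2, lifetime_s):
    """Minority-carrier diffusion length: L = sqrt(D*tau), D = mu*kT/q (Einstein relation)."""
    d = mobility_cm2 * KT_Q                  # diffusivity, cm^2/s
    return math.sqrt(d * lifetime_s) * 1e4   # cm -> um

# Assumed mobilities: ~1350 cm^2/Vs for electrons in lightly doped p-Si,
# ~70 cm^2/Vs for holes in degenerately doped n-Si.
print(f"L_n = {diffusion_length_um(1350, 350e-6):.0f} um")
print(f"L_p = {diffusion_length_um(70, 1e-6):.0f} um")
```

With these assumptions the calculation reproduces the ~1100 μm electron and ~12–13 μm hole diffusion lengths quoted above.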

[Image Details: Triple junction cell]

Current generated by an ideal, single-junction solar cell is the integral of the solar spectrum above the semiconductor band gap and the voltage is approximately 2/3 of the band gap. Note that any excess photon energy above the band edge will be lost as the kinetic energy of the electron and hole pair relaxes to thermal motion in picoseconds and simply heats the cell. Thus single-junction solar cell efficiency is limited to about 30% for band gaps near 1.5 eV. Si cells have reached 24%.
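The ~30% single-junction ceiling can be illustrated with a toy calculation: integrate a 5778 K blackbody solar spectrum, count every photon above the gap as one collected carrier at roughly two-thirds of the band-gap voltage, and discard the excess photon energy as heat. This is only a sketch of the argument, not the full detailed-balance (Shockley-Queisser) treatment:

```python
import math

KT_SUN = 0.498  # eV, kT for a 5778 K blackbody sun (assumed)

def toy_efficiency(eg_ev, steps=20000, e_max=20.0):
    """Toy single-junction limit: one carrier per above-gap photon,
    collected at ~2/3 of the band-gap voltage; excess energy is lost as heat."""
    de = e_max / steps
    power_in = photons_used = 0.0
    for i in range(1, steps):
        e = i * de
        planck = e**3 / (math.exp(e / KT_SUN) - 1.0)  # blackbody energy spectrum
        power_in += planck * de
        if e >= eg_ev:
            photons_used += planck / e * de           # photon flux above the gap
    return (2.0 / 3.0) * eg_ev * photons_used / power_in

for eg in (1.1, 1.5, 2.0):
    print(f"Eg = {eg} eV -> {toy_efficiency(eg):.0%}")
```

Even this crude model peaks in the high-20s of percent for gaps near 1.1-1.5 eV, consistent with the ~30% limit quoted above.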

This limit can be exceeded by multijunction devices since the high photon energies are absorbed in wider-band-gap component cells. In fact the III-V materials, benefiting like Si from electronics research, have been very successful at achieving very high efficiency, reaching 35% with the monolithically stacked three-junction, two-terminal structure, GaInP/GaAs/Ge. These tandem devices must be critically engineered to have exactly matched current generation in each of the junctions and therefore are sensitive to changes in the solar spectrum during the day. A sketch of a triple-junction, two-terminal amorphous silicon cell is shown in the second image.

Polycrystalline and amorphous thin-film cells use inexpensive glass, metal foil or polymer substrates to reduce cost. The polycrystalline thin film structures utilize direct-gap semiconductors for high absorption while amorphous Si capitalizes on the disorder to enhance absorption and hydrogen to passivate dangling bonds. It is quite amazing that these very defective thin-film materials can still yield high carrier collection efficiencies. Partly this comes from the field-assisted collection and partly from clever passivation of defects and manipulation of grain boundaries. In some cases we are just lucky that nature provides benign or even helpful grain boundaries in materials such as CdTe and Cu(InGa)Se2, although we seem not so fortunate with GaAs grain boundaries. It is now commonly accepted that, not only are grain boundaries effectively passivated during the thin-film growth process or by post-growth treatments, but also that grain boundaries can actually serve as collection channels for carriers. In fact it is not uncommon to find that the polycrystalline thin-film devices outperform their single-crystal counterpart.

In the past two decades there has been remarkable progress in the performance of small, laboratory cells. The common Si device has benefited from light trapping techniques, back-surface fields, and innovative contact designs; III-V multijunctions from high quality epitaxial techniques; a-Si:H from thin, multiple-junction designs that minimize the effects of dangling bonds (Staebler-Wronski effect); and polycrystalline thin films from innovations in low-cost growth methods and post-growth treatments.

However, these types of cells can’t be used for commercial production of electricity, since such III-V heterostructures are not cost-efficient. Now, as the research paper proposes, silicon nanowires could be a direct replacement for these exotic structures and, using the same confinement principles (as in quantum dots), may prove cost-efficient as well. Nanowires are direct-band-gap materials, and the band gap can be tuned anywhere between roughly 2 and 5 eV, providing wavelength selectivity between about 200 and 600 nm (E = hc/λ).

What the research paper considers are nanowires oriented along the [100] direction, in a square array, with the equilibrium lattice relaxed to a unit cell with a Si–Si bond spacing of 2.324 Å and a Si–H spacing of 1.5 Å. The nanowires are direct-band-gap semiconductors, which makes them excellent optical absorbers. All the nanowires examined in the paper show features in the absorption spectrum that correspond to excitonic processes. The lowest excitonic peaks for the nanowires occur at 5.25 eV (232 nm), 3.7 eV (335 nm), and 2.3 eV (539 nm), in increasing order of wire size. The corresponding optical wavelengths are shown in a table adapted from the paper. Absorption is tunable from the visible region to the near-UV portion of the solar spectrum.
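The quoted peak positions can be checked against E = hc/λ, i.e. λ(nm) ≈ 1239.84 / E(eV):

```python
HC_EV_NM = 1239.84  # eV*nm, the product hc expressed in convenient units

def ev_to_nm(e_ev):
    """Photon wavelength corresponding to a given excitonic peak energy."""
    return HC_EV_NM / e_ev

# Lowest excitonic peaks quoted in the paper, smallest wire to largest:
for e in (5.25, 3.7, 2.3):
    print(f"{e} eV -> {ev_to_nm(e):.0f} nm")
```

The conversion reproduces the 335 nm and 539 nm figures exactly; for 5.25 eV it gives about 236 nm rather than the quoted 232 nm, a small discrepancy in the source.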


No doubt, such silicon nanowires are excellent and could substantially reduce solar cell costs. See the research analysis for details.

These laboratory successes are now being translated into success in the manufacturing of large PV panel sizes in large quantities and with high yield. The worldwide PV market has been growing at 30% to 40% annually for the past five years due partly to market incentives in Japan and Germany, and more recently in some U.S. states.

Recent analyses of the energy payback time for solar systems show that today’s systems pay back the energy used in manufacturing in about 3.5 years for silicon and 2.5 years for thin films.

The decline of manufacturing costs follows an 80% experience curve nicely: costs drop 20% for each doubling of cumulative production. The newly updated PV Roadmap envisions PV electricity costing $0.06/kW-hr by 2015 and half of new U.S. electricity generation being produced by PV by 2025, if some modest and temporary nationwide incentives are enacted. Given the rate of progress in the laboratory and innovations on the production line, this ambitious goal might just be achievable. PV can then begin to play a significant role in reducing greenhouse gas emissions and in improving energy security. Some analysts see PV as the only energy resource that has the potential to supply enough clean power for a sustainable energy future for the world.
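The 80% experience curve translates directly into a cost formula, C = C₀ · 0.8^(log₂(Q/Q₀)). A sketch with hypothetical numbers (the $4/W starting cost and 16× production growth are illustrative, not from the Roadmap):

```python
import math

def experience_curve_cost(c0, cum_q0, cum_q, learning=0.80):
    """Cost after cumulative production grows from cum_q0 to cum_q,
    falling to `learning` (80%) of its value for each doubling."""
    doublings = math.log2(cum_q / cum_q0)
    return c0 * learning**doublings

# Hypothetical: a $4/W module cost and 16x cumulative production (4 doublings).
print(f"${experience_curve_cost(4.0, 1.0, 16.0):.2f}/W")
```

Four doublings at 80% leave about 41% of the original cost, so $4/W falls to roughly $1.64/W in this illustration.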

[Ref: APS and Optical Absorption Characteristics of Silicon Nanowires for Photovoltaic Applications by Vidur Prakash]

Extreme Tech: Genetic Engineering And Gene Programmer[Part-I]

As I promised yesterday, I’m going to suggest the greatest technology we could achieve by the end of this century. As the title says, it is related to what you might term genetic engineering. If I’m not wrong, DNA, the genome that carries our characteristics, is one of the most complex things there is, and it is programmed as well. As you may know, every property or characteristic a creature has comes from a different sequencing of its genome. For example, if the gene directly responsible for the albatross’s wings (though others are needed to make them adequate) were doped into you, you could probably fly high in the sky, and you wouldn’t need a jet pack or monoplane to do so. Oh, I’m forgetting: you wouldn’t even need to carry a heavy fuel tank.

The gene itself manages how to handle the pesky details. If a gene were programmed to grow your body bigger and bigger and let the multiplication of cells go on infinitely, with no bounds, you would become an immortal Homo sapiens giganta (a gene like this is actually responsible for the continuous growth of plants). Have you ever seen a rhino? It has a 5 cm thick skin which protects it, and indeed it is a gene that programmed the rhino’s body to develop such thick skin.

Why not consider those tiny, creepy-crawly creatures? Yes, I’m talking about insects and arachnids (spiders and other arthropods). I don’t find anything with better capabilities than insects, relative to their size. I’ve read on Wikipedia that rhinoceros beetles are among the strongest animals on the planet in relation to their own size, able to lift up to 850 times their own weight, and are even said to be capable of surviving nuclear warfare.[1]

In a laboratory experiment, Rob Knell from Queen Mary, University of London and Leigh Simmons from the University of Western Australia found that the strongest Onthophagus taurus could pull 1,141 times its own body weight. That’s equivalent to a person lifting close to 180,000 pounds (the same as six full double-decker buses). Isn’t it mind-boggling?

Just take an insect and try to scratch its outer shell: you’ll find how hard it is, and its claws are even stronger than its shell. I find their claws stronger than copper or aluminium wires of the same dimensions, which may astound you. And I’ve set aside the well-known spider’s web, which is stronger than steel (though a carbon nanotube could beat it).


Finally, we can conclude that it is the gene that programs every creature, whether an insect or an extremophile virus, and manages to maximize survival in every sort of environment, no matter how harsh. The upshot is that by manipulating genetic codes we could produce things not attainable even by nanotechnology. So what about a gene-programming machine? If we could design one (I’ll call it a “Gene Programmer,” and the creatures it produces “Bioprograms”), we could create every sort of bio-robot that seems nonexistent for today’s technology. Just mingle the genes of Onthophagus taurus and Supersaurus (about 140 feet long, 20 m high, and 120 tonnes) and what you would see would be a giant even more powerful than Godzilla; at a rough estimate it would have the power of 100 Godzillas (worth considering, since it would be far smaller than Godzilla). Or, if you are a good programmer, you could do an even better job at your own will. It doesn’t seem obsolete to me, and recent achievements suggest we can do even better in the near future. Such creatures could be used to terraform planets or even in space colonization. Look forward to the second part, in which I’ll review the implications of the Gene Programmer at different levels.

‘Survivor’ Black Holes May Be Mid-Sized: NASA News

New evidence from NASA’s Chandra X-ray Observatory and ESA’s XMM-Newton strengthens the case that two mid-sized black holes exist close to the center of a nearby starburst galaxy. These “survivor” black holes avoided falling into the center of the galaxy and could be examples of the seeds required for the growth of supermassive black holes in galaxies, including the one in the Milky Way.

For several decades, scientists have had strong evidence for two distinct classes of black hole: the stellar-mass variety with masses about ten times that of the Sun, and the supermassive ones, located at the center of galaxies, that range from hundreds of thousands to billions of solar masses.

But a mystery has remained: what about black holes that are in between? Evidence for these objects has remained controversial, and until now there were no strong claims of more than one such black hole in a single galaxy. Recently, a team of researchers has found signatures in X-ray data of two mid-sized black holes in the starburst galaxy M82 located 12 million light years from Earth.

“This is the first time that good evidence for two mid-sized black holes has been found in one galaxy,” said Hua Feng of Tsinghua University in China, who led two papers describing the results. “Their location near the center of the galaxy might provide clues about the origin of the Universe’s largest black holes — supermassive black holes found in the centers of most galaxies.”


Composite image of the nearby starburst galaxy M82. Image credit: X-ray: NASA/ CXC/Tsinghua Univ./H. Feng et al.

One possible mechanism for the formation of supermassive black holes involves a chain reaction of collisions of stars in compact star clusters that results in the buildup of extremely massive stars, which then collapse to form intermediate-mass black holes. The star clusters then sink to the center of the galaxy, where the intermediate-mass black holes merge to form a supermassive black hole.

In this picture, clusters that were not massive enough or close enough to the center of the galaxy to fall in would survive, as would any black holes they contain.

“We can’t say whether this process actually occurred in M82, but we do know that both of these possible mid-sized black holes are located in or near star clusters,” said Phil Kaaret from the University of Iowa, who co-authored both papers. “Also, M82 is the nearest place to us where the conditions are similar to those in the early Universe, with lots of stars forming.”

The evidence for these two “survivor” black holes comes from how their X-ray emission varies over time and analysis of their X-ray brightness and spectra, i.e., the distribution of X-rays with energy.

Chandra and XMM-Newton data show that the X-ray emission for one of these objects changes in a distinctive manner similar to stellar-mass black holes found in the Milky Way. Using this information and theoretical models, the team estimated this black hole’s mass is between 12,000 and 43,000 times the mass of the Sun. This mass is large enough for the black hole to generate copious X-rays by pulling gas directly from its surroundings, rather than from a binary companion, as stellar-mass black holes do.

The black hole is located at a projected distance of 290 light years from the center of M82. The authors estimate that, at this close distance, if the black hole was born at the same time as the galaxy and its mass was more than about 30,000 solar masses it would have been pulled into the center of the galaxy. That is, it may have just escaped falling into the supermassive black hole that is presumably located in the center of M82.

The second object, located 600 light years in projection away from the center of M82, was observed by both Chandra and XMM-Newton. During X-ray outbursts, periodic and random variations normally present in the X-ray emission disappear, a strong indication that a disk of hot gas dominates the X-ray emission. A detailed fit of the X-ray data indicates that the disk extends all the way to the innermost stable orbit around the black hole. Similar behavior has been seen from stellar-mass black holes in our Galaxy, but this is the first likely detection in a candidate intermediate-mass black hole.

The radius of the innermost stable orbit depends only on the mass and spin of the black hole. The best model for the X-ray emission implies a rapidly spinning black hole with mass in the range 200 to 800 times the mass of the Sun. The mass agrees with theoretical estimates for a black hole created in a star cluster by runaway collisions of stars.
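For a sense of scale, the innermost stable circular orbit sits at 6GM/c² for a non-spinning black hole and shrinks toward GM/c² for a maximally spinning (prograde) one, which is why disk fitting can constrain both mass and spin. A sketch over the quoted 200-800 solar-mass range (the formulas are the standard general-relativistic expressions, not taken from the papers):

```python
G = 6.674e-11      # m^3 kg^-1 s^-2
C = 2.998e8        # m/s
M_SUN = 1.989e30   # kg

def isco_radius_km(mass_solar, spinning=False):
    """Innermost stable circular orbit: 6GM/c^2 for a non-spinning hole,
    shrinking to GM/c^2 for a maximally spinning, prograde orbit."""
    r_g = G * mass_solar * M_SUN / C**2   # gravitational radius
    return (1 if spinning else 6) * r_g / 1e3

for m in (200, 800):
    print(f"{m} M_sun: {isco_radius_km(m):.0f} km (non-spinning), "
          f"{isco_radius_km(m, spinning=True):.0f} km (max spin)")
```

Even for these intermediate masses the ISCO is only a few thousand kilometers across, smaller than the Earth-Moon distance.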

“This result is one of the strongest pieces of evidence to date for the existence of an intermediate-mass black hole,” said Feng. “This looks just like well-studied examples of stellar-mass black holes, except for being more than 20 times as massive.”

Uranium Is Not Future Energy Source[Part-3]

What will happen when we have no coal, no crude oil, and almost no fossil fuels? Would our technological civilization die? Many advocates, including Brian Wang of Next Big Future, suggest uranium as a future energy source. However, I find many complications that oppose the advocates’ case. I’m not going to propose an alternative energy source right now, as B.W. asks in his article [if nuclear fission is not the energy source of the future then weird science needs to compare and present what the alternative is that he supports], but rest assured I will do so later, with a full analysis. I’ve described some of these complications in my previous articles, which are here. B.W. has suggested that extraction of uranium could be economical if we use seawater as a uranium resource, but that has proved uneconomical, with very low production. Another issue is whether such reactors could power the world for a long time (since they are suggested as a future source). Here is a Scientific American report which makes it clear that uranium is not the future energy source.

Most of the 2.8 trillion kilowatt-hours of electricity generated worldwide from nuclear power every year is produced in light-water reactors (LWRs) using low-enriched uranium (LEU) fuel. About 10 metric tons of natural uranium go into producing a metric ton of LEU, which can then be used to generate about 400 million kilowatt-hours of electricity, so present-day reactors require about 70,000 metric tons of natural uranium a year.

According to the NEA, identified uranium resources total 5.5 million metric tons, and an additional 10.5 million metric tons remain undiscovered: in total, a roughly 230-year supply at today’s consumption rate. Further exploration and improvements in extraction technology are likely to at least double this estimate over time.
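The arithmetic behind these two paragraphs is easy to verify:

```python
ANNUAL_KWH = 2.8e12        # worldwide nuclear electricity generated per year
KWH_PER_T_LEU = 4.0e8      # kWh generated per metric ton of LEU
NAT_U_PER_T_LEU = 10.0     # tons of natural uranium per ton of LEU

leu_per_year = ANNUAL_KWH / KWH_PER_T_LEU          # ~7,000 t LEU/yr
nat_u_per_year = leu_per_year * NAT_U_PER_T_LEU    # natural uranium demand
resource_tons = (5.5 + 10.5) * 1e6                 # identified + undiscovered (NEA)

print(f"Natural U demand: {nat_u_per_year:,.0f} t/yr")
print(f"Supply at today's rate: {resource_tons / nat_u_per_year:.0f} years")
```

This reproduces both the 70,000 metric tons per year demand and the roughly 230-year supply figure.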

Using more enrichment work could reduce the uranium needs of LWRs by as much as 30 percent per metric ton of LEU. And separating plutonium and uranium from spent LEU and using them to make fresh fuel could reduce requirements by another 30 percent. Taking both steps would cut the uranium requirements of an LWR in half.

The report has considered the current rate of energy consumption, whereas it is obvious that we will require far more energy than estimated here. The rate of energy consumption will depend considerably on the following basics:


  • Population growth is, of course, a central issue in studying how to meet future energy requirements. Over the 700 years I have suggested, how many people will inhabit Earth? It is projected that the total population by the end of this century will be over 11 billion. According to Wikipedia:
World population estimate milestones (in billions):

Population (billions):  1     2     3     4     5     6     7     8     9
Year:                   1804  1927  1960  1974  1987  1999  2012  2025  2040
Years elapsed:          -     123   33    14    13    12    13    13    15

The population of the world reached one billion in 1804, two billion in 1927, three billion in 1960, four billion in 1974, five billion in 1987, and six billion in 1999. The population of the world is projected to reach seven billion in 2011 or 2012, eight billion in 2025, and nine billion in 2040 or 2050. Now I ask B.W.: what will the estimates be over the 700 years in question? (Though population growth is a separate problem in its own right, it significantly affects the energy demand. Imagine a population of one million; the whole energy problem would be solved.)

World marketed energy consumption is projected to increase by 44 percent from 2006 to 2030. In the IEO2009 reference case, which reflects a scenario in which current laws and policies remain unchanged throughout the projection period, total world energy use rises from 472 quadrillion British thermal units (Btu) in 2006 to 552 quadrillion Btu in 2015 and then to 678 quadrillion Btu in 2030 (Figure 1). The current worldwide economic downturn dampens world demand for energy in the near term, as manufacturing and consumer demand for goods and services slows. In the longer term, with economic recovery anticipated after 2010, most nations return to trend growth in income and energy demand.

The most rapid growth in energy demand from 2006 to 2030 is projected for nations outside the Organization for Economic Cooperation and Development (non-OECD nations). Total non-OECD energy consumption increases by 73 percent in the IEO2009 reference case projection, compared with a 15-percent increase in energy use among the OECD countries. Strong long-term GDP growth in the emerging economies of the non-OECD countries drives this fast-paced growth in energy demand. In all the non-OECD regions combined, economic activity, measured by GDP in purchasing-power-parity terms, increases by 4.9 percent per year on average, compared with an average of 2.2 percent per year for the OECD countries. [ref: Figure 1, World Marketed Energy Consumption, 2006-2030 (Quadrillion Btu)]
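The quoted growth figures are easy to check: a rise from 472 to 678 quadrillion Btu over 2006-2030 is indeed 44 percent, and corresponds to a modest compound annual rate:

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by two endpoint values."""
    return (end / start) ** (1.0 / years) - 1.0

# IEO2009 reference case: 472 quads (2006) -> 678 quads (2030)
print(f"2006-2030 total growth: {678 / 472 - 1:.0%}")
print(f"Implied annual rate:    {cagr(472, 678, 24):.2%}")
```

A 44 percent rise over 24 years works out to only about 1.5 percent per year.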

I will publish more in my next posts, and I will also consider other opinions, as B.W. suggested by email.

To Be Continued…

Uranium Is Not Future Energy Source[Part-2]

More recently, Mr. Brian Wang of Next Big Future has raised some objections to my article Uranium Is Not Future Energy Source. He has provided some critical points against it and, more interestingly, some links supporting uranium as a future energy source: Japan has a project in development for large-scale recovery of uranium from seawater, and Sparton Resources is running projects to obtain uranium from coal ash. He seems mainly focused on presenting evidence for the abundance of uranium ores and for efforts to make uranium extraction cost-efficient. He hasn’t analyzed the other issues involved, such as radioactive waste management and environmental impact, which are mandatory considerations for any future energy source. His argument is essentially “since uranium is abundant, it is the future energy source.” Here I would like to add some further complications involved with uranium as a future energy source, and to continue the uranium debate by discussing these serious issues and offering some counterpoints. The data say that coal, petroleum, and other fossil fuels will last another 50 to 70 years (approximately), so we need an alternative energy source (it could be exotic or conventional: solar energy, wind energy, tidal energy, and many others have been suggested). We can use fossil fuels until the end of this period, but that doesn’t make fossil fuel a future energy source. So here is the question: what are the basic ingredients of a future energy source? I would suggest some basic premises:

  • A future energy source should be abundant and must last at least 700 years.
  • It should have the least possible environmental impact.
  • It should meet increasing energy needs while polluting as little as possible.
  • It should produce minimal harmful waste, and that waste should be easy to dispose of.

B.W. writes: “This is low energy because you are letting ocean currents move the seawater through the seaweed or ionized polymer that would trap uranium. Then after a few months you haul up the polymer or seaweed and get your uranium. This would not require milling.”

Here is a paper which suggests that extracting uranium from seawater is unlikely to be practical, since production is very low and it is not cost-efficient.

The uranium concentration in seawater is 3 μg/l, with an estimated 5 × 10⁹ tonnes of uranium in the oceans, in solution as the tricarbonato complex. Any extraction process will encounter the problems attendant on this high dilution, the only feasible methods currently being ion exchange on chelating resins or sorption onto inorganic materials.

Poly(amidoxime)/poly(hydroxamic acid) chelating resin has been produced with high uranium sorption from neutral solutions containing the metal as the tricarbonato complex, and the results of a study of the behaviour of this resin towards seawater are given. High chemical and biochemical stability and fast sorption kinetics are properties of the resin which can sorb 68 per cent of the uranium present using a 24 s resin to seawater contact time. Poly(amidoxime)/poly(hydroxamic acid) fibre, prepared by the oximation of poly(acrylonitrile) fibre, is able to sorb 12.5 per cent of uranium in seawater with a 2 s contact time and has the advantage of being in a form capable of weaving into a chelating cloth. Sorbing 1.8 mg uranium per gramme fibre per 10 days, the cloth can be produced as an endless belt, for a continuous process for uranium extraction. A theoretical model indicates that uranium production could be possible at 6 tonnes per annum.
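Taking the quoted sorption rate at face value, a back-of-envelope calculation shows how much fibre the 6-tonne-per-annum figure implies, and how far that is from today’s roughly 70,000-tonne annual demand (the continuous-cycling assumption is mine):

```python
SORPTION_MG_PER_G_PER_10D = 1.8   # quoted uptake of the chelating cloth
CYCLES_PER_YEAR = 365.0 / 10.0    # assume continuous 10-day soak cycles

def fibre_tons_needed(target_u_tons_per_year):
    """Fibre mass required to reach a yearly uranium output at the quoted rate."""
    mg_per_g_per_year = SORPTION_MG_PER_G_PER_10D * CYCLES_PER_YEAR
    grams = target_u_tons_per_year * 1e9 / mg_per_g_per_year  # 1 t = 1e9 mg
    return grams / 1e6                                        # g -> t

print(f"{fibre_tons_needed(6):.0f} t of fibre for 6 t U/yr")
print(f"{fibre_tons_needed(70000):.2e} t of fibre for a 70,000 t/yr demand")
```

Roughly 90 tonnes of fibre would be needed just for 6 tonnes of uranium a year; scaling to present reactor demand would require on the order of a million tonnes of fibre in continuous operation.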

It is clear that 6 tonnes of annual uranium production doesn’t make a significant contribution. I admit that there is a copious amount of uranium in nature, but my contention is that extracting it requires technologies which are either not feasible today or far too costly.

Nuclear waste

The second major problem with uranium as a future energy source is nuclear waste. Go through this link. In this article B.W. commented that we are going to become a Type I civilization on the Kardashev scale, which would manipulate 10^16 watts.
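It is worth asking how long uranium could actually power a Type I civilization. The sketch below assumes complete fission of uranium releases roughly 8 × 10^13 J/kg (a textbook figure, not from this article) and takes the 5 × 10^9 tonnes of oceanic uranium quoted earlier as the resource base:

```python
# How long would the ocean's uranium last a Kardashev Type I civilization
# (~1e16 W), assuming complete fission at ~8e13 J per kg of uranium?
# Optimistic: conventional reactors mostly burn the ~0.7% U-235 fraction.
POWER_W = 1e16
FISSION_J_PER_KG = 8e13          # assumed energy from complete fission
SECONDS_PER_YEAR = 3.15e7
OCEAN_URANIUM_TONNES = 5e9       # figure quoted earlier in the article

annual_joules = POWER_W * SECONDS_PER_YEAR
annual_tonnes = annual_joules / FISSION_J_PER_KG / 1000   # kg -> tonnes
years = OCEAN_URANIUM_TONNES / annual_tonnes
print(f"Consumption: {annual_tonnes:.2e} t/yr, lasting ~{years:.0f} years")
```

Even under this optimistic complete-fission assumption the oceanic inventory lasts on the order of a thousand years; with today's thermal reactors, which burn mainly the 0.7% U-235 fraction, the figure shrinks by roughly two orders of magnitude — well short of the 700-year criterion set out above.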

Here is another research paper

The high priority assigned by the Federal government to the early development and commercial deployment of the Liquid Metal Fast Breeder Reactor (LMFBR) is attributed by some to the supposition that, without the breeder, a supply-price squeeze on uranium will soon materialize. The present paper examines this supposition by considering the technology and economics of uranium utilization in nonbreeder reactors, in the context of available information about uranium resources at various prices and projections of the growth of nuclear power through 2020. Reactor characteristics, cost sensitivities, and estimates of uranium resources used here are based largely on publications of the U.S. Atomic Energy Commission. The results show that existing reactor technologies — light-water reactors (LWRs), high temperature gas reactors (HTGRs), or a mix of these — could meet even the most enthusiastic projections of the expansion of nuclear generation through 2020 from presently known domestic uranium supplies, exploitable at $50 per pound of U_3O_8 or less. The increment in electricity costs that arises from increasing uranium prices in the absence of commercial breeder reactors is about 1 mill/kwhe in 2000 and about 2 mills/kwhe in 2020 in the worst case (very high growth, no HTGRs), and significantly less in more plausible cases. In the perspective of the probable costs of the alternatives, these increments are modest; for example, the breeder’s greater insensitivity to the cost of uranium ore could easily be cancelled out if capital costs for the LMFBR prove higher than early estimates.

I’ll present more issues, with a keen analytical eye, in my upcoming articles.

Is It Real Life? Is Intelligence Real?

Some people would object to the attempt, in both AI and Alife, to ignore the differences between the natural and the artificial, or between physically embodied systems and systems simulated entirely in software.
They would claim that the attempt to find common general principles linking the natural and the artificial is misguided, because the artificially produced or evolved will not be the REAL thing, or perhaps the software-only versions will not be the REAL thing. It won’t be REAL life, REAL intelligence, REAL perception, REAL planning, REAL consciousness. Likewise, some claim that a system inhabiting only a virtual machine environment implemented in software cannot be an example of REAL life, REAL intelligence, REAL perception, REAL consciousness, etc.

The problem with this sort of objection is that it is based on dichotomous thinking. The assumption is that we have concepts like “alive”, “conscious”, and “intelligent” which divide things up into two classes: those which the concept applies to and those which it doesn’t. So the assumption is that everything either is alive or it isn’t.

This is obviously silly with concepts like “house”. A house is something that has a collection of features that make it a useful enclosure for its occupants. But there’s no well defined subset of those features that form a minimal requirement for something to be a house, so that everything that has those features is a house, and everything else isn’t. Rather, “house” is a cluster concept. It corresponds to a cluster of features which in various combinations can make something a house, but with no well defined boundary between the cases that are houses and those that are not, even if there are clear examples of both.

Maybe under some conditions you’d regard a rectangular sheet of metal supported by four poles as a house, maybe not. Maybe under some conditions you would call Buckingham Palace a house, maybe not. But arguing over whether something is a REAL house if it doesn’t have any walls, or any doors, or if it is as big and complex as a palace, is just silly.

The important thing is not to draw boundaries, but to understand that there is a large variety of cases with different combinations of features. We can study the implications of the presence or absence of various features, without worrying whether they make something a REAL house or not. We could, if we wish, give them different names, coined for the purpose of making new distinctions that we have found useful, e.g. “wallfree-house”, “palatial-house”, etc.

The same goes for concepts like “alive”, “conscious”, etc. These are also cluster concepts, which refer in a partially indeterminate way to collections of features which can be present or absent in different combinations.

Some subsets (e.g. the features found in a chicken, or a giraffe) definitely make something alive and other subsets (e.g. the features of a rock) definitely don’t. But there are many combinations which we have never previously encountered, and therefore our language has not needed to take decisions about whether they do or do not suffice for being alive.

Some of those combinations are found in artificial systems. In particular, over many years AI researchers have been examining ways of implementing artificial systems with combinations of various kinds of abilities, including visual perception, auditory perception, tactile perception, motor control, learning, planning, remembering, discovering new concepts, solving mathematical problems, painting pictures, composing poems and stories, communicating with other natural or artificial systems, acquiring new goals or interests, emotional capabilities, and many more.
Arguing over whether such systems, whether they are tangible robot-like entities or software agents in virtual reality environments, REALLY are alive or not, REALLY have mental states or not, is a complete waste of time, for there can be no answer.

But we can explore the implications of having different combinations of features and, for some combinations that recur often and are of interest, we can coin new unambiguous names: alive1, alive2, alive3, conscious1, conscious2, conscious3, etc., just as, when we discovered that a chemical element such as carbon could have two isotopes we did not need to waste time arguing over which is REALLY carbon. Instead we call one carbon12 and the other carbon14 (or whatever), and then study their similarities and differences.

So instead of arguing over whether the entities studied in Alife, or in AI, are alive or conscious or intelligent, or worrying about where to draw the boundaries between those which REALLY are and those which are not, we can simply note that different more refined versions of our old indefinite concepts can be defined, with different boundaries.

Then we can explore the implications of each case, e.g. which regions of niche space it can fit and what the implications of its design are. And we can go on to explore more global processes in which such systems interact with one another and, either individually or in groups, or across many generations, follow intricate trajectories in design space and niche space.

This replaces fruitless philosophical (or theological) debates with productive investigation. Let’s get on with the job. There’s lots to do.

Dying Civilizations And Fermi Paradox

by Alexander Popoff

What will you do if your home universe is dying? It may be that intelligent alien species are out there which are facing, or have faced, such a threat. Such highly advanced species are termed megacivilizations. The sentient species inhabiting our Universe, including humans (if they survive), should also leave it—if they want to make it.

The megacivilizations are monitoring, controlling, and to some extent guiding the development of organisms and intelligence in this life cycle of our Universe. It is possible that the Universe (the Earth, too) is visited periodically by some M.I. in order to take all sorts of samples, including specimens of intelligent life forms.

But why don’t these almighty creatures show up? Why don’t superior intelligences contact us in an open manner or officially? They have their good reasons for that.

Hoping for more clarity and a better solution, I consider the Fermi paradox in two aspects:

1. Why don’t we have hard evidence for intelligent life inhabiting our Universe?

2. Why don’t the megacivilizations outside of our Universe show up? They have the technological means to do that.

Debating the Fermi paradox, only the first aspect of it is usually taken into account—why don’t we have solid evidence for intelligent life inhabiting our Universe? The equal start hypothesis is a possible answer.

I often asked myself one simple question: Why are there in the Universe intelligent creatures at such a low level of evolution like humans, considering first, the enormous time since the beginning of the All—not only these humble 13.7 billion years since the origin of our Universe, but the countless time before that; and second, taking into account the incredible vastness of the Being—our Universe is only one of numerous worlds? It seems obvious to us that evolution should produce much higher intelligence during this immense time period.

Humans patently aren’t the pinnacle of development of all matter, life, intelligence, or whatever. The anthropic principle is a very self-misleading hypothesis.

Anthropocentric ideas regard humans as a central fact of the Universe and assume that Homo sapiens are the final aim and end of the All. It views and interprets everything in terms of human values and experience.

The simplest form of the anthropic principle claims that God created the Universe for us, humans; however, some religions and lores accept that there are many worlds in the All, inhabited by various creatures; hence, they reject the anthropocentric approach when interpreting universal principles.

The anthropic cosmological principle states that the natural laws, constants, and basic structures of our Universe are not completely arbitrary—instead, they are constrained by requirements that allow the existence of humans.

Humans are puffed up with imaginary self-importance. Most scholars can’t give up the term anthropic. Now there are several anthropic principles: the weak anthropic principle, strong anthropic principle, final anthropic principle, individual anthropic principle, participatory anthropic principle, etc. Researchers coined the term “observer,” realizing that humans can’t be the final aim and end of the Universe; hence, adding the word anthropic to some ultimate universal principle is hugely misleading (actually totally wrong).

The dog barking and running around your backyard is an observer, too. Is the Universe “designed” with the goal of generating and sustaining dogs? Personally, I doubt it.

Then the term was changed and the observer became intelligent. The intelligent observer principle (or the sapiens principle) is surely an incorrect term, too. The ancient shepherds were intelligent observers—they could heal their sheep, knew stories, poems, and myths, had their own cosmology, some could read, and so on. But the Universe (that capricious, grand old lady) didn’t stop developing when it reached this supposed final goal of generating the intelligent shepherd. Instead, it is galloping further at full speed.

Obviously, intelligent observing is not enough. Maybe we should add creation, participation in creation, acquiring new knowledge? Not enough either. Even now, humans are creating a lot of new things. What about great scientific discoveries? Presently, we are only assimilating the knowledge stored in the vector during previous evolutionary cycles of the Universe. The so-called big scientific discoveries could actually be transmitted from the vector to scientists who are prepared to understand them. At this very moment, billions of scientists in our Universe are discovering the same theories human scholars are finding out on Earth. Billions of Einsteins rediscovered the theory of relativity; actually they got it from the vector. A pretty humiliating idea!

One of the many tasks of the vector is to educate us. Now the sentient beings in our Universe (biological or not) are more like (bio)robots which are created, organized, controlled, educated, etc. by the vector. You don’t like the idea? I don’t like it either, but I would prefer to accept the truth instead of some self-misleading, puffed-up belief about the great importance of human and alien creatures consciously exploring Nature to the full extent. I too like the notion that we are intelligent, self-governing, originative, independent, creative, and self-sustaining creatures, but is it true?

Bitter facts are better than self-misleading illusions or doctrines if one is going to explore the world. Comforting lies have nothing to do with research.

We still don’t have enough knowledge about the evolution of Matter, Life, and Intelligence to get a clear picture about the future of the Universe and can’t draw realistic scientific conclusions about ultimate or final universal principles. We can only speculate.

The most probable answer to the seemingly simple question “Why are there in the Universe intelligent creatures at such a low level of evolution like humans?” could be that Matter, Life, and Intelligence develop in cycles and are arranged in suitable agglomerates. Now we are at the low stages of such a cycle of development in our present agglomerate, the Universe. We can access only our agglomerate. The creatures inhabiting more advanced agglomerates can access the lower ones and are guiding them to some extent.

The development of our Universe is the longest evolutionary cycle we know. According to modern science, it could last up to one hundred billion years. The developing universes might be subjected to an endless series of evolutionary cycles.

What’s the point of development in cycles? Why is Mother Nature repeating itself periodically? Can we observe such cycles on Earth?

Cycles are an inevitable part of evolving Matter, Life, and Intelligence. A cycle can last the whole life of a universe, but there are also a great number of shorter cycles: year, day, biological cycles of the human body, etc. Everything (matter or living creatures) is subject to cycles. Humans are also exposed to many individual cycles, but the most important one (from an evolutionary point of view) is: birth, life in competitive environment in order for the specimens to develop as much as possible, transferring the gathered information—the genetic one through the DNA; the acquired practical and scientific knowledge through education—to the next generation, and death. Generations and universes are following the same pattern. Generation after generation humans are becoming more developed and sophisticated. Cycle after cycle the universes are producing more advanced intelligences.

The megacivilizations from previous evolutionary cycles are superbeings in comparison to us, but they still can’t change the established global process of creation and evolution. Our Universe is like some kind of gigantic womb reproducing megaintelligences. Maybe in the future there will be other means of reproduction and evolution of intelligent beings, but the main principles will remain the same, at least for a very long time.

But why don’t the megacivilizations show up? Because they want to get offspring from our Universe—healthy, intelligent, competitive. The megacivilizations must play their competitive games, too: among them there should also be winners and losers. Leaving their dying universes, the megaintelligences don’t enter some sort of paradise, but another competitive world which shows no mercy when it comes to evolution.

World War I, World War II, the Cold War, and many earlier wars greatly stimulated the development of science and technology. There are enough studies on the subject. Believe it or not, like it or not, wars are one of the main motors of evolution and of the development of science and technology. They are the highest degree of competition. But we don’t like wars, no matter how productive they are. We want to live in peace and in good health as long as possible. If the mighty megacivilizations showed up, we would ask them to stop the wars, which would actually reduce the level of competition. But this is against the interests of the megaintelligences, because evolution would become much slower and the end product of the Universe—actually the offspring any M.I. is waiting for—would develop below expectations.

If the superior megacivilizations showed up, we would also ask them to prolong our lives. They have the know-how: humans could live 10,000 years or longer in perfect health—no cancer, no heart attacks…there are thousands of life-threatening diseases. But on the other hand, poor health, numerous illnesses, and short life expectancy are mighty stimuli for humans to develop medicine, science, and technology, which in turn accelerate evolution. There are other similar reasons why the megacivilizations don’t openly visit civilizations like ours.

The megacivilizations are also guiding to some extent the evolution of intelligence in our Universe and in other universes, and they fully agree with what they watch on Earth and on other planets—they see numerous healthy space races developing quickly, exactly as they expect. They should be satisfied, because their offspring will be more advanced than the newborns from the previous evolutionary cycle of the Universe. Just as we expect the next human generation to be more developed than the previous one, we make our best efforts to ensure that our kids get a better education, are healthier, and so on, so that they will be better than us in everything.

Maybe the megacivilizations are guided as well, and there might be some sort of multilevel creation and control.

The megaintelligences are not going to save us from the benefits of competition—benefits from their point of view that are only disasters for us. They do not intend to prolong our lives (we should take care of that ourselves), to solve the poverty problem (another mighty stimulus), or to stop the wars and crimes, and so forth. The story of the Savior that all are waiting for is just a myth, giving us hope for better days. It is counterproductive, for if it became a reality, it would slow down evolution. But the Final Judgment is a reality which humanity will inevitably face. Not all space civilizations will survive.

The megacivilizations are furtively guiding us, together with the vector, in a highly clandestine manner, revealing their presence to us through mythology, religions, parapsychological manifestations, etc. Their goal is: numerous intelligent species, developing as fast as possible.

Humans are still alive because they aren’t here: the extraterrestrials from our Universe, their robotic probes, and the alien microbes. In order to produce huge numbers of sound space civilizations, the vector keeps them separated by huge cosmic distances. The equal start of advanced sentient life forms in such a great developing Universe gives these species the chance for survival and progress.

When potentially dangerous manned spacecraft, von Neumann probes, Popoff machines, or alien life forms come into the Solar System, a reliable space defense system will be a matter of life and death.

The lifesaving immune system is a complex network of interacting cells, cell products, and cell-forming tissues protecting the living body from pathogens and other foreign substances. It destroys infected and malignant cells and removes them. There is no other way for the living organism to survive. Seconds after the immune system stops working, the body begins to decompose.

In the near future, humans should begin to build up a reliable space defense system—a complex network of interacting humans, software, and machines, protecting our present habitat, the Solar System, from pathogens, quasi-alive forms, machines, and any alien life forms which could pose a threat to the terrestrial life or would change our environment. There is no other way to survive.
