Multifunctional Carbon Nanotubes – Introduction and Applications


Over the past several decades there has been explosive growth in research and development related to nanomaterials. Among these, one material, carbon nanotubes, has led the way in terms of its fascinating structure as well as its ability to provide function-specific applications ranging from electronics to energy and biotechnology. Carbon nanotubes (CNTs) can be viewed as carbon whiskers: tubules of nanometer dimensions with properties close to those of an ideal graphite fiber. Due to their distinctive structure they can be considered matter in one dimension (1D).

In other words, a carbon nanotube is a honeycomb lattice rolled onto itself, with a diameter of the order of nanometers and a length of up to several micrometers. Generally, two distinct types of CNTs exist, depending on whether the tube is made of more than one graphene sheet (multi-walled carbon nanotube, MWNT) or only one graphene sheet (single-walled carbon nanotube, SWNT). For a detailed description of CNTs, please refer to the article by Prof. M. Endo.
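
As a concrete illustration of this rolled-up-graphene picture (our addition, not part of the original article), the diameter of a SWNT follows directly from the chiral indices (n, m) of the roll-up vector via the standard relation d = a·sqrt(n² + nm + m²)/π, with a ≈ 0.246 nm the graphene lattice constant. A minimal Python sketch:

```python
import math

GRAPHENE_LATTICE_CONSTANT_NM = 0.246  # a = sqrt(3) * C-C bond length (0.142 nm)

def swnt_diameter_nm(n: int, m: int) -> float:
    """Diameter of a single-walled nanotube with chiral indices (n, m)."""
    return GRAPHENE_LATTICE_CONSTANT_NM * math.sqrt(n**2 + n*m + m**2) / math.pi

# Example: the commonly cited (10, 10) armchair tube is ~1.36 nm across.
print(f"(10,10) SWNT diameter: {swnt_diameter_nm(10, 10):.2f} nm")
```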

A Truly Multifunctional Material

Irrespective of the number of walls, CNTs are envisioned as new engineering materials that possess unique physical properties suitable for a variety of applications. Such properties include large mechanical strength, exotic electrical characteristics, and superb chemical and thermal stability. In particular, the development of techniques for growing carbon nanotubes in a very controlled fashion (such as aligned CNT architectures on various substrates) as well as on a large scale presents investigators all over the world with enhanced possibilities for applying these controlled CNT architectures to fields such as vacuum microelectronics, cold-cathode flat panel displays, field emission devices, vertical interconnect assemblies, gas breakdown sensors, biofiltration, and on-chip thermal management.

Apart from their outstanding structural integrity and chemical stability, the property that makes carbon nanotubes truly multifunctional is that they have a lot to offer (literally) in terms of specific surface area. Depending on the type of CNT, specific surface areas range from about 50 m²/g to several hundred m²/g, and with appropriate purification processes they can be increased to roughly 1000 m²/g.
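
For a rough sense of where such numbers come from, here is a back-of-the-envelope estimate (an illustrative sketch of ours, not a calculation from the article): a graphene sheet has a fixed mass per unit area set by the carbon-carbon spacing, so a closed SWNT exposing only its outer wall has a theoretical specific surface area of roughly 1300 m²/g, consistent with the ~1000 m²/g figure quoted above.

```python
import math

A_CC_NM = 0.142                        # carbon-carbon bond length in graphene (nm)
CARBON_MASS_KG = 12.011 * 1.66054e-27  # mass of one carbon atom

# Area per carbon atom in a graphene sheet: (3*sqrt(3)/4) * a_cc^2
area_per_atom_m2 = (3 * math.sqrt(3) / 4) * (A_CC_NM * 1e-9) ** 2

# One-sided specific surface area (a closed tube exposes only the outer wall)
ssa_m2_per_g = area_per_atom_m2 / (CARBON_MASS_KG * 1e3)

print(f"Theoretical one-sided SSA: {ssa_m2_per_g:.0f} m^2/g")  # ~1315 m^2/g
```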

Extensive theoretical and experimental studies have shown that the presence of large specific surface areas is accompanied by the availability of different adsorption sites on the nanotubes. For example, in CNTs produced using catalyst-assisted chemical vapor deposition, adsorption occurs only on the outer surface of the curved cylindrical wall. This is because production processes that use metal catalysts usually lead to nanotubes with closed ends, restricting access to the hollow interior of the tube.

However, there are simple procedures (mild chemical or thermal treatments) that can remove the end caps of MWNTs, opening up another adsorption site (inside the tube), as shown schematically in Figure 1. Similarly, the large-scale production processes for SWNTs lead to bundling of the tubes. Due to this bundling effect, SWNT bundles provide various high-energy binding sites (for example, grooves; Figure 1). This means that large surface areas are available in a small volume, and these surfaces can interact with other species or can be tailored and functionalized.

Figure 1: Possible binding sites available for adsorption on (left) MWNTs and (right) SWNTs surfaces.

Our group’s own research interests are directed toward utilizing these materials in applications related to energy and the environment, where their high specific surface areas play a crucial role. Two such energy-related applications are discussed below:

  • CNT Based Electrochemical Double Layer Capacitors
  • CNT Based catalyst support

CNT Based Electrochemical Double Layer Capacitors

Electrochemical double layer capacitors (EDLCs, also referred to as supercapacitors or ultracapacitors) are envisioned as devices capable of providing both high energy density and high power density. With extremely long lifespans and charge-discharge cycle capabilities, EDLCs are finding versatile applications in the military, space, transportation, telecommunications and nanoelectronics industries.

An EDLC contains two non-reactive porous plates (electrodes, or collectors, with extremely high specific surface area), separated by a porous membrane and immersed in an electrolyte. Various studies have shown the suitability of CNTs as EDLC electrodes. However, proper integration of CNTs with the collector electrodes is needed to minimize the overall device resistance and enhance the performance of CNT-based supercapacitors. One strategy for achieving this is to grow CNTs directly on metal surfaces and use them as EDLC electrodes (Figure 2). EDLC electrodes with very low equivalent series resistance (ESR) and high power densities can be obtained by using such approaches.
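
To make the link between low ESR and high power concrete, the standard textbook relations for a capacitor are E = ½CV² for stored energy and P_max = V²/(4·ESR) for maximum deliverable power. The sketch below uses purely illustrative placeholder values, not measurements from the devices described here:

```python
def edlc_energy_wh(capacitance_f: float, voltage_v: float) -> float:
    """Stored energy E = 1/2 * C * V^2, converted from joules to watt-hours."""
    return 0.5 * capacitance_f * voltage_v**2 / 3600.0

def edlc_max_power_w(voltage_v: float, esr_ohm: float) -> float:
    """Maximum matched-load power P = V^2 / (4 * ESR)."""
    return voltage_v**2 / (4.0 * esr_ohm)

# Illustrative numbers only: a 10 F cell at 2.5 V.
for esr in (0.1, 0.01):  # lowering ESR directly multiplies the available power
    print(f"ESR={esr} ohm -> E={edlc_energy_wh(10, 2.5):.4f} Wh, "
          f"P_max={edlc_max_power_w(2.5, esr):.1f} W")
```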

Figure 2: (a) Artist’s rendition of an EDLC formed by aligned MWNTs grown directly on metal; (b) an electrochemical impedance spectroscopy plot showing the low ESR of such EDLC devices; and (c) very symmetric, near-rectangular cyclic voltammograms of such devices, indicating impressive capacitance behavior.

CNT Based Catalyst Support

Catalysts play an important role in our existence today. Catalysts are small particles (on the order of 10⁻⁹ meter, a nanometer) which, due to their unique surface properties, can enhance important chemical reactions leading to useful products. In any catalytic process, the catalysts are dispersed on a high-surface-area material known as the catalyst support. The support provides mechanical strength to the catalysts in addition to enlarging the specific catalytic surface and enhancing the reaction rates. CNTs, with their high specific surface areas, outstanding mechanical and thermal properties, and chemical stability, can potentially become the material of choice for catalyst support in a variety of catalyzed chemical reactions.

We are presently exploring the idea of using CNTs as catalyst support in the Fischer-Tropsch (FT) synthesis process. The FT reaction can convert a mixture of carbon monoxide and hydrogen into a wide range of straight-chain and branched olefins, paraffins and oxygenates (leading to the production of high-quality synthetic fuels). Our preliminary FT synthesis experiments on CNT-supported FT catalysts (generally cobalt and iron) show that the conversion of CO and H2 obtained with FT-catalyst-loaded CNTs is orders of magnitude higher than that obtained with conventional FT catalysts (Figure 3), indicating that CNTs offer a new breed of non-oxide catalyst supports with superior performance for FT synthesis.
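
As a rough illustration of the chemistry being catalyzed (our sketch of the idealized stoichiometry, not the group’s reaction model), paraffin-forming FT synthesis follows n CO + (2n+1) H2 → CnH2n+2 + n H2O, so the H2/CO usage ratio approaches 2 for long chains:

```python
def ft_h2_co_usage_ratio(n_carbons: int) -> float:
    """Idealized paraffin synthesis: n CO + (2n+1) H2 -> CnH(2n+2) + n H2O."""
    return (2 * n_carbons + 1) / n_carbons

for n in (1, 5, 10, 20):
    print(f"C{n} paraffin: H2/CO usage ratio = {ft_h2_co_usage_ratio(n):.2f}")
```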

Figure 3: CNT paper used as catalyst support for FT synthesis, and comparison of conversion ratios of CO and H2.

So far, CNT research has provided substantial excitement and novel possibilities for developing applications based on interdisciplinary nanotechnology. The area of large-scale growth of CNTs is quite mature now, and hence it can be expected that several solid, large-volume applications will emerge in the near future.

[Source: Azonano]

Hydrotropi: A Hope for Space Colonization

A while back, I published an article explaining the need to colonize Mars. Last year, NASA successfully developed “alternative crops” that can be grown in space-like (zero-gravity) conditions.

Plants are fundamental to life on Earth, converting light and carbon dioxide into food and oxygen. Plant growth may be an important part of human survival in exploring space, as well. Gardening in space has been part of the International Space Station from the beginning — whether peas grown in the Lada greenhouse or experiments in the Biomass Production System. The space station offers unique opportunities to study plant growth and gravity, something that cannot be done on Earth.

The latest experiment that has astronauts putting their green thumbs to the test is Hydrotropism and Auxin-Inducible Gene expression in Roots Grown Under Microgravity Conditions, known as HydroTropi. Operations were conducted October 18-21, 2010. HydroTropi is a Japan Aerospace Exploration Agency (JAXA) study that looks at directional root growth. In microgravity, roots grow laterally, or sideways, instead of up and down as they do under Earth’s gravitational forces.

[HydroTropi: Overview

Experiment/Payload Overview

Information provided courtesy of the Japan Aerospace Exploration Agency (JAXA).

Brief Summary

Hydrotropism and Auxin-Inducible Gene expression in Roots Grown Under Microgravity Conditions (HydroTropi) determines whether the hydrotropic response can be used to control the root growth orientation of cucumber (Cucumis sativus) in microgravity.

Principal Investigator

  • Hideyuki Takahashi, Ph.D., Tohoku University, Sendai, Japan

Co-Investigator(s)/Collaborator(s)

  • Nobuharu Fujii, Ph.D., Tohoku University, Sendai, Japan
  • Yutaka Miyazawa, Ph.D., Tohoku University, Sendai, Japan

Sponsoring Space Agency

  Japan Aerospace Exploration Agency (JAXA)

Supporting Organization

  Information Pending

Expeditions Assigned

  25/26

Previous ISS Missions

  Increment 23/24 will be the first mission for HydroTropi operations.

    Experiment/Payload Description

    Research Summary

    The Hydrotropism and Auxin-Inducible Gene expression in Roots Grown Under Microgravity Conditions (HydroTropi) experiment has three specific aims:

    • First, it demonstrates that gravitropism (a plant’s ability to change its direction of growth in response to gravity) interferes with hydrotropism (a directional growth response in which the direction is determined by a gradient in water concentration).
    • Second, it clarifies the differential auxin response that occurs during the respective tropisms (reactions of a plant to a stimulus) by investigating auxin-inducible gene expression (auxin is a compound regulating plant growth).
    • Third, it shows whether hydrotropism can be used in controlling root growth orientation in microgravity.

    Description

    Hydrotropism and Auxin-Inducible Gene expression in Roots Grown Under Microgravity Conditions (HydroTropi) proposes to use the microgravity environment in space to separate hydrotropism from gravitropism and to dissect the respective mechanisms in cucumber roots.]

    Cucumber roots grew laterally in space after 70 hours in microgravity on STS-95. (JAXA)

    Using cucumber plants (scientific name Cucumis sativus), investigators aim to determine whether the hydrotropic response (plant root orientation toward water) can control the direction of root growth in microgravity. To perform the HydroTropi experiment, astronauts transport the cucumber seeds from Earth to the space station and then coax them into growth. The seeds, which reside in hydrotropism chambers, undergo 18 hours of incubation in the Cell Biology Experiment Facility (CBEF). The crew members then activate the seeds with water or a saturated salt solution, followed by a second application of water 4 to 5 hours later. The crew harvests the cucumber seedlings and preserves them in fixation tubes called Kennedy Space Center Fixation Tubes (KFTs), which are then stored in one of the station’s MELFI freezers to await return to Earth.
    The results from HydroTropi, which returns to Earth on STS-133, will help investigators better understand how plants grow and develop at the molecular level. The experiment will contrast a plant’s ability to change growth direction in response to gravity (gravitropism) with directional growth in response to water (hydrotropism). By looking at the plants’ reaction to the stimuli and the resulting differential auxin response (auxin being the compound that regulates plant growth), investigators will learn about auxin-inducible gene expression. In space, investigators hope HydroTropi will show them how to control directional root growth by using the hydrotropism stimulus; this knowledge may also lead to significant advancements in agricultural production on Earth.

    [Credit: NASA]

    Growing Crops on Other Planets

    Science fiction lovers aren’t the only ones captivated by the possibility of colonizing another planet. Scientists are engaging in numerous research projects that focus on determining how habitable other planets are for life. Mars, for example, is revealing more and more evidence that it probably once had liquid water on its surface, and could one day become a home away from home for humans. 

    “The spur of colonizing new lands is intrinsic in man,” said Giacomo Certini, a researcher at the Department of Plant, Soil and Environmental Science (DiPSA) at the University of Florence, Italy. “Hence expanding our horizon to other worlds must not be judged strange at all. Moving people and producing food there could be necessary in the future.” 

    Humans traveling to Mars, to visit or to colonize, will likely have to make use of resources on the planet rather than take everything they need with them on a spaceship. This means farming their own food on a planet that has a very different ecosystem than Earth’s. Certini and his colleague Riccardo Scalenghe from the University of Palermo, Italy, recently published a study in Planetary and Space Science that makes some encouraging claims. They say the surfaces of Venus, Mars and the Moon appear suitable for agriculture. 

    Defining Soil 

    The surface of Venus, generated here using data from NASA’s Magellan mission, undergoes resurfacing through weathering processes such as volcanic activity, meteorite impacts and wind erosion. Credit: NASA

    Before deciding how planetary soils could be used, the two scientists had to first explore whether the surfaces of the planetary bodies can be defined as true soil. 

    “Apart from any philosophical consideration about this matter, definitely assessing that the surface of other planets is soil implies that it ‘behaves’ as a soil,” said Certini. “The knowledge we accumulated during more than a century of soil science on Earth is available to better investigate the history and the potential of the skin of our planetary neighbors.” 

    One of the first obstacles in examining planetary surfaces and their usefulness in space exploration is to develop a definition of soil, which has been a topic of much debate. 

    “The lack of a unique definition of ‘soil,’ universally accepted, exhaustive, and (one) that clearly states what is the boundary between soil and non-soil makes it difficult to decide what variables must be taken into account for determining if extraterrestrial surfaces are actually soils,” Certini said. 

    At the proceedings of the 19th World Congress of Soil Sciences held in Brisbane, Australia, in August, Donald Johnson and Diana Johnson suggested a “universal definition of soil.” They defined soil as “substrate at or near the surface of Earth and similar bodies altered by biological, chemical, and/or physical agents and processes.” 

    The surface of the Moon is covered by regolith over a layer of solid rock. Credit: NASA

    On Earth, five factors work together in the formation of soil: the parent rock, climate, topography, time and biota (or the organisms in a region such as its flora and fauna). It is this last factor that is still a subject of debate among scientists. A common, summarized definition for soil is a medium that enables plant growth. However, that definition implies that soil can only exist in the presence of biota. Certini argues that soil is material that holds information about its environmental history, and that the presence of life is not a necessity. 

    “Most scientists think that biota is necessary to produce soil,” Certini said. “Other scientists, me included, stress the fact that important parts of our own planet, such as the Dry Valleys of Antarctica or the Atacama Desert of Chile, have virtually life-free soils. They demonstrate that soil formation does not require biota.” 

    The researchers of this study contend that classifying a material as soil depends primarily on weathering. According to them, a soil is any weathered veneer of a planetary surface that retains information about its climatic and geochemical history. 

    On Venus, Mars and the Moon, weathering occurs in different ways. Venus has a dense atmosphere at a pressure 91 times that found at sea level on Earth, composed mainly of carbon dioxide and sulphuric acid droplets with small amounts of water and oxygen. The researchers predict that weathering on Venus could be caused by thermal processes or corrosion carried out by the atmosphere, volcanic eruptions, impacts of large meteorites, and wind erosion.

    Using the method of aeroponics, space travelers will be able to grow their own food without soil and using very little water. Credit: NASA

    Mars is currently dominated by physical weathering caused by meteorite impacts and thermal variations rather than chemical processes. According to Certini, there is no active volcanism that affects the martian surface but the temperature difference between the two hemispheres causes strong winds. Certini also said that the reddish hue of the planet’s landscape, which is a result of rusting iron minerals, is indicative of chemical weathering in the past. 

    On the Moon, a layer of solid rock is covered by a layer of loose debris. The weathering processes seen on the Moon include changes created by meteorite impacts, deposition and chemical interactions caused by solar wind, which interacts with the surface directly. 

    Some scientists, however, feel that weathering alone isn’t enough and that the presence of life is an intrinsic part of any soil. 

    “The living component of soil is part of its unalienable nature, as is its ability to sustain plant life due to a combination of two major components: soil organic matter and plant nutrients,” said Ellen Graber, researcher at the Institute of Soil, Water and Environmental Sciences at The Volcani Center of Israel’s Agricultural Research Organization. 

    One of the primary uses of soil on another planet would be to use it for agriculture—to grow food and sustain any populations that may one day live on that planet. Some scientists, however, are questioning whether soil is really a necessary condition for space farming. 

    Soilless Farming – Not Science Fiction 

    With the Earth’s increasing population and limited resources, scientists are searching for habitable environments on places such as Mars, Venus and the Moon as potential sites for future human colonies. Credit: NASA

    Growing plants without any soil may conjure up images from a Star Trek movie, but it’s hardly science fiction. Aeroponics, as one soilless cultivation process is called, grows plants in an air or mist environment with no soil and very little water. Scientists have been experimenting with the method since the early 1940s, and aeroponics systems have been in use on a commercial basis since 1983. 

    “Who says that soil is a precondition for agriculture?” asked Graber. “There are two major preconditions for agriculture, the first being water and the second being plant nutrients. Modern agriculture makes extensive use of ‘soilless growing media,’ which can include many varied solid substrates.” 

    In 1997, NASA teamed up with AgriHouse and BioServe Space Technologies to design an experiment to test a soilless plant-growth system on board the Mir Space Station. NASA was particularly interested in this technology because of its low water requirement. Using this method to grow plants in space would reduce the amount of water that needs to be carried during a flight, which in turn decreases the payload. Aeroponically-grown crops also can be a source of oxygen and drinking water for space crews. 

    “I would suspect that if and when humankind reaches the stage of settling another planet or the Moon, the techniques for establishing soilless culture there will be well advanced,” Graber predicted. 

    Soil: A Key to the Past and the Future 

    The Mars Phoenix mission dug into the soil of Mars to see what might be hidden just beneath the surface. Credit: NASA/JPL-Caltech/University of Arizona/Texas A&M University

    The surface and soil of a planetary body hold important clues about its habitability, both in its past and in its future. For example, examining soil features has helped scientists show that early Mars was probably wetter and warmer than it is currently.

    “Studying soils on our celestial neighbors means to individuate the sequence of environmental conditions that imposed the present characteristics to soils, thus helping reconstruct the general history of those bodies,” Certini said. 

    In 2008, NASA’s Phoenix Mars Lander performed the first wet chemistry experiment using martian soil. Scientists who analyzed the data said the Red Planet appears to have environments more appropriate for sustaining life than was expected, environments that could one day allow human visitors to grow crops. 

    “This is more evidence for water because salts are there,” said Phoenix co-investigator Sam Kounaves of Tufts University in a press release issued after the experiment. “We also found a reasonable number of nutrients, or chemicals needed by life as we know it.” 

    Researchers found traces of magnesium, sodium, potassium and chloride, and the data also revealed that the soil was alkaline, a finding that challenged a popular belief that the martian surface was acidic. 

    This type of information, obtained through soil analyses, becomes important in looking toward the future to determine which planet would be the best candidate for sustaining human colonies.

    [Credit: Astrobiology Magazine]

    Tetris Therapy: Game may Ease Traumatic Flashbacks

    By Charles Q. Choi

    The video game Tetris may quell flashbacks of traumatic events in a way that other kinds of games can’t, researchers have found. The curious effect might have to do with how the shapes in the game compete with images of a traumatic scene for storage in one’s memory. Tetris, one of the most popular video games of all time, involves moving and rotating shapes falling down a playing field with the aim of creating horizontal lines of blocks without gaps. In earlier work, scientists at Oxford University in England found that playing Tetris after traumatic events could reduce flashbacks in healthy volunteers. The hope of this research is to reduce the painful memories linked with post-traumatic stress disorder (PTSD).

    Tetris therapy

    To see if this effect was found only in Tetris or with other games as well, the researchers compared Tetris with Pub Quiz Machine 2008, a word-based quiz game. The investigators began by showing volunteers a gruesome film with traumatic images of injury and death, such as fatal traffic accidents and graphic scenes of human surgery. After waiting a half-hour, in the first experiment, 20 volunteers played Tetris for 10 minutes, 20 played Pub Quiz and 20 did nothing. By examining diaries the volunteers kept for a week afterward to record any instances of flashbacks to the film, they found Tetris significantly reduced flashbacks while Pub Quiz significantly increased them. In a second experiment, the wait was extended to four hours, with 25 volunteers in each group and matching results. According to researcher Emily Holmes:

    Our latest findings suggest Tetris is still effective as long as it is played within a four-hour window after viewing a stressful film. Whilst playing Tetris can reduce flashback-type memories without wiping out the ability to make sense of the event, we have shown that not all computer games have this beneficial effect — some may even have a detrimental effect on how people deal with traumatic memories.

    The split mind

    To explain these unusual results, think of the mind as having two separate channels of thought. One is sensory, dealing with perceptions of the world as experienced through sight, sound, smell, taste and touch, while the other is conceptual, responsible for combining sensory details in a meaningful way. These channels generally work in harmony with each other — for instance, we might see and hear someone talk and quickly comprehend what that person is saying. However, after traumatic events, the sensory channel is thought to overwhelm the conceptual one. As such, we are less likely to, for example, remember a high-speed traffic accident as a story than as a flash of headlights and the noise of a crash. These sensory details then intrude repeatedly in a victim’s mind in the form of flashbacks, often causing great distress.

    Past research suggested there is a timeframe of up to six hours after a trauma in which one can interfere with the way traumatic memories are formed in the mind. During this window of opportunity, certain tasks can compete for the same mental channels needed to form those memories, in much the same way it can prove hard to hold a conversation while solving a math problem. As such, the Oxford team focused on Tetris, a task that demands visual attention and visual memory. They suggest the game achieves its beneficial effects regarding flashbacks by competing with traumatic details on the sensory channel. On the other hand, Pub Quiz might compete with the conceptual channel, reinforcing sensory details of traumatic events.

    These laboratory experiments can help us understand how unwanted flashback memories may be formed. This can help us better understand this fundamental aspect of human memory. It may also lead us to think about new ways to develop preventative treatments after trauma.

    However, she cautioned that this is early-stage laboratory research, and that further work is needed to move this into clinical situations.

    [Credit: LiveScience]

    Stepping into the Transhuman Stage: Connecting Human Brains through a Wireless Sensor Network (WSN)

    Telepathy is probably one of the most alluring and widespread concepts portrayed by sci-fi media, which sometimes carries ideas into the abyss of wildly improbable imagination. Natural telepathy may not be possible for everyone, but artificial telepathy could bring this capability within reach. You would be able to communicate through your brain using this artificial telepathy network technology: the Wireless Sensor Network.

    This future Wireless Sensor Network (WSN) based communication technology direction relates to a system and method for enabling human beings to communicate with one another by monitoring brain activity. In particular, it relates to a system and method where the brain activity of a particular individual is monitored and transmitted wirelessly (e.g. via satellite) from the location of the individual to a remote location, so that the brain activity can be computer-analyzed at the remote location, thereby enabling the computer and/or individuals at the remote location to determine what the monitored individual was thinking or wishing to communicate. In certain embodiments this future WSN-based communication technology direction would relate to the analysis of brain waves or brain activity, and/or to the remote firing of select brain nodes in order to produce a predetermined effect on an individual.

    Generally speaking, this future WSN-based communication technology direction fulfills the above-described needs by providing a method of communicating comprising the steps of:
    – providing a first human being at a first location;
    – providing a computer at a second location that is remote from the first location;
    – providing a satellite;
    – providing at least one sensor (preferably a plurality, e.g. tens, hundreds, or thousands, with each sensor monitoring the firing of one or more brain nodes or synapse-type members) on the first human being;
    – detecting brain activity of the first human being using at least one sensor, and transmitting the detected brain activity to the satellite as a signal including brain activity information;
    – the satellite sending a signal including the brain activity information to the second location;
    – a receiver at the second location receiving the signal from the satellite and forwarding the brain activity information in the signal to the computer;
    – comparing the received brain activity information of the first human being with normalized or averaged brain activity information relating to the first human being from memory; and
    – determining whether the first human being was attempting to communicate particular words, phrases or thoughts, based upon the comparing of the received brain activity information to the information from memory.

    In certain terms, the WSN-based future communication technology direction includes the following steps (a minimal sketch of the matching step follows this list):
    – asking the first human being a plurality of questions and recording the brain activity of the first human being responsive to those questions, in the process of developing the normalized or averaged brain activity information relating to the first human being stored in the memory;
    – maintaining a database in memory that includes, for each of a plurality (e.g. hundreds or thousands) of individuals, a number of prerecorded files, each corresponding to a particular thought, an attempt to communicate a word, phrase or thought, or a mental state;
    – comparing the measured brain activity of a given individual to that individual’s files in the database to determine what the individual is attempting to communicate or what type of mental state the individual is in.
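
As a purely hypothetical sketch of that matching step (no function names, thresholds, or data formats here come from the source), the measured activity could be scored against an individual’s prerecorded templates with a simple correlation measure:

```python
import numpy as np

def best_matching_thought(measured: np.ndarray,
                          templates: dict[str, np.ndarray]) -> tuple[str, float]:
    """Return the template label whose normalized correlation with the
    measured brain-activity vector is highest, plus that score."""
    best_label, best_score = "", -1.0
    for label, template in templates.items():
        score = float(np.corrcoef(measured, template)[0, 1])
        if score > best_score:
            best_label, best_score = label, score
    return best_label, best_score

# Hypothetical prerecorded files for one individual (random stand-ins here).
rng = np.random.default_rng(0)
templates = {"yes": rng.normal(size=64), "no": rng.normal(size=64)}
measured = templates["yes"] + 0.3 * rng.normal(size=64)  # noisy repeat of "yes"
print(best_matching_thought(measured, templates))        # -> ('yes', high score)
```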

    It is another object of this future WSN-based communication technology direction to communicate monitored brain activity from one location to another in a wireless manner, such as by IR, RF, or satellite. In certain terms of this WSN-based future communication technology direction, the computer located at the remote location includes a neural network, suitably programmed in accordance with known neural network techniques, for the purpose of receiving the monitored brain activity signals, transforming the signals into useful forms, training and testing the neural network to distinguish particular forms and patterns of physiological activity generated in the brain of the monitored individual, and/or comparing the received monitored brain activity information with stored information.



    [Source/Ref: Wireless Sensor Network based Future of Telecom Applications By Prof. Arun Dua]

    Small Asteroid Passes Between Earth and Moon


    A small asteroid will fly past Earth early Tuesday, within the Earth-moon system. The asteroid, 2010 TD54, will have its closest approach to Earth’s surface at an altitude of about 45,000 kilometers (27,960 miles) at 6:50 a.m. EDT (3:50 a.m. PDT). At that time, the asteroid will be over southeastern Asia in the vicinity of Singapore. During its flyby, asteroid 2010 TD54 has zero probability of impacting Earth. A telescope of the NASA-sponsored Catalina Sky Survey north of Tucson, Arizona, discovered 2010 TD54 on Oct. 9 (12:55 a.m. PDT) during routine monitoring of the skies.

    2010 TD54 is estimated to be about 5 to 10 meters (16 to 33 feet) wide. Due to its small size, the asteroid would require a telescope of moderate size to be viewed. A five-meter-sized near-Earth asteroid from the undiscovered population of about 30 million would be expected to pass daily within a lunar distance, and one might strike Earth’s atmosphere about every 2 years on average. If an asteroid of the size of 2010 TD54 were to enter Earth’s atmosphere, it would be expected to burn up high in the atmosphere and cause no damage to Earth’s surface.

    The distance used on the Near Earth Object page is always the calculated distance from the center of Earth. The distance stated for 2010 TD54 is 52,000 kilometers (32,000 miles). To get the distance it will pass from Earth’s surface, you need to subtract the distance from the center to the surface (which varies over the planet), or about one Earth radius. That puts the pass distance at about 45,500 kilometers (28,000 miles) above the planet.
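
The subtraction described above is easy to verify directly; the center-to-center distance below is the figure quoted in this article, and the mean Earth radius is a standard value:

```python
EARTH_MEAN_RADIUS_KM = 6371   # standard mean Earth radius
center_distance_km = 52_000   # closest approach measured from Earth's center

surface_distance_km = center_distance_km - EARTH_MEAN_RADIUS_KM
print(f"Pass distance above the surface: ~{surface_distance_km:,} km")  # ~45,600 km
```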

    NASA detects, tracks and characterizes asteroids and comets passing close to Earth using both ground- and space-based telescopes. The Near-Earth Object Observations Program, commonly called “Spaceguard,” discovers these objects, characterizes a subset of them, and plots their orbits to determine if any could be potentially hazardous to our planet.

     

    A newly discovered car-sized asteroid will fly past Earth early Tuesday. The asteroid, 2010 TD54, will make its closest approach to Earth at 6:51 a.m. EDT (3:51 a.m. PDT). Image credit: NASA/JPL

     

     

    NASA Thruster Test Aids Future Robotic Lander’s Ability to Land Safely

    NASA’s Marshall Space Flight Center in Huntsville, Ala., collaborated with NASA’s White Sands Test Facility in Las Cruces, N.M., and Pratt & Whitney Rocketdyne in Canoga Park, Calif., to successfully complete a series of thruster tests at the White Sands test facility. The tests will aid in maneuvering and landing the next generation of robotic lunar landers that could be used to explore the moon’s surface and other airless celestial bodies.

    The Robotic Lunar Lander Development Project at the Marshall Center performed a series of hot-fire tests on two high thrust-to-weight thrusters: a 100-pound-class thruster for lunar descent and a 5-pound-class thruster for attitude control. The team used a lunar mission profile during the test of the miniaturized thrusters to assess the capability of these thruster technologies for possible use on future NASA spacecraft.

    The test program fully accomplished its objectives, including evaluation of combustion stability, engine efficiency, and the ability of the thruster to perform the mission profile and a long-duration, steady-state burn at full power. The test results will allow the Robotic Lander Project to move forward with robotic lander designs using advanced propulsion technology.

    The test articles are part of the Divert Attitude Control System, or DACS, developed by the U.S. Missile Defense Agency of the Department of Defense. The control system provides two kinds of propulsion — one for control and the other for maneuvering. The Attitude Control System thrusters provide roll, pitch and yaw control. These small thruster types were chosen to meet the golf-cart-size lander’s requirement for lightweight, compact propulsion components, and to reduce overall spacecraft mass and mission cost by leveraging an existing government resource.

    The Missile Defense Agency heritage thrusters were originally used for short-duration flights and had not been qualified for space missions, so our engineers tested them to assess their capability for long-duration burns and to evaluate their performance and combustion behavior. The thrusters are a first step in reducing propulsion technology risks for a lander mission. The results will be instrumental in developing future plans associated with the lander’s propulsion system design.

     

    During tests of the five-pound thruster, the Divert Attitude Control System thruster fired under vacuum conditions to simulate operation in a space environment. The tests mimicked the lander mission profile and operation scenarios. Image Credit: NASA/MSFC

     

     

    During tests of the 100-pound thruster, the Divert Attitude Control System thruster fired under vacuum conditions to simulate operation in a space environment. The tests mimicked the lander mission profile and operation scenarios. The test included several trajectory correction maneuvers during the cruise phase; nutation control burns to maintain spacecraft orientation; thruster vector correction during the solid motor braking burn; and a terminal descent burn on approach to the lunar surface.

    The objective for the 5-pound-class thruster test was similar to that of the 100-pound thruster test, with additional emphasis on the thruster heating assessment due to the long-duration mission profile and operation with MMH/MON-25 — monomethylhydrazine (MMH) fuel and a nitrogen tetroxide (75 percent)/nitric oxide (25 percent) (MON-25) oxidizer.

    A standard propellant system for spacecraft is the MMH/MON-3 propellant system — containing 3 percent nitric oxide. An alternate propellant system, MMH/MON-25, contains 25 percent nitric oxide. With its chemical composition, it has a much lower freezing point than MON-3, making it an attractive alternative for spacecraft with its thermal benefits and resulting savings in heater power. Because the MMH/MON-25 propellant system has never been used in space, these tests allowed engineers to benchmark the test against the MMH/MON-3 propellant system.

    The lower freezing point could save considerable heater power for the spacecraft and increase thermal margin for the entire propulsion system. These tests showed stable combustion in all scenarios and favorable temperature results.

    [Image Credit: NASA]

    MAVEN Mission to Investigate Martian Atmosphere Mystery

    The Red Planet bleeds. Not blood, but its atmosphere, slowly trickling away to space. The culprit is our sun, which is using its own breath, the solar wind, and its radiation to rob Mars of its air. The crime may have condemned the planet’s surface, once apparently promising for life, to a cold and sterile existence.

    Features on Mars resembling dry riverbeds, and the discovery of minerals that form in the presence of water, indicate that Mars once had a thicker atmosphere and was warm enough for liquid water to flow on the surface. However, somehow that thick atmosphere got lost to space. It appears Mars has been cold and dry for billions of years, with an atmosphere so thin that any liquid water on the surface quickly boils away while the sun’s ultraviolet radiation scours the ground. Such harsh conditions are the end of the road for known forms of life, although it’s possible that martian life went underground, where liquid water may still exist and radiation can’t reach.

    The lead suspect for the theft is the sun, and its favorite M.O. may be the solar wind. All planets in our solar system are constantly blasted by the solar wind, a thin stream of electrically charged gas that continuously blows from the sun’s surface into space. On Earth, our planet’s global magnetic field shields our atmosphere by diverting most of the solar wind around it. The solar wind’s electrically charged particles, ions and electrons, have difficulty crossing magnetic fields. Mars can’t protect itself from the solar wind because it no longer has a shield: the planet’s global magnetic field is dead.

    Mars lost its global magnetic field in its youth billions of years ago. Once its planet-wide magnetic field disappeared, Mars’ atmosphere was exposed to the solar wind and most of it could have been gradually stripped away. “Fossil” magnetic fields remaining in ancient surfaces and other local areas on Mars don’t provide enough coverage to shield much of the atmosphere from the solar wind.

    Although the solar wind might be the primary method, like an accomplished burglar, the sun’s emissions can steal the martian atmosphere in many ways. Most follow a basic M.O.: the solar wind and the sun’s ultraviolet radiation turn the uncharged atoms and molecules in Mars’ upper atmosphere into electrically charged particles (ions). Once they are electrically charged, electric fields generated by the solar wind carry them away. The electric field is produced by the motion of the charged, electrically conducting solar wind across the interplanetary, solar-produced magnetic field, which is the same dynamic that generators use to produce electrical power.

    An exception to this dominant M.O. is atoms and molecules that gain enough speed from solar heating to simply run away: they remain electrically neutral, but become hot enough to escape Mars’ gravity. Also, solar extreme ultraviolet radiation can be absorbed by molecules, breaking them into their constituent atoms and giving each atom enough energy that it might be able to escape from the planet. There are other suspects. Mars has more than 20 ancient craters larger than 600 miles across, scars from giant impacts by asteroids the size of small moons. This bombardment could have blasted large amounts of the martian atmosphere into space. However, huge martian volcanoes that erupted after the impacts, like Olympus Mons, could have replenished the martian atmosphere by venting massive amounts of gas from the planet’s interior.

    MAVEN Orbit

    It’s possible that the hijacking of the martian air was an organized crime, with both impacts and the solar wind contributing. Without the protection of its magnetic shield, any replacement martian atmosphere that may have issued from volcanic eruptions eventually would also have been stripped away by the solar wind. Earlier Mars spacecraft missions have caught glimpses of the heist. For example, flows of ions from Mars’ upper atmosphere have been seen by both NASA’s Mars Global Surveyor and the European Space Agency’s Mars Express spacecraft.

    “Previous observations gave us ‘proof of the crime’ but only provided tantalizing hints at how the sun pulls it off — the various ways Mars can lose its atmosphere to solar activity,” said Joseph Grebowsky of NASA’s Goddard Space Flight Center in Greenbelt, Md. “MAVEN will examine all known ways the sun is currently swiping the Martian atmosphere, and may discover new ones as well. It will also watch how the loss changes as solar activity changes over a year. Linking different loss rates to changes in solar activity will let us go back in time to estimate how quickly solar activity eroded the Martian atmosphere as the sun evolved.”

    As the martian atmosphere thinned, the planet got drier as well, because water vapor in the atmosphere was also lost to space, and because any remaining water froze out as the temperatures dropped when the atmosphere disappeared. MAVEN can discover how much water has been lost to space by measuring hydrogen isotope ratios.

    Isotopes are heavier versions of an element. For example, deuterium is a heavy version of hydrogen. Normally, two atoms of hydrogen join an oxygen atom to make a water molecule, but sometimes the heavier, rarer deuterium takes a hydrogen atom’s place. On Mars, hydrogen escapes faster because it is lighter than deuterium. Since the lighter version escapes more often, over time the martian atmosphere has less and less hydrogen compared to the amount of deuterium remaining; the martian atmosphere therefore becomes richer and richer in deuterium.

    The MAVEN team will measure the amount of hydrogen compared to the amount of deuterium in Mars’ upper atmosphere, which is the planet’s present-day hydrogen to deuterium (H/D) ratio. They will compare it to the ratio Mars had when it was young — the original H/D ratio. The original ratio is estimated from observations of the H/D ratio in comets and asteroids, which are believed to be pristine, “fossil” remnants of our solar system’s formation. Comparing the present and original H/D ratios will allow the team to calculate how much hydrogen, and therefore water, has been lost over Mars’ lifetime. For example, if the team discovers the martian atmosphere is ten times richer in deuterium today, the planet’s original quantity of water must have been at least ten times greater than that seen today.
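
The logic of that example can be written out explicitly. Under the simplest possible assumption (our simplification: all enrichment comes from preferential escape of ordinary hydrogen, ignoring fractionation details), an enrichment factor f in the D/H ratio implies the original water inventory was at least f times today’s:

```python
def min_original_water_fraction(dh_today: float, dh_original: float) -> float:
    """Lower bound on (original water) / (current water) from D/H enrichment,
    assuming deuterium is retained while ordinary hydrogen escapes."""
    return dh_today / dh_original

# Illustrative: an atmosphere 10x richer in deuterium than the primordial
# (comet/asteroid-derived) ratio implies at least 10x more water originally.
print(min_original_water_fraction(dh_today=10.0, dh_original=1.0))  # 10.0
```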

    MAVEN will also help determine how much martian atmosphere has been lost over time by measuring the isotope ratios of other elements in the air, such as nitrogen, oxygen, and carbon. MAVEN is scheduled for launch between November 18 and December 7, 2013. If it is launched November 18, it will arrive at Mars on September 16, 2014 for its year-long mission.

    MAVEN in short:

    • The Mars Atmosphere and Volatile EvolutioN (MAVEN) mission, scheduled for launch in late 2013, will be the first mission devoted to understanding the Martian upper atmosphere.
    • The goal of MAVEN is to determine the role that loss of atmospheric gas to space played in changing the Martian climate through time. Where did the atmosphere – and the water – go?
    • MAVEN will determine how much of the Martian atmosphere has been lost over time by measuring the current rate of escape to space and gathering enough information about the relevant processes to allow extrapolation backward in time.


    [Credit: NASA]

    Are Prosthetic Genes Next?


    By Robert Holt

    There is nothing particularly thought provoking about a Teflon frying pan, but it has enormous utility when frying eggs. Teflon (the DuPont brand name for polytetrafluoroethylene) doesn’t exist in nature. It is a polymer, a chainlike assembly of simple, repeating, fluorinated carbon molecules, first synthesized by DuPont scientist Roy Plunkett in 1938. It is the only known substance to which a gecko cannot stick.

    DNA, or deoxyribonucleic acid, is a polymer. It is a natural polymer, a chainlike assembly of four different constituent deoxyribonucleotides, commonly referred to as the DNA bases A, G, C and T. Each base has considerably more structural ornamentation than the pedestrian fluorinated carbons of Teflon, and when appropriately paired and polymerized, as they are in the genome of every living thing, they form an elegant double helical structure. Genetic information, the instructions for cells to make the gene products that form the structural and functional components of cells, is carried in the particular order of bases in the double helix. The order of bases in the sum total of DNA that encodes our biosphere has been laid down over evolutionary time. The order is not immutable, but it is resilient when left on its own. We have become very good at reading the order of DNA bases (i.e., DNA sequences), to the point where an individual human genome comprising billions of ordered bases can be read in about a week. A bacterial genome, typically containing a million or so nucleotides, can be read about as fast as the DNA can be purified.

    Like Teflon, the new bacterium, Mycoplasma mycoides JCVI-syn1.0, has its origins in polymer chemistry. The genome sequence of its forebear, Mycoplasma mycoides LC, has been known for some time. When we know the order of bases in a piece of DNA we can physically reconstruct it. The procedure involves chemically modifying a base to specify its reactivity, joining it to another base to create a sequence of two, then demodifying this product in readiness for addition of the next base. It is slow, expensive and error-prone, and can support only a few dozen additions. The approach hasn’t changed much since the first chemical synthesis of a DNA molecule, a 77-base fragment of a yeast gene, by Har Gobind Khorana and colleagues in 1965. This being the case, the synthesis of a plethora of short DNA precursors, each a carbon copy of a particular fragment of the 1.08-million-base Mycoplasma mycoides LC genome, and the assembly of these chemical precursors into the complete, accurate and functional genome of JCVI-syn1.0, is a tour de force in both polymer chemistry and synthetic biology.
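
To give a feel for what “assembly” means here, the toy sketch below greedily merges synthetic fragments by their overlapping ends. This is purely illustrative; the actual JCVI work reportedly used hierarchical assembly in yeast, which this toy makes no attempt to reproduce:

```python
def merge_overlapping_fragments(fragments: list[str], min_overlap: int = 20) -> str:
    """Greedily join DNA fragments whose suffix/prefix overlap by at least
    min_overlap bases. A toy illustration, not a real assembler."""
    assembly = fragments[0]
    for frag in fragments[1:]:
        for k in range(min(len(assembly), len(frag)), min_overlap - 1, -1):
            if assembly.endswith(frag[:k]):
                assembly += frag[k:]
                break
        else:
            raise ValueError("fragments do not overlap sufficiently")
    return assembly

# Toy example with a 6-base overlap threshold.
pieces = ["ATGCGTACGTTAGC", "GTTAGCCATGGA", "CATGGATTACA"]
print(merge_overlapping_fragments(pieces, min_overlap=6))
```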

    The Mycoplasma mycoides JCVI-syn1.0 genome is a prosthetic genome because, like any other prosthesis, it is an artificial replacement of a missing body part, albeit an essential one in this particular case. Where will this remarkable new direction in chemical synthesis lead us? Unlike Teflon frying pans, JCVI-syn1.0 cells have zero utility. In fact, if anything they are more likely to have negative utility. It is well established that some types of mycoplasmas are infectious, and in the laboratory many a research project has been derailed by incidental mycoplasma contamination of cell cultures; considerable effort goes into making molecular biology labs mycoplasma-free, to the point where an entire industry is dedicated to this problem. A Google search for “Laboratory Mycoplasma Decontamination” returns 160,000 hits. Try it.

    So why would anyone want to dedicate years of R&D and tens of millions of dollars to build a mycoplasma? Why create the synthetic genome of a parasitic pathogen? To digress a little, synthetic mycoplasma is a legacy project. Initial studies begun over a decade ago focused on Mycoplasma genitalium because it was known to have one of the smallest genomes of any cellular organism – only half a million bases. It was anticipated that the small genome size, plus the lack of a fortified cell wall, would make genome reconstruction and activation more tractable. The reason to try to reconstruct and activate a synthetic genome was simply to show that it could be done. However, when the genitalium genome was built it could not be activated by transfer into a recipient mycoplasma cell, probably because its genomic composition was just too different from that of the standard recipient, Mycoplasma capricolum. To digress further, since capricolum was already known to be able to support transfer of the natural, but larger, mycoides genome, the synthetic genitalium operation was scrapped and replaced by the now successful synthetic mycoides project. Although there have been claims that, being engineerable, mycoplasmas could now have commercial applications, this is highly debatable. The fragile cell membrane that positions mycoplasmas so well as experimental organisms for microbial genomics makes them, at the same time, completely unsuitable for the heavy lifting of industry. These tasks are better suited to their more robust bacterial cousins.

    Although lacking any real-world utility, Mycoplasma mycoides JCVI-syn1.0 is definitely thought provoking. Why is this genome not just another synthetic polymer? What makes it more intriguing than polyester? At first glance it is probably clear to anyone that what sets this polymer apart is that, unlike any former product of chemical synthesis, it is supporting what is, undebatably, cellular life. Of course we don’t have a clue how to design an organism from scratch, to pick a particular order of A, G, C and T’s that yields some startlingly new but entirely pre-designed outcome. But we are now able to copy organisms. Change them a bit. So where do things go from here? Could we create a more complex microbe? A yeast, perhaps, which is an organism with a cell structure more closely related to multicellular entities like ourselves than to bacteria. Various yeast strains have been sequenced, and a typical yeast genome is only about ten times larger than that of Mycoplasma mycoides JCVI-syn1.0. How about a fruit fly, ten times larger still, and with a well characterized genome sequence? Or, if we follow this train of thought about as far as anyone would care to, how about a person? This is the real impact of JCVI-syn1.0. It is a demonstration that once we know a genome sequence, we can rebuild the organism it encodes. Even, in principle, a person. From scratch. Using chemically synthesized DNA fragments. To be sure, the technology is nowhere near being up to the task of constructing or activating anything as large and complex as a human genome, but the point is just that: the hurdle would be a technical one. A problem of scale. For better or worse, contemplation of human existence need no longer be purely metaphysical. We should ask ourselves how we feel about that, and start to act accordingly.

    [Ref: Cosmology Magazine]

    Implications to Extraterrestrial Civilizations and Fate of Our Civilization


    By J.R. Mooneyham

    As of late 2009, SETI and other searches of the heavens appear to indicate that my estimates above concerning the existence of living and technologically advanced extraterrestrial civilizations may be overly optimistic. Instead, most or all civilizations in our galaxy typically expire within their own 600-year gauntlet, comparable to our own period of 1900-2500 AD, essentially leaving the galaxy devoid of advanced civilizations for much of its history and over vast regions.

    The overwhelmingly lethal 600-year gauntlet of social and technological challenges described above isn’t the only possible explanation, just the most plausible one, based on the evidence available (in my own judgment, anyway).

    A few other possibilities for why our galaxy may be utterly empty of star farers (so far as we can tell) include:

    A. ALL still living and technologically advanced races have gone beyond any technology currently imaginable by mankind, so that they can defy the very laws of physics themselves (as we currently understand them) and thus show no heat signatures emanating from their greatest cosmological works, plus readily travel and communicate among the stars via means utterly undetectable to our finest instruments (essentially Godhood or magic, achieved via technology).

    Note it’s pretty unlikely that of multiple far-flung galactic civilizations, every single one has managed to reach the same god-like levels of technological prowess enabling them and their works to evade detection by our present instruments. And even if they had, technological progress is invariably unevenly distributed over space and time, and so sheer social and economic inertia (less advanced but ‘good enough’ technologies remaining in place over large regions and for many purposes) would likely result in multiple instances of alien technological works which would be far from stealthy in their emanations, and so detectable by the likes of us.

    B. Due to an astonishingly unlikely coincidence of events, virtually all civilizations galaxy-wide are only now ‘awakening’ technology-wise, so that basically everyone’s more or less as primitive today as humanity (give or take a few decades or maybe a century or so), and the only reason we haven’t seen anyone’s accidental or purposeful signal yet is that the signals are too weak and/or haven’t yet had time to reach us (The Star Trek TV show premise).

    C. Humanity and humanity alone is the only intelligent species to ever make it even this far in the galaxy, throughout its entire history since the Big Bang (the largely religious ‘we are special’ dogma; also known as the anthropomorphic concept in some circles).

    The anthropomorphic concept (that the universe was specially sculpted just to suit us) may end up being demolished by simple randomness if it turns out there are infinite sorts of universes out there, and so of course humanity appeared in the one (or many) which allowed it by chance. Likewise would frog people appear in a very slightly different class of universes somewhere. Or cockroach people. As for religious dogma (that the universe was created especially to house and nourish us and our ilk alone), that too may fall by the wayside as more people gain an understanding of just how big the universe is, and its full potential for paradigm busting. That is, it appears that 99%+ of the universe will forever be off limits to humanity, no matter how far our technologies advance. So saying this place was built just for us is like saying a 144-room mansion was designed just for one child occupant who’ll never be allowed to leave their cradle, let alone their nursery. So it would seem supporters of supreme-being dogma must explain how such wasteful extravagance universe-wide can be justified, in the face of horrific suffering here on Earth by so many innocents, especially children, and especially under the auspices of a God claimed to be merciful and caring.

    Fate Of  Our Civilization

    Extinction. Or collapse into a permanent medieval (or worse) state of anarchy and deprivation. These appear to be the normal ends of technological civilizations in our galaxy, based on everything we know circa early 2003.

    From The rise and fall of star faring civilizations in our own galaxy:

    “The Fermi Paradox which contrasts the 100% probability of life and intelligence developing on Earth against the thunderous silence from the heavens so far (no alien signals) may be resolved by four things: One, gamma ray bursters which may have effectively prohibited the development of sentient races until only the last 200 million years; Two, the lengthy gestation period required for the emergence of intelligence (which almost requires the entire useful lifespan of a given planet, based on our own biography); Three, the need for an unusually high measure of stability in terms of climate over hundreds of millions of years (the ‘Goldilocks’ scenario, enabled by a huge natural satellite like our Moon moderating the tilt of a planet’s axis, as well as gas giants parked in proper orbits to mop up excess comets and asteroids to reduce impact frequencies for a living world); and Four, an extremely dangerous 600 year or so ‘gauntlet’ of challenges and risks most any technological society must survive to become a viable long term resident of the galaxy (i.e. getting a critical mass of population and technology off their home world, among other things). That 600 year period may be equivalent to our own span between 1900 AD and 2500 AD, wherein we’ll have to somehow dodge the bullets of cosmic impacts, nuclear, biological, and nanotechnological war, terrorism, mistakes, and accidents, as well as food or energy starvation, economic collapse, and many other threats, both natural and unnatural. So far it appears (according to SETI results and other scientific discoveries) extremely few races likely survive all these.”

    Where We Stand Today
    There are six major guiding principles by which to defend civilization against the worst possible threats to its future:

    One, remove or minimize, across the entirety of humanity, the sources of all reasonable motivations to harm others, as well as the means to carry out such harm.

    Two, put into place and maintain robust structural impediments to, and socio-economic discouragements of, the domination of the many by a wealthy, powerful, or charismatic few.

    Three, ensure the utmost education and technological empowerment possible of the average individual world citizen, wherever this does not unreasonably conflict with the other principles listed here.

    Four, work to preserve existing diversity in life on Earth and its natural environments, as well as in human behavior, culture, media, languages, and technologies, and even nourish expansion in such diversity within human works, wherever this may be accomplished with minimal conflict regarding the other principles listed here.

    Five, excesses in intellectual property protections, censorship, and secrecy all basically amount to the same thing so far as posing threats to the robustness, prosperity, security (and even survival) of civilization is concerned. Therefore all three must be deliberately and perpetually constrained to the absolute minimum applications needed to protect humanity. In these matters it would typically be far better to err on the side of accessibility, openness, and disclosure than on the side of restriction.

    Six, seek out and implement ever better ways to document human knowledge and experience in the widest, deepest, and most accurate fashions possible for both the present and future of humanity, and offer up this recorded information freely to the global public for examination. This means the more raw the data, and the more directly sourced, the better. The more raw the data and less colored by opinions of the day, the better present and future citizens will be able to apply ever improving tools of scientific analysis to derive accurate results, and drive important decisions.

    Work faithfully and relentlessly to implement and enforce these six principles in perpetuity (always seeking the optimal balance between them all), and you should reduce the overall risk to civilization to that stemming from true mental illness or pure accidents.

    Robust and enlightened public health programs (among other things) can reduce the total risk that mental illness poses to society to negligible levels. That would leave the risk of accidents to deal with. Reducing the risks posed by various accidental events is another subject in itself, one I’ll leave to others to address.

    Yes, all of the items listed above are difficult, complex matters to achieve. But the only alternative may be extinction.

    Especially in a world where shortages of money, talent, knowledge, and time still define more of our economics and society than anything else. Anyone working to achieve one or more of these aims immediately encounters active opposition from various quarters, too. That may sound hard to believe, but look at a few examples: cuts in military spending, even in the most advanced and highly developed nations like the USA, face stiff opposition from many politicians because defense cuts are apparently less popular with voters than defense budget increases, almost no matter how peaceful the world happens to be at the time. Any cuts that do somehow get passed can often only be implemented by shutting down unneeded bases or various extravagant weapons programs. But either of those measures brings up cries of “lost jobs”, even in good times when those jobs might easily be replaced with other, less lethal ones. Weapons proliferation around the world is likewise often defended as generating jobs at home, despite the fact that those weapons often end up being used by naughty allies to kill innocents in conflicts where we ourselves have little or no involvement, except for our brand name and label being prominently emblazoned on the blasted shards in various scenes of mass death and destruction. Later on we often wonder why people on the receiving end of these weapons (in the hands of others) hate us so. And sometimes the weapons we sell end up being used against our own soldiers. But still we sell them, and sometimes even give them away.

    Maybe aiding in the spread of democracy and free speech throughout the world would seem an easier goal than stopping the proliferation of weapons and weapon technologies? Sorry, but no. Indeed, here in America our track record for a long time now is behavior that says democracy and free speech are too good for lots of folks other than ourselves. You see, the ill will built up from all that weapons proliferation, plus other actions on our part, has resulted in lots of countries where we’d be tossed out on our ear if real democracies suddenly sprang up in them.

    What actions am I talking about? Things like manipulating elections and interfering with other attempts at legitimate changeovers in power in foreign countries. CIA involvement to prop up dictatorships with whom we have deals for things like oil or other commodities. Stuff like that. There’s no telling how many democratic movements we’ve helped crush or cause to be stillborn around the world in the past century. Of course, you could say we were just emulating our parent countries, such as those of western Europe, which did many of the same things for several centuries before we ourselves successfully rebelled against them.

    It’s almost like we don’t want any other rebellions to succeed, in order to retain our own ‘special place’ in history. But is that fair? No.

    Of course, sometimes a nation manages to overthrow its oppressors despite our opposition and dirty tricks. But when that happens, our previous sins in the conflict result in whatever new government emerges being dead-set against us. Like in Iran, with the fall of the Shah. Our interference with their internal affairs so antagonized and polarized the Iranians that one result was eventual domination of the country by an Islamic extremist movement, which managed to overthrow the US-supported Shah. And naturally, when things didn’t go our way there we froze Iran’s assets and put in place trade sanctions against them. And in response, they may be seeking to obtain their own weapons of mass destruction and supporting various terrorist actions around the world.

    Could it be we are gradually arranging our own spectacular end (maybe even that of civilization itself) with all this chicanery? For the longer we continue this type of behavior, the more difficult and scary it becomes to consider stopping it. And the worse the eventual consequences might be. After all, we’re making a lot of enemies out there. A pretty hefty chunk of the human race, in fact. If and when they all finally overthrow their US-supported dictators or oppressive ruling regimes, they might not exactly want to send us flowers.

    I vote we try to find a way out of this mess now rather than prolonging and worsening it with politics-and-economics-as-usual. Before it’s too late. Before our world too becomes one of the silent ones in the galaxy.

    Key To Space Time Engineering: Huge Magnetic Field Created

    Sustained magnetic fields much beyond about 100 tesla have never been produced in a laboratory, yet scientists have now made electrons in graphene behave as though they were immersed in a field of roughly 300 tesla. Effective fields of this magnitude are nowhere near what any speculative space-time engineering would demand, but they hint at the extreme conditions such work might one day draw upon. Graphene, the extraordinary form of carbon that consists of a single layer of carbon atoms, has produced another in a long list of experimental surprises. In the current issue of the journal Science, a multi-institutional team of researchers headed by Michael Crommie, a faculty senior scientist in the Materials Sciences Division at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory and a professor of physics at the University of California at Berkeley, reports the creation of pseudo-magnetic fields far stronger than the strongest magnetic fields ever sustained in a laboratory – just by putting the right kind of strain onto a patch of graphene.

    “We have shown experimentally that when graphene is stretched to form nanobubbles on a platinum substrate, electrons behave as if they were subject to magnetic fields in excess of 300 tesla, even though no magnetic field has actually been applied,” says Crommie. “This is a completely new physical effect that has no counterpart in any other condensed matter system.”

    Crommie notes that “for over 100 years people have been sticking materials into magnetic fields to see how the electrons behave, but it’s impossible to sustain tremendously strong magnetic fields in a laboratory setting.” The current record is 85 tesla for a field that lasts only thousandths of a second. When stronger fields are created, the magnets blow themselves apart.

    The ability to make electrons behave as if they were in magnetic fields of 300 tesla or more – just by stretching graphene – offers a new window on a source of important applications and fundamental scientific discoveries going back over a century. This is made possible by graphene’s electronic behavior, which is unlike any other material’s.

    [Image Details: In this scanning tunneling microscopy image of a graphene nanobubble, the hexagonal two-dimensional graphene crystal is seen distorted and stretched along three main axes. The strain creates pseudo-magnetic fields far stronger than any magnetic field ever produced in the laboratory. ]

    A carbon atom has four valence electrons; in graphene (and in graphite, a stack of graphene layers), three of these electrons bond in a plane with their neighbors to form a strong hexagonal pattern, like chicken wire. The fourth electron sticks up out of the plane and is free to hop from one atom to the next. These pi-bond electrons act as if they have no mass at all, like photons, and they move at roughly one three-hundredth of the speed of light (about a million meters per second).
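    As a rough gloss of my own (not wording from the press release), this ‘massless’ behavior refers to graphene’s linear low-energy dispersion near its Dirac points,

    \[ E_{\pm}(\mathbf{k}) \;\approx\; \pm\, \hbar\, v_F\, |\mathbf{k}|, \qquad v_F \approx 10^{6}\ \mathrm{m/s}, \]

    so an electron’s energy grows in direct proportion to its momentum, just as a photon’s does (E = cp), with the Fermi velocity v_F playing the role of the speed of light.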

    The idea that a deformation of graphene might lead to the appearance of a pseudo-magnetic field first arose even before graphene sheets had been isolated, in the context of carbon nanotubes (which are simply rolled-up graphene). In early 2010, theorist Francisco Guinea of the Institute of Materials Science of Madrid and his colleagues developed these ideas and predicted that if graphene could be stretched along its three main crystallographic directions, it would effectively act as though it were placed in a uniform magnetic field. This is because strain changes the bond lengths between atoms and affects the way electrons move between them. The pseudo-magnetic field would reveal itself through its effects on electron orbits.
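    For readers who want the gist of that prediction, here is a minimal sketch of the standard strain-induced gauge-field picture (my paraphrase of the usual continuum treatment, not equations quoted from Guinea’s paper; β is the dimensionless electron-hopping parameter, a the carbon–carbon distance, and prefactor conventions vary between papers). A slowly varying strain field ε acts on graphene’s electrons like an effective vector potential,

    \[ \mathbf{A}_{\mathrm{ps}} \;\sim\; \frac{\hbar \beta}{2 e a} \begin{pmatrix} \varepsilon_{xx} - \varepsilon_{yy} \\ -2\,\varepsilon_{xy} \end{pmatrix}, \qquad B_{\mathrm{ps}} = \partial_x A_{\mathrm{ps},y} - \partial_y A_{\mathrm{ps},x}, \]

    so a strain pattern with the right threefold symmetry produces a nearly uniform pseudo-field B_ps even though no real magnetic field is applied.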

    In classical physics, electrons in a magnetic field travel in circles called cyclotron orbits. These were named following Ernest Lawrence’s invention of the cyclotron, because cyclotrons continuously accelerate charged particles (protons, in Lawrence’s case) in a curving path induced by a strong field.

    Viewed quantum mechanically, however, cyclotron orbits become quantized and exhibit discrete energy levels. Called Landau levels, these correspond to energies where constructive interference occurs in an orbiting electron’s quantum wave function. The number of electrons occupying each Landau level depends on the strength of the field – the stronger the field, the greater the energy spacing between Landau levels, and the denser the electron states become at each level – which is a key feature of the predicted pseudo-magnetic fields in graphene.
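    For comparison (these are standard textbook results, not numbers from the paper itself): in an ordinary conductor the Landau levels are evenly spaced, E_n = ħω_c(n + 1/2), with cyclotron frequency ω_c = eB/m, whereas for graphene’s effectively massless carriers they follow a square-root law,

    \[ E_n \;=\; \operatorname{sgn}(n)\, v_F \sqrt{2 e \hbar B\, |n|}, \qquad n = 0, \pm 1, \pm 2, \ldots \]

    and it is this characteristic square-root dependence on field strength and level index that serves as a fingerprint in tunneling spectra.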

    A serendipitous discovery

    [Image Details: A patch of graphene at the surface of a platinum substrate exhibits four triangular nanobubbles at its edges and one in the interior. Scanning tunneling spectroscopy taken at intervals across one nanobubble (inset) shows local electron densities clustering in peaks at discrete Landau-level energies. Pseudo-magnetic fields are strongest at regions of greatest curvature.]

    Describing their experimental discovery, Crommie says, “We had the benefit of a remarkable stroke of serendipity.”

    Crommie’s research group had been using a scanning tunneling microscope to study graphene monolayers grown on a platinum substrate. A scanning tunneling microscope works by using a sharp needle probe that skims along the surface of a material to measure minute changes in electrical current, revealing the density of electron states at each point in the scan while building an image of the surface.
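    In the standard, simplified picture of scanning tunneling spectroscopy (my summary, not the team’s wording), the differential conductance measured at bias voltage V tracks the local density of electron states of the sample beneath the tip,

    \[ \left.\frac{dI}{dV}\right|_{V} \;\propto\; \rho_{s}(\mathbf{r},\, E_F + eV), \]

    which is why peaks in dI/dV as the bias is swept can be read as peaks in the electron density of states, such as Landau levels.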

    Crommie was meeting with a visiting theorist from Boston University, Antonio Castro Neto, about a completely different topic when a group member came into his office with the latest data. It showed nanobubbles, little pyramid-like protrusions, in a patch of graphene on the platinum surface, and associated with these nanobubbles there were distinct peaks in the density of electron states. Crommie says his visitor, Castro Neto, took one look and said, “That looks like the Landau levels predicted for strained graphene.”

    Sure enough, close examination of the triangular bubbles revealed that their chicken-wire lattice had been stretched precisely along the three axes needed to induce the strain orientation that Guinea and his coworkers had predicted would give rise to pseudo-magnetic fields. The greater the curvature of the bubbles, the greater the strain, and the greater the strength of the pseudo-magnetic field. The increased density of electron states revealed by scanning tunneling spectroscopy corresponded to Landau levels, in some cases indicating giant pseudo-magnetic fields of 300 tesla or more.
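    To get a feel for the energy scales involved, here is a minimal back-of-the-envelope sketch (my own illustration using the graphene Landau-level formula given earlier and an assumed Fermi velocity of about 10^6 m/s; it is not code from the study):

        from math import sqrt

        E_CHARGE = 1.602e-19   # elementary charge, in coulombs
        HBAR = 1.055e-34       # reduced Planck constant, in joule-seconds
        V_FERMI = 1.0e6        # approximate Fermi velocity in graphene, in m/s

        def graphene_landau_level_eV(b_tesla, n=1):
            """Estimate the n-th Landau level energy (in eV) for massless Dirac electrons."""
            energy_joules = V_FERMI * sqrt(2 * E_CHARGE * HBAR * b_tesla * abs(n))
            return energy_joules / E_CHARGE

        for b in (85, 300):  # the pulsed-lab record vs. the strain-induced pseudo-field
            print(f"B = {b:>3} T  ->  E_1 ~ {graphene_landau_level_eV(b):.2f} eV")

    For a 300 tesla pseudo-field this gives a first Landau level roughly 0.6 eV above the Dirac point, far larger than room-temperature thermal energies of about 0.025 eV, which fits with Crommie’s remark below that the effect is strong enough to work at room temperature.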

    “Getting the right strain resulted from a combination of factors,” Crommie says. “To grow graphene on the platinum we had exposed the platinum to ethylene” – a simple compound of carbon and hydrogen – “and at high temperature the carbon atoms formed a sheet of graphene whose orientation was determined by the platinum’s lattice structure.”

    To get the highest resolution from the scanning tunneling microscope, the system was then cooled to a few degrees above absolute zero. Both the graphene and the platinum contracted – but the platinum shrank more, with the result that excess graphene pushed up into bubbles, measuring four to 10 nanometers (billionths of a meter) across and from a third to more than two nanometers high. To confirm that the experimental observations were consistent with theoretical predictions, Castro Neto worked with Guinea to model a nanobubble typical of those found by the Crommie group. The resulting theoretical picture was a near-match to what the experimenters had observed: a strain-induced pseudo-magnetic field some 200 to 400 tesla strong in the regions of greatest strain, for nanobubbles of the correct size.

    [Image Details: The colors of a theoretical model of a nanobubble (left) show that the pseudo-magnetic field is greatest where curvature, and thus strain, is greatest. In a graph of experimental observations (right), colors indicate height, not field strength, but measured field effects likewise correspond to regions of greatest strain and closely match the theoretical model.]

    “Controlling where electrons live and how they move is an essential feature of all electronic devices,” says Crommie. “New types of control allow us to create new devices, and so our demonstration of strain engineering in graphene provides an entirely new way for mechanically controlling electronic structure in graphene. The effect is so strong that we could do it at room temperature.”

    The opportunities for basic science with strain engineering are also huge. For example, in strong pseudo-magnetic fields electrons orbit in tight circles that bump up against one another, potentially leading to novel electron-electron interactions. Says Crommie, “this is the kind of physics that physicists love to explore.”

    “Strain-induced pseudo-magnetic fields greater than 300 tesla in graphene nanobubbles,” by Niv Levy, Sarah Burke, Kacey Meaker, Melissa Panlasigui, Alex Zettl, Francisco Guinea, Antonio Castro Neto, and Michael Crommie, appears in the July 30 issue of Science. The work was supported by the Department of Energy’s Office of Science and by the Office of Naval Research. I’ve contacted Crommie for more details about the research, and I hope to hear back from him soon.

    [Source: News Center]