Sunday, October 18, 2009

Growing Geodesic Carbon Nanodomes


Researchers analyzing the assembly of graphene (sheets of carbon only one atom thick) on a surface of iridium have found that the sheets grow by first forming tiny carbon domes. The discovery offers new insight into the growth of graphene layers and points the way to possible methods for assembling components of graphene-based computer circuits.
Paolo Lacovig, Monica Pozzo, Dario Alfè, Paolo Vilmercati, Alessandro Baraldi, and Silvano Lizzit at institutions in Italy, the UK and USA report their discovery in a paper appearing October 12 in the journal Physical Review Letters.

The researchers' spectroscopic study suggests that graphene grows in the form of tiny islands built of concentric rings of carbon atoms. The islands are strongly bonded to the iridium surface at their perimeters, but are not bonded to the iridium at their centers, which causes them to bulge upward in the middle to form minuscule geodesic domes. By adjusting the conditions as the carbon is deposited on the iridium, the researchers could vary the size of the carbon domes from a few nanometers to hundreds of nanometers across.

Investigating the formation of graphene nanodomes helps physicists to understand and control the production of graphene sheets. In combination with methods for adjusting the conductivity of graphene and related materials, physicists hope to replace electronics made of silicon and metal with tiny, efficient carbon-based chips.

Jorge Sofo and Renee Diehl (Penn State University) highlight the graphene nanodome research in a Viewpoint in the October 12 issue of Physics.

Human-like Vision Lets Robots Navigate Naturally


A robotic vision system that mimics key visual functions of the human brain promises to let robots manoeuvre quickly and safely through cluttered environments, and to help guide the visually impaired.

It’s something any toddler can do – cross a cluttered room to find a toy.

It's also one of those seemingly trivial skills that have proved to be extremely hard for computers to master. Analysing shifting and often-ambiguous visual data to detect objects and separate their movement from one’s own has turned out to be an intensely challenging artificial intelligence problem.

Three years ago, researchers at the European-funded research consortium Decisions in Motion (http://www.decisionsinmotion.org/) decided to look to nature for insights into this challenge.

In a rare collaboration, neuro- and cognitive scientists studied how the visual systems of advanced mammals, primates and people work, while computer scientists and roboticists incorporated their findings into neural networks and mobile robots.

The approach paid off. Decisions in Motion has already built and demonstrated a robot that can zip across a crowded room guided only by what it “sees” through its twin video cameras, and is hard at work on a head-mounted system to help visually impaired people get around.

“Until now, the algorithms that have been used are quite slow and their decisions are not reliable enough to be useful,” says project coordinator Mark Greenlee. “Our approach allowed us to build algorithms that can do this on the fly, that can make all these decisions within a few milliseconds using conventional hardware.”

How do we see movement?

The Decisions in Motion researchers used a wide variety of techniques to learn more about how the brain processes visual information, especially information about movement.

These included recording individual neurons and groups of neurons firing in response to movement signals, functional magnetic resonance imaging to track the moment-by-moment interactions between different brain areas as people performed visual tasks, and neuropsychological studies of people with visual processing problems.

The researchers hoped to learn more about how the visual system scans the environment, detects objects, discerns movement, distinguishes between the independent movement of objects and the organism’s own movements, and plans and controls motion towards a goal.

One of their most interesting discoveries was that the primate brain does not just detect and track a moving object; it actually predicts where the object will go.

“When an object moves through a scene, you get a wave of activity as the brain anticipates its trajectory,” says Greenlee. “It’s like feedback signals flowing from the higher areas in the visual cortex back to neurons in the primary visual cortex to give them a sense of what’s coming.”

Greenlee compares what an individual visual neuron sees to looking at the world through a peephole. Researchers have known for a long time that high-level processing is needed to build a coherent picture out of a myriad of those tiny glimpses. What's new is the importance of strong anticipatory feedback for perceiving and processing motion.

“This proved to be quite critical for the Decisions in Motion project,” Greenlee says. “It solves what is called the ‘aperture problem’, the problem of the neurons in the primary visual cortex looking through those little peepholes.”
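Greenlee’s description of anticipatory feedback can be loosely illustrated with a constant-velocity extrapolation – a deliberate oversimplification of the brain’s predictive machinery, with the function name and frame-by-frame positions invented purely for the sketch:

```python
# Illustrative only: a constant-velocity predictor as a crude stand-in for
# the anticipatory feedback described above. Positions are 2-D (x, y) tuples
# observed once per video frame.

def predict_next(positions):
    """Extrapolate the next position from the last two observations,
    assuming constant velocity between frames."""
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    vx, vy = x1 - x0, y1 - y0          # velocity estimate (per frame)
    return (x1 + vx, y1 + vy)          # predicted next position

# An object moving steadily to the right and slightly upward:
track = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0)]
print(predict_next(track))             # -> (3.0, 1.5)
```

A real system would weight such predictions against noisy new measurements (as a Kalman filter does); the point here is only that prediction lets later processing stages “prime” earlier ones for where the object should appear next.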

Building a better robotic brain

Armed with a better understanding of how the human brain deals with movement, the project’s computer scientists and roboticists went to work. Using off-the-shelf hardware, they built a neural network with three levels mimicking the brain’s primary, mid-level, and higher-level visual subsystems.

They used what they had learned about the flow of information between brain regions to control the flow of information within the robotic “brain”.

“It’s basically a neural network with certain biological characteristics,” says Greenlee. “The connectivity is dictated by the numbers we have from our physiological studies.”
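As a rough illustration of such a three-level architecture with top-down feedback (the layer sizes, random weights, and tanh units below are placeholders, not the connectivity numbers from the project’s physiological studies), one might sketch:

```python
# A toy three-stage network echoing the primary / mid-level / higher-level
# organisation described above, with a feedback path from the top stage back
# to the bottom one. All sizes and weights are arbitrary placeholders.
import numpy as np

rng = np.random.default_rng(0)
sizes = [64, 32, 16]                   # V1-like, mid-level, higher-level
W_up = [rng.normal(0, 0.1, (sizes[i + 1], sizes[i])) for i in range(2)]
W_down = rng.normal(0, 0.1, (sizes[0], sizes[2]))   # top -> bottom feedback

def step(stimulus, higher_state):
    """One processing step: a feedforward sweep modulated by feedback."""
    v1 = np.tanh(stimulus + W_down @ higher_state)  # feedback biases early units
    mid = np.tanh(W_up[0] @ v1)
    high = np.tanh(W_up[1] @ mid)
    return v1, mid, high

high = np.zeros(sizes[2])
for _ in range(5):                     # let activity settle over a few frames
    stimulus = rng.normal(0, 1, sizes[0])
    _, _, high = step(stimulus, high)
print(high.shape)                      # -> (16,)
```

The feedback matrix `W_down` is the structural point: each pass, the top level’s state reshapes what the bottom level reports, which is the anticipatory mechanism Greenlee describes.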

The computerised brain controls, in real time, the behaviour of a wheeled robotic platform supporting a moveable head and eyes. It directs the head and eyes where to look, tracks its own movement, identifies objects, determines whether they are moving independently, and directs the platform to speed up, slow down and turn left or right.

Greenlee and his colleagues were intrigued when the robot found its way to its first target – a teddy bear – just as a person would, speeding past objects at a safe distance but slowing to pass nearby obstacles.

“That was very exciting,” Greenlee says. “We didn’t program it in – it popped out of the algorithm.”
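The distance-dependent speed behaviour can be caricatured with a simple rule – an invented illustration, not the consortium’s algorithm – in which forward speed scales with the clearance to the nearest obstacle:

```python
# Caricature of the emergent speed behaviour: full speed when the path is
# clear, proportionally slower as the nearest obstacle closes in. The
# distances (metres) and speed limits are invented for illustration.

def forward_speed(obstacle_distances, v_max=1.0, safe_distance=2.0):
    """Return a speed in [0, v_max] given distances to visible obstacles."""
    if not obstacle_distances:
        return v_max                   # nothing in view: full speed
    nearest = min(obstacle_distances)
    return v_max * min(nearest / safe_distance, 1.0)

print(forward_speed([4.0, 6.0]))       # clear path -> 1.0
print(forward_speed([0.5, 3.0]))       # obstacle nearby -> 0.25
```

In the robot, of course, this behaviour was not hand-coded but emerged from the network itself; the sketch only makes the observed input-output relationship concrete.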

In addition to improved guidance systems for robots, the consortium envisions a lightweight system that could be worn like eyeglasses by visually or cognitively impaired people to boost their mobility. One of the consortium partners, Cambridge Research Systems, is developing a commercial version of this, called VisGuide.

Decisions in Motion received funding from the ICT strand of the EU’s Sixth Framework Programme for research. The project’s work was featured in a New Scientist video in February this year.

Asteroid Is Actually A Protoplanet, Study Of First High-resolution Images Of Pallas Confirms


Britney E. Schmidt, a UCLA doctoral student in the department of Earth and space sciences, wasn't sure what she'd glean from images of the asteroid Pallas taken by the Hubble Space Telescope. But she hoped to settle at least one burning question: Was Pallas, the second-largest asteroid, actually in that gray area between an asteroid and a small planet?

The answer, she found, was yes. Pallas, like its sister asteroids Ceres and Vesta, was that rare thing: an intact protoplanet.

"It was incredibly exciting to have this new perspective on an object that is really interesting and hadn't been observed by Hubble at high resolution," Schmidt said of the first high-resolution images of Pallas, which is believed to have been intact since its formation, most likely within a few million years of the birth of our solar system.

"We were trying to understand not only the object, but how the solar system formed," Schmidt said. "We think of these large asteroids not only as the building blocks of planets but as a chance to look at planet formation frozen in time."

The research appears Oct. 9 in the journal Science.

"To have the chance to use Hubble at all, and to see those images come back and understand automatically this could change what we think about this object — that was incredibly exciting to me," Schmidt said.

Pallas, which is named for the Greek goddess Pallas Athena, lies in the main asteroid belt between the orbits of Mars and Jupiter. Schmidt likens it to the size of Arizona, her home state. The massive body is unique, she said, partly because "its orbit is so much different from other asteroids. It's highly inclined."

Hubble had tried to snap pictures of the round-shaped body before but came up short. So when the space telescope took images again in September 2007, Schmidt and her colleagues had several goals.

"We wanted to learn about Pallas itself — what its shape is like, what its surface is like, does it have large impact craters, does it have significant topography," she said.

With the Hubble images, Schmidt and her colleagues were able to take new measurements of Pallas' size and shape. They were able to see that its surface has areas of dark and light, indicating that the water-rich body might have undergone an internal change in the same way planets do.

Pallas wasn't just a big rock made of hydrated silicate and ice, they found.

"That's what makes it more like a planet — the color variation and the round shape are very important as far as understanding, is this a dynamic object or has it been exactly the same since it's been formed?" Schmidt said. "We think it's probably a dynamic object."

For the first time, Schmidt and her colleagues also saw a large impact site on Pallas. They were unable to determine whether it was a crater, but the depression did suggest something else important: it could be the source of Pallas' small family of asteroids orbiting in space.

"It's interesting, because there are very few large, intact asteroids left," Schmidt said. "There were probably many more. Most have been broken up completely. It's an interesting chance to almost look into the object, at the layer underneath. It's helping to unravel one of the big questions that we have about Pallas, why does it have this family?"

Schmidt's co-authors include Peter C. Thomas, a senior researcher at Cornell University; James Bauer, a researcher with the Jet Propulsion Laboratory; J.Y. Li, a postdoctoral researcher at the University of Maryland; Schmidt's Ph.D. adviser, UCLA professor of geophysics and space physics Christopher T. Russell; Andrew Rivkin, a researcher at Johns Hopkins University; Joel William Parker, a researcher at the Southwest Research Institute in Boulder, Colorado; Lucy McFadden, a faculty member at the University of Maryland; S. Alan Stern of the Southwest Research Institute; Max Mutchler, a researcher at the Space Telescope Science Institute; and Chris Radcliffe, a digital artist in Santa Monica.

"When people think of asteroids, they think of 'Star Wars' or of tiny little rocks floating through space," Schmidt said. "But some of these have been really physically dynamic. Around 5 million years after the formation of the solar system, Pallas was probably doing something kind of interesting."