Friday, October 16, 2009

New Software Could Smooth Supercomputing Speed Bumps


Supercomputers have long been an indispensable, albeit expensive, tool for researchers who need to make sense of vast amounts of data. One way that researchers have begun to make high-speed computing more powerful and also more affordable is to build systems that split up workloads among fast, highly parallel graphics processing units (GPUs) and general-purpose central processing units (CPUs).

There is, however, a problem with building these co-processed computing hot rods: A common programming interface for the different GPU models has not been available. Even though the lion's share of GPUs are made by Advanced Micro Devices, Inc. (AMD) and NVIDIA Corp., the differences between the two companies' processors mean that programmers have had to write software to meet the requirements of the particular GPU used by their computers.

Now, this is changing as AMD, NVIDIA and their customers (primarily computer- and game system–makers) throw their support behind a standard way of writing software called the Open Computing Language (OpenCL), which works across both GPU brands. A longer-term goal behind OpenCL is to create a common programming interface that will even let software writers create applications that run on both GPUs and CPUs with few modifications, cutting the time and effort required to harness supercomputing power for scientific endeavors.
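
To get a sense of what such vendor-neutral code looks like, consider a minimal OpenCL kernel sketch (the kernel and its names are purely illustrative and not drawn from any of the software discussed here). The same source is compiled at run time for whatever OpenCL device the host program selects, whether an AMD GPU, an NVIDIA GPU or a multicore CPU:

/* Minimal, illustrative OpenCL C kernel: each work-item scales one
   element of an array. The same source can be built at run time for
   any device that supports OpenCL. */
__kernel void scale(__global float *data, const float factor)
{
    size_t i = get_global_id(0);   /* index of this work-item */
    data[i] *= factor;
}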

Researchers at Virginia Polytechnic Institute and State University (Virginia Tech) in Blacksburg, Va., are hoping that OpenCL can help them write software that can run on GPUs made by either AMD or NVIDIA. Using a computer equipped with both a CPU and an AMD GPU, the Virginia Tech researchers were able to compute and visualize biomolecular electrostatic surface potential 1,800 times faster (from 22.4 hours to less than a minute) than they could with a similar computer driven only by a CPU.

The National Institutes of Health (NIH) has committed more than $1.3 million in funding from 2006 through 2011 for a project led by Alexey Onufriev, an associate professor in Virginia Tech's departments of Computer Science and Physics, to represent water computationally, because water is key to modeling biological molecules. "When you model a molecule at the atomic level," Onufriev says, "you need to know the impact that water will have on that model."

This is the type of program that maps quite well onto GPUs, says Wu Feng, director of Virginia Tech's Synergy Laboratory and an associate professor in the school's departments of Computer Science and Electrical & Computer Engineering. "These applications tend to be compute-intensive and regular in their computation," he adds, "regular in the sense that you're calculating electrostatic potential between pairs of points."
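
The pairwise calculation Feng describes follows a very regular pattern of arithmetic. The sketch below, which is illustrative and not the Virginia Tech group's actual code, shows how such a computation might look in OpenCL C: each work-item sums the Coulomb contributions of every atomic charge at one surface point, and thousands of work-items do so in parallel.

/* Illustrative OpenCL C kernel (not the actual Virginia Tech code):
   each work-item computes the electrostatic potential at one surface
   point by summing charge/distance contributions from all atoms. */
__kernel void surface_potential(__global const float4 *atoms,   /* xyz = position, w = charge */
                                const int n_atoms,
                                __global const float4 *points,   /* surface points */
                                __global float *potential)
{
    size_t i = get_global_id(0);
    float4 p = points[i];
    float sum = 0.0f;
    for (int j = 0; j < n_atoms; ++j) {
        float dx = p.x - atoms[j].x;
        float dy = p.y - atoms[j].y;
        float dz = p.z - atoms[j].z;
        /* small offset keeps the sketch safe from division by zero */
        sum += atoms[j].w / sqrt(dx * dx + dy * dy + dz * dz + 1.0e-6f);
    }
    potential[i] = sum;
}

Every work-item executes the same instructions on different data, which is exactly the kind of regularity that lets a GPU's many cores stay busy.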

CPUs, however, are better suited than GPUs to computing tasks that require the computer to make a decision. For example, if a string of computing tasks were likened to a line of people waiting to enter a stadium, Feng says, the GPU would be very good at dividing up the people into multiple lines and taking their tickets as they enter, as long as everyone has the same type of ticket. If some people had special tickets that allowed them to go backstage or entitled them to some other privilege, it would greatly slow the GPU as the processor decided what to do with the nonconformists. "GPUs work well today when they are given a single instruction for a repetitive task," he adds.
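
In code, the "special ticket" problem shows up as a data-dependent branch. The hypothetical kernel below, written only to illustrate the analogy, hints at the effect: because work-items execute in lockstep groups, a branch taken by only a few of them forces the hardware to step through both paths, stalling the rest.

/* Hypothetical kernel illustrating branch divergence; not from any
   program discussed in the article. */
__kernel void take_tickets(__global const int *ticket_type,
                           __global float *result)
{
    size_t i = get_global_id(0);
    float value = (float)ticket_type[i];   /* uniform work: every work-item does this */

    /* Divergent work: the rare "backstage" tickets take a much longer
       path, and the whole lockstep group effectively waits for them. */
    if (ticket_type[i] == 1) {
        for (int k = 0; k < 1000; ++k)
            value = sin(value) + 1.0f;
    }

    result[i] = value;
}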

Feng and his team are adapting an electrostatic potential program for Onufriev's lab so that it will work specifically on computers running GPUs made by AMD. Feng notes that as OpenCL is embraced more widely, he will be able to write programs that can communicate with any type of GPU supporting OpenCL, regardless of manufacturer, and eventually write code that provides instructions for both CPUs and GPUs. (Earlier this week, AMD made available the latest version of its software development tools, which the company says allow programmers to use OpenCL to write applications that let GPUs operate in concert with CPUs.)
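
A rough sketch of what such vendor-neutral host code can look like with OpenCL's platform API appears below: the snippet simply looks for a GPU on each installed OpenCL platform and falls back to the CPU if none is found. (The code is illustrative, with minimal error handling, and is not taken from AMD's tools or the Virginia Tech software; it assumes a toolchain where something like "cc example.c -lOpenCL" links against an OpenCL implementation.)

/* Illustrative OpenCL host code: pick a GPU from any vendor's
   platform, or fall back to the CPU if the platform has no GPU. */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);
    if (num_platforms > 8)
        num_platforms = 8;   /* only the entries we actually stored */

    for (cl_uint p = 0; p < num_platforms; ++p) {
        cl_device_id device;
        cl_uint num_devices = 0;

        /* Prefer a GPU on this platform, whatever the vendor... */
        cl_int err = clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU,
                                    1, &device, &num_devices);
        /* ...but fall back to the CPU if no GPU is available. */
        if (err != CL_SUCCESS || num_devices == 0)
            err = clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_CPU,
                                 1, &device, &num_devices);
        if (err != CL_SUCCESS || num_devices == 0)
            continue;

        char name[256];
        clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, NULL);
        printf("Platform %u: kernels would run on %s\n", p, name);
    }
    return 0;
}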

With this type of computing power and versatility, Onufriev says, many of the limitations on the types of research he can tackle will be lifted. Another of his projects is studying how the nearly two meters of DNA in each cell is packed into the cell's nucleus. "The way DNA is packed determines the genetic message," he says. "No one knows exactly how this works. We're hoping to get stacks of GPU machines where we can run simulations requiring massive computations that help us better understand DNA packing." Such work would be aided greatly by systems that can make use of both GPUs and CPUs.
