ASCR Monthly Computing News Report - September 2011



The monthly survey of computing news of interest to ASCR is compiled by Jon Bashor (JBashor@lbl.gov) with news provided by ASCR Program Managers and Argonne, Fermi, Lawrence Berkeley, Lawrence Livermore, Los Alamos, Oak Ridge, Pacific Northwest and Sandia National labs. Contact information and links to additional information, where available, are included with each article.
 
In this issue:
  • Research News
  • People
  • Facilities/Infrastructure
  • Outreach & Education

RESEARCH NEWS:

NERSC Simulations Unlock Secrets of Cellulose Deconstruction
Pull up to the pump these days and chances are your gas will be laced with ethanol, a biofuel made from corn. Corn ethanol is relatively easy to make, but with growing populations and shrinking farmland, there will never be enough of the starchy food crop to both feed and fuel the world. That’s why researchers are working on “grassoline,” liquid biofuels made from hardy, high-yielding, non-food crops like switchgrass. But what makes these crops indigestible to humans also makes them challenging raw materials for biofuel production. The sugars needed to make biofuels are locked up tight in cellulose, and researchers have yet to figure out an economical, scalable way to break them loose.
 
Recent computer simulations carried out at the National Energy Research Scientific Computing Center (NERSC) could help scientists do just that. The simulations show how two different solvents help unzip cellulose’s structure, peeling off long strings of sugars. That information should help researchers engineer molecules that do the job better, cheaper, and on a larger scale, says Jhih-Wei Chu, a chemical and biomolecular engineering professor at the University of California, Berkeley, who ran the simulations at NERSC.
 
Results of the research were published in the July 28, 2011 and Aug. 30, 2011 issues of the Journal of the American Chemical Society.
 
Researchers Use Jaguar to Provide Close-Up Look at Next-Generation Biofuels
A team led by Oak Ridge National Laboratory’s (ORNL’s) Jeremy Smith has taken a substantial step in the quest for cheaper biofuels by revealing the surface structure of lignin clumps down to 1 angstrom (one ten-billionth of a meter, or smaller than the width of a carbon atom). The team’s conclusion, that the surface of these clumps is rough and folded, even magnified to the scale of individual molecules, was published June 15, 2011, in the journal Physical Review E.
 
Smith’s team employed two of ORNL’s strengths—simulation on Jaguar and neutron scattering—to resolve lignin’s structure at scales ranging from 1 to 1,000 angstroms. The results are important because lignin is a major impediment to the production of cellulosic ethanol, preventing enzymes from breaking down cellulose molecules into the sugars that will eventually be fermented.
 
This research and similar projects have the potential to make bioethanol production more efficient and less expensive in a variety of ways, noted ORNL’s Loukas Petridis, a member of the research team. For example, earlier experiments showed that some enzymes are more likely to bind to lignin than others. The understanding of lignin provided by this latest research opens the door to further investigation into why that is the case and how these differences can be exploited. The research promises to be very enlightening, Petridis said, especially because it delves into areas that could not be fully explored before.
 
NERSC Simulations Reveal How New Polymer Could Lead to Better Batteries
Lithium-ion batteries are everywhere, in smart phones, laptops, an array of other consumer electronics, and the newest electric cars. Good as they are, they could be much better, especially when it comes to lowering the cost and extending the range of electric cars. To do that, batteries need to store a lot more energy. A team of scientists at DOE’s Lawrence Berkeley National Laboratory (Berkeley Lab) has designed a new kind of anode — a critical energy-storing component — capable of absorbing eight times the lithium of current designs. The new type of anode has maintained its greatly increased energy capacity through more than a year of testing and many hundreds of charge-discharge cycles.
 
The secret is a tailored polymer that conducts electricity and binds closely to lithium-storing silicon particles, even as they expand to more than three times their volume during charging and then shrink again during discharge. The new anodes are made from low-cost materials, compatible with standard lithium-battery manufacturing technologies. Using supercomputers at the National Energy Research Scientific Computing Center (NERSC), the team ran ab initio calculations of promising polymers until arriving at this result. The research team reports its findings in Advanced Materials on Sept. 23, 2011.
 
Simulation of Shock-Induced Cavitation Damage Conducted at ALCF
Maintaining the soundness of nuclear reactors is a major concern for scientists, engineers, and the general public. Among many factors, “cavitation erosion” of cooling system components is a significant mechanism for long-term degradation in nuclear power plants. Cavitation occurs when a liquid experiences a rapid change in pressure that creates low-pressure cavities within the liquid. These cavities, or cavitation bubbles, generate stress when they collapse against a solid surface, gradually deteriorating the material’s surface.
 
However, cavitation bubbles also provide benefits. Nanobubbles are used to prevent stress corrosion cracking (SCC)—the leading cause of shortened nuclear reactor lifetimes. When nanobubbles form, they create low-pressure regions; when they collapse near a solid surface, they create high-pressure areas that relieve the tensile stresses that cause SCC in the material.
 
To get a molecular-level understanding of nanobubble collapse near a solid surface, Priya Vashishta and his colleagues, Rajiv Kalia and Aiichiro Nakano, at the University of Southern California (USC) are using Intrepid, the IBM Blue Gene/P system at the Argonne Leadership Computing Facility (ALCF), to simulate and unravel the complex mechanochemistry problem. The 1-billion-atom simulation is feasible because it runs efficiently on 163,840 cores, the full Intrepid system. The goal of this nanobubble collapse simulation is to understand molecular processes to improve both the safety and longevity of nuclear reactors. The efficiency with which these simulations run on Intrepid is the result of successful work conducted by the USC group using an ALCF discretionary allocation in 2010.
Image caption: Billion-atom reactive molecular dynamics simulation of nanobubble collapse in water near a ceramic surface under shock compression. A 2 km/sec shock wave compresses the nanobubble and creates high compressive stress and novel chemical reactions (production of hydronium ions) not found under normal conditions. The simulations reveal that high pressure in the shock wave deforms the ceramic surface and also accelerates water molecules from the bubble periphery toward the center of the bubble. These high-velocity water molecules bunch up to form a nanojet. The nanojet impact creates damage on the ceramic surface. The simulation results reveal atomistic mechanisms of mechanically induced chemistry, which is the key to understanding safety-threatening damage in nuclear reactors.
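To make the ingredients of such a shock-compression simulation concrete (a particle system containing a void, a short-range interatomic force, a time integrator, and a rigid piston that launches a shock toward a solid wall), here is a minimal sketch in Python. It is a toy two-dimensional Lennard-Jones model in reduced units, not the USC team’s billion-atom reactive code; every parameter, the piston speed, and the wall placement below are illustrative assumptions.

# Toy 2-D molecular dynamics sketch of piston-driven shock compression of a
# fluid containing a circular void ("nanobubble") next to a rigid wall that
# stands in for the ceramic surface. Illustrative reduced Lennard-Jones units;
# this is NOT the USC/ALCF production code.
import numpy as np

nx, ny   = 32, 16        # particle lattice dimensions (assumed)
a        = 1.12          # lattice spacing, near the LJ minimum
dt       = 0.002         # time step
steps    = 4000
u_piston = 2.0           # piston speed, a stand-in for the 2 km/s drive
r_cut    = 2.5           # LJ cutoff radius
bubble_r = 3.0           # radius of the carved-out void

# Initial positions: square lattice with a circular void removed near the wall.
xs, ys = np.meshgrid(np.arange(nx) * a, np.arange(ny) * a)
pos = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
center = np.array([0.75 * nx * a, 0.5 * ny * a])
pos = pos[np.linalg.norm(pos - center, axis=1) > bubble_r]
vel = np.zeros_like(pos)
box_y = ny * a           # periodic in y
x_wall = nx * a + 2.0    # rigid, stationary wall (the "ceramic surface")

def forces(p):
    """Truncated Lennard-Jones pair forces, O(N^2) -- fine for a toy system."""
    d = p[:, None, :] - p[None, :, :]
    d[:, :, 1] -= box_y * np.round(d[:, :, 1] / box_y)   # minimum image in y
    r2 = (d ** 2).sum(-1)
    np.fill_diagonal(r2, np.inf)
    mask = r2 < r_cut ** 2
    inv6 = np.where(mask, 1.0 / r2 ** 3, 0.0)
    fmag = np.where(mask, 24.0 * (2.0 * inv6 ** 2 - inv6) / r2, 0.0)
    return (fmag[:, :, None] * d).sum(axis=1)

piston_x = -1.0
for step in range(steps):
    # Velocity-Verlet integration (one force evaluation before each half-kick).
    vel += 0.5 * dt * forces(pos)
    pos[:, 0] += dt * vel[:, 0]
    pos[:, 1] = (pos[:, 1] + dt * vel[:, 1]) % box_y
    vel += 0.5 * dt * forces(pos)

    # Rigid piston ("momentum mirror"): reflect any particle it overtakes.
    piston_x += u_piston * dt
    hit = pos[:, 0] < piston_x
    pos[hit, 0] = 2.0 * piston_x - pos[hit, 0]
    vel[hit, 0] = 2.0 * u_piston - vel[hit, 0]

    # Stationary reflecting wall on the far side.
    hitw = pos[:, 0] > x_wall
    pos[hitw, 0] = 2.0 * x_wall - pos[hitw, 0]
    vel[hitw, 0] = -vel[hitw, 0]

    if step % 400 == 0:
        ke = 0.5 * (vel ** 2).sum() / len(pos)
        print(f"step {step:5d}  piston at {piston_x:6.2f}  mean KE {ke:6.3f}")

A production code at this scale would replace the all-pairs force loop with domain decomposition across many thousands of processor cores and the simple pair potential with a reactive force field capable of capturing bond breaking and formation.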
Contact: Priya Vashishta, priyav@usc.edu

PEOPLE:

LBNL’s Kathy Yelick Appointed to National Academies Computer Science Board
Berkeley Lab Associate Laboratory Director for Computing Sciences Kathy Yelick has been appointed to the Computer Science and Telecommunications Board (CSTB) of the National Academies, which include the National Academy of Engineering, the National Academy of Sciences, and the Institute of Medicine. CSTB is composed of nationally recognized experts from across the information technology fields and complementary fields germane to the Board's interests in IT and society. Board members are appointed by the National Academies following a rigorous vetting process, and they serve staggered terms of three to five years.
 
Yelick was previously a member of the CSTB’s Committee on Sustaining Growth in Computing Performance, which published the report The Future of Computing Performance: Game Over or Next Level? earlier this year.
 
CSTB was established in 1986 to provide independent advice to the federal government on technical and public policy issues relating to computing and communications. CSTB conducts studies of critical national issues that recommend actions or changes in actions by government, industry, academic researchers, and the larger nonprofit sector.
 
Argonne Distinguished Fellow Lusk Gives Keynote Talk at International Conference
Ewing “Rusty” Lusk, an Argonne distinguished fellow, gave the Sept. 7 opening keynote address at the 2011 International Computer Science and Engineering Conference (ICSEC), held in Bangkok, Thailand. Lusk’s presentation, titled “Programming Extreme-Scale Computers,” focused on the possibilities for new scientific breakthroughs enabled by next-generation computers, the challenges these computers will present to applications developers, and the programming approaches being readied to address those challenges. ICSEC brings together experts in computer science, computer engineering, information technology, and information security to discuss theoretical concepts, practical ideas, and state-of-the-art results.
For more information, visit the conference website.
 
LBNL’s Michael Wehner to Give Invited Talk at Climate Change Beijing
Michael Wehner, a staff scientist in Berkeley Lab’s Computational Research Division, will give an invited talk on “Projections of Extreme Weather in a Changing Climate: Balancing Confidence and Uncertainty” at Climate Change Beijing, an international climate change conference hosted by the Chinese Academy of Sciences, the National Science Foundation of China, and CSIRO, Australia.
 
As part of this conference, the world’s leading climate scientists, industry leaders, government representatives, students and members of the general public will meet in Beijing, China from October 18–20 to discuss the most important environmental issues of our time. The Chinese Academy of Sciences has established the conference to promote information sharing and action, with eminent speakers catalyzing scientific advances and collaboration between national and international participants.
 
Jack Wells Discusses New Role at ORNL
On July 1, Jack C. Wells became the director of science for the National Center for Computational Sciences (NCCS) at Oak Ridge National Laboratory (ORNL). The NCCS is a DOE Office of Science user facility for capability computing. Its Oak Ridge Leadership Computing Facility (OLCF) houses Jaguar, America’s fastest supercomputer, used by researchers to solve pressing science and energy challenges via modeling and simulation.
 
In this interview with HPCwire, Wells describes his vision for executing a scientific strategy for the NCCS that ensures cost-effective, state-of-the-art computing to facilitate DOE’s scientific missions. To begin this decade’s transition to exaflop computing, capable of carrying out a million trillion floating point operations per second, plans are in the works for a staged upgrade of Jaguar, a high-performance computing system employing traditional CPU microprocessors, to transform it into Titan, a hybrid system employing both CPUs and GPUs.
 

FACILITIES/INFRASTRUCTURE:

ORNL Awards Contract to Cray for ‘Titan’ Supercomputer
The Department of Energy’s Oak Ridge National Laboratory has awarded a contract to Cray Inc. to increase the Jaguar supercomputer’s science impact and energy efficiency. The upgrade, which will provide advanced capabilities in modeling and simulation, will transform the DOE Office of Science-supported Cray XT5 system, currently capable of 2.3 million billion calculations per second (2.3 petaflops), into a Cray XK6 system with a peak speed between 10 and 20 petaflops.
 
The new system will employ the latest AMD Opteron central processing units (CPUs) as well as NVIDIA Tesla graphics processing units (GPUs)—energy-efficient processors that accelerate specific types of calculations in scientific application codes. The last phase of the upgrade is expected to be completed in late 2012. The system, which will be known as Titan, will be ready for users in early 2013.
 
“Titan will allow for significantly greater realism in models and simulations, and the resulting scientific breakthroughs and technological innovations will provide the return on this national investment,” said ORNL Director Thom Mason. “Discoveries that take weeks even on a system as powerful as Jaguar might take days on Titan.”
Read more about Titan.
 
OLCF Vis System to Receive First Upgrade
The OLCF’s premier visualization system, Lens, is due to receive an upgrade by the end of 2011. The 32-node Linux cluster dedicated to visualization and data analysis will soon receive 45 additional nodes, with 16 cores per additional node, giving it a total of 77 nodes, each of which will feature 2.3 GHz AMD processors. The new nodes will each feature 128 gigabytes of memory, giving the entire system approximately 7.8 terabytes of total memory. This is the system’s first upgrade since it was installed in 2008.
 
The primary purpose of Lens is to enable data analysis and visualization of simulation data generated on Jaguar, the OLCF’s flagship supercomputing system, currently ranked as the third most powerful in the world. Members of allocated Jaguar projects will automatically be given accounts on Lens. The upgrade will substantially expand Lens’s compute and memory footprints, keeping pace with Jaguar’s future upgrade schedule.
 
 
ESnet Tests Interoperability of OSCARS at International Plugfest
At the 11th Annual Global LambdaGrid Workshop, held Sept. 13–14 in Rio de Janeiro, Brazil, research and education (R&E) network operators, network vendors, and researchers who support the paradigm of lambda networking met to share developments in their own R&E networks. ESnet’s Inder Monga presented new developments in the Advanced Networking Initiative. On Tuesday, September 13, ESnet participated in a Network Services Interface (NSI) protocol “plugfest” with OSCARS, its award-winning On-Demand Secure Circuits and Advance Reservation System software, testing it against other bandwidth reservation software to determine its level of interoperability and find any issues with specifications.
 
The meeting was hosted by GLIF, or Global Lambda Integrated Facility, an international virtual organization that promotes the paradigm of lambda networking, which uses dedicated high-capacity circuits based on optical wavelengths.
 

OUTREACH & EDUCATION:

NERSC Hosts Annual DOE Workshop on HPC Best Practices
The DOE Workshop on HPC Best Practices: File Systems and Archives was held Sept. 26–27 in San Francisco. NERSC Storage Systems Group Lead Jason Hick chaired this year’s workshop. More than 60 participants came from across the U.S. as well as Europe and Japan.
 
The workshop addressed current best practices for the procurement, operation, and usability of file systems and archives, and considered whether system challenges can be met by evolving current practices. A report will present the findings to DOE and other stakeholders.
Contact: Karen Devine, kddevin@sandia.gov
 
OLCF Staff Lend Expertise to “HPC in Chemistry” Workshop
The OLCF’s Rebecca Hartman-Baker and Adam Simpson added their expertise recently to a weeklong workshop designed to bring more chemistry faculty and students into the world of high-performance computing (HPC). The training, entitled “HPC in Chemistry II,” was held August 8–12 at the University of Tennessee-Knoxville (UTK) campus.
 
The workshop offered college chemistry students and professors high-performance computing courses led by ORNL and UTK staffers. Hartman-Baker explained that the number-crunching ability of supercomputers is especially suited for chemistry research, adding that OLCF programmers at the workshop were able to meet and establish ties with future computational chemists. Simpson, who lectured on introductory computing with graphics processing units (GPUs), said the workshop offered a great outreach opportunity to help users transition to the next level of high-performance computing.
 
ORNL research associate Benjamin Mintz was a 2009 participant in the first “HPC in Chemistry” event and co-organizer of the current workshop. Mintz described the hands-on activities as targeted for beginner to intermediate chemistry software developers, ranging from simple “hello world” MPI examples, to vector addition on general-purpose GPUs, to the writing of a parallel Monte Carlo program.
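For a flavor of these exercises, the short sketch below combines the first and last items: an MPI “hello world” plus a parallel Monte Carlo estimate of pi, written here with Python’s mpi4py package for brevity. The workshop’s own examples may well have been in C or Fortran; the sample count and seeding scheme are illustrative choices, not the workshop materials themselves.

# Minimal MPI exercise sketch (illustrative, not the workshop's own code):
# every rank says hello, then the ranks cooperate on a Monte Carlo estimate
# of pi and combine their counts with a reduction.
from mpi4py import MPI
import random

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Part 1: the classic "hello world" -- each rank announces itself.
print(f"Hello from rank {rank} of {size}")

# Part 2: each rank throws darts at the unit square and counts how many land
# inside the quarter circle; pi is roughly 4 * (hits / throws).
throws_per_rank = 1_000_000
random.seed(rank)                       # a different stream on every rank
hits = sum(1 for _ in range(throws_per_rank)
           if random.random() ** 2 + random.random() ** 2 <= 1.0)

total_hits = comm.reduce(hits, op=MPI.SUM, root=0)
if rank == 0:
    total_throws = throws_per_rank * size
    print(f"pi is approximately {4.0 * total_hits / total_throws:.5f}")

Run with, for example, mpiexec -n 4 python mc_pi.py (the file name is arbitrary); each rank prints its greeting, and rank 0 prints the combined estimate.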
For more information, please see https://sites.google.com/site/hpcinchemistry/.
 
Berkeley Lab’s Brown and Bell Co-Organize DOE Applied Math Workshop
Berkeley Lab’s Computational Research Division Director David Brown and Center for Computational Sciences and Engineering (CCSE) head John Bell, as well as Mihai Anitescu (Argonne National Lab), Michael Ferris (Univ. of Wisconsin), and Robert Moser (Univ. of Texas, Austin), organized the recent DOE Applied Math Workshop on “Mathematics for the Analysis, Simulation, and Optimization of Complex Systems,” held Sept. 13–14, 2011.
 
The goal of this workshop was to identify new research areas in applied mathematics that will complement and enhance the existing DOE ASCR Applied Mathematics Program efforts that address the understanding of complex systems. The report that will result from this workshop will be a valuable resource for defining the potential impact of research in applied mathematics on problems important to the mission of DOE.
Details of the workshop can be found at this link.
 
Titan Summit Shares OLCF and User Expectations
Users, vendors, and OLCF staff got a chance recently to share plans and information regarding the OLCF’s next major system, Titan. The Titan Summit workshop, held August 15–17, 2011, covered the evolution to the next level of high-performance computational resources, enabling more groundbreaking research in climate, energy creation and storage, biology, chemistry, astrophysics, and materials.
 
The OLCF’s current leadership system, Jaguar, will evolve into Titan, with significant differences. In particular, the upgraded machine will be a CPU/GPU (central processing unit/graphics processing unit) hybrid system, expected to perform up to 20 thousand trillion calculations per second (20 petaflops) and reach up to nine times the performance of Jaguar. By using the summit to bring together users and vendors, OLCF staff were able to discuss expectations and plans for the new system.
 
On the summit’s first day, ORNL staff presented case studies and methods for exposing parallelism in code. Vendors spoke on the second day, focusing on compilers, debuggers, and optimizers—software to help users with codes. On the final day, users talked about their applications and research topics. The 63 attendees included current users, industry representatives, and personnel from National Science Foundation and Department of Energy facilities. A January 2012 workshop on Titan is also being prepared.
 
Getting Started at the ALCF
The Getting Started Workshop, held October 4–5 at the Argonne Leadership Computing Facility (ALCF), provided 23 ALCF users, potential users, and postdocs with key information on services and resources, as well as the techniques and knowledge needed to use the systems at the ALCF. Staff presented the following topics and assisted with hands-on exercises where applicable:
  • Blue Gene/P architecture
  • ALCF infrastructure
  • Software environment
  • Debugging
  • Visualization systems and services
  • RepastHPC, a high-performance computing implementation of the Repast (Java) toolkit for Agent-Based Modeling
  • Globus Online
Contact: David Martin, dem@alcf.anl.gov, or Chel Heinzel, chel@alcf.anl.gov
 
Brookhaven Co-Sponsors Workshop on HPC’s Competitive Advantage to Industry
Leaders in academia, industry, and government will meet at Rensselaer Polytechnic Institute Oct. 26–28 to discuss strategies for leveraging the awesome power of supercomputers to drive growth, innovation, and competitive advantage for American companies. The discussions are part of a three-day national workshop titled “Providing Competitive Advantage to Industry through High-Performance Computing: Accomplishments and a Path Forward.” The conference is sponsored by the New York State High Performance Computing Consortium (HPC2), a partnership among the science and education organization NYSERNet, Rensselaer, the University at Buffalo, Stony Brook University, and Brookhaven National Laboratory.

 

 

 

 

 

 
