ASCR Monthly Computing News Report - December 2008
The monthly survey of computing news of interest to ASCR is compiled by Jon Bashor (JBashor@lbl.gov) with news provided by ASCR Program Managers and Argonne, Fermi, Lawrence Berkeley, Lawrence Livermore, Los Alamos, Oak Ridge, Pacific Northwest and Sandia National labs. Contact information and links to additional information, where available, are included with each article.
In this issue...
Office of Science Announces 2009 INCITE Allocations
The U.S. Department of Energy's (DOE) Office of Science announced Dec. 18 that 66 projects addressing some of the greatest scientific challenges have been awarded access to some of the world's most powerful supercomputers at DOE national laboratories. The projects, competitively selected for their technical readiness and scientific merit, will advance research in key areas such as astrophysics, climate change, new materials, energy production and biology. The allocations of supercomputing and data storage resources will be made under DOE's Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, which supports computationally intensive, large-scale research projects. After a selection process that included a peer review of each proposal for scientific merit and computational readiness, nearly 900 million processor-hours are being awarded to 25 new projects and 41 renewal projects.
Under INCITE, researchers are being awarded time on supercomputers at Argonne, Lawrence Berkeley, Oak Ridge and Pacific Northwest national laboratories. Twenty-eight research projects have been awarded 400 million hours of computing time at Argonne's Leadership Computing Facility. Of the INCITE projects that will use the energy-efficient Blue Gene/P at Argonne, 10 are new projects and 18 are projects renewed from 2008. Seven projects were awarded a total of 17,460,000 processor-hours on the Cray XT supercomputer at the NERSC Center at Berkeley Lab. ORNL will make nearly 470 million processor-hours available on Jaguar, its Cray XT supercomputer, for 38 separate projects in areas such as climate studies, energy assurance, materials, and other fields of fundamental science. Two projects will compute for a total of 2,100,000 hours on the Hewlett Packard system at PNNL. (Note: Some projects received computing time at more than one facility.)
Humanities and High Performance Computers Connect at NERSC
High performance computing and the humanities are finally connecting - with a little matchmaking help from the Department of Energy (DOE) and the National Endowment for the Humanities (NEH). The organizations have teamed up to create the Humanities High Performance Computing Program, a one-of-a-kind initiative that gives humanities researchers access to some of the world's most powerful supercomputers. As part of this special collaboration, NERSC will dedicate a total of one million compute hours on its supercomputers and provide extensive technical training to bring humanities experts up to speed on HPC. Three projects have been selected through a highly competitive peer review process led by the NEH's Office of Digital Humanities to participate in the program's inaugural run, which will begin in January 2009.
- The Perseus Digital Library Project, led by Gregory Crane of Tufts University in Medford, Mass., will use NERSC systems to measure how the meanings of words in Latin and Greek have changed over their lifetimes, and compare classic Greek and Latin texts with literary works written in the past 2,000 years.
- While the Perseus project analyzes literature, the Visualizing Patterns in Databases of Cultural Images and Video project, led by Lev Manovich, Director of the Software Studies Initiative at the University of California, San Diego, will use NERSC resources to quantitatively measure changes in visual and media art trends.
- NERSC computers will also be reconstructing ancient artifacts and architecture with the High Performance Computing for Processing and Analysis of Digitized 3-D Models of Cultural Heritage project, led by David Koller, Assistant Director of the University of Virginia's Institute for Advanced Technology in the Humanities (IATH) in Charlottesville, Va.
LANL Researchers Improve Behavior of Asymptotic-Preserving Operator-Splitting Methods
Kinetic transport equations are used to describe the movement of particles through a fixed material medium. In cases where there is a high rate of collisions with the medium, the kinetic description relaxes to a scalar function governed by a diffusion equation. A numerical discretization that is consistent independent of the collision rate is called "asymptotic-preserving". Unlike many hyperbolic solvers, methods of this type can accurately capture the diffusion limit without resolving mean-free-path dynamics.
A popular method for designing asymptotic-preserving schemes is to employ intelligent operator-splitting techniques. This approach has been used for a variety of kinetic transport models with diffusive relaxation. However, recent work published in "Oscillatory Behavior of Asymptotic-Preserving Splitting Methods for a Linear Model of Diffusive Relaxation," Kinetic and Related Models, 1 (2008), pp. 573-590, shows that oscillations can develop in transition regimes (where collisions are important, but not frequent enough to make the system diffusive). The analysis determines that in these regimes the splitting method suffers from too little numerical dissipation. An alternative approach is given that effectively adds dissipation back into the system in a way that suppresses oscillations, while still maintaining accuracy. This work was done as part of the ASCR Applied Mathematics Research Project, "Mimetic Finite Difference Methods for Partial Differential Equations."
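As a rough sketch of the setting described above (the notation here is illustrative, not taken from the cited paper), a standard linear model of diffusive relaxation scales the kinetic equation so the collision rate grows as the scaled mean free path shrinks:

```latex
% Illustrative notation: f(x,v,t) is the particle density, v in [-1,1],
% \varepsilon is the scaled mean free path, \sigma the collision rate,
% and \langle f \rangle = \tfrac{1}{2}\int_{-1}^{1} f \, dv the angular average.
\[
  \partial_t f \;+\; \frac{1}{\varepsilon}\, v\, \partial_x f
  \;=\; \frac{\sigma}{\varepsilon^{2}} \bigl( \langle f \rangle - f \bigr)
\]
% As \varepsilon \to 0, the density \rho = \langle f \rangle relaxes to a
% diffusion equation:
\[
  \partial_t \rho \;=\; \partial_x \!\left( \frac{1}{3\sigma}\, \partial_x \rho \right)
\]
```

An asymptotic-preserving discretization is one that, without resolving the O(ε) mean-free-path scales, automatically becomes a consistent discretization of this diffusion limit as ε → 0; the transition regime discussed above is where ε is small but not negligible.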
GE Global Research at the ALCF Enables Next-Generation Energy and Propulsion
Aerodynamic noise is a barrier to the viability of next-generation "green" low-emission aircraft propulsion (jet and fan noise) and energy systems (wind turbine blade noise). Scientists at GE Global Research (GEGR) are actively developing design technologies to understand and reduce such noise sources. For aircraft engine noise, jet noise is the dominant noise source during take-off, and the complex turbulent flows that drive its generation are not fully understood. Hence, accurate and detailed multi-scale numerical simulations for realistic jet noise prediction can prove to be a game-changer in future development efforts.
Using an allocation of computer processor hours from the Argonne Leadership Computing Facility (ALCF), GEGR researchers have ported, tuned, and demonstrated both the scalability and accuracy of a large eddy simulation (LES) solver on Argonne's IBM Blue Gene/P system for massively parallel jet noise simulation. As part of this effort, the memory and I/O bottlenecks have been successfully mitigated to enable parallel computation using the large number of processors needed for realistic, nonacademic jet noise simulations. Excellent scalability of the solver, up to 4,096 cores, has been demonstrated for both large- and small-sized problems. The turbulent jet flow has reached a statistically stationary state, and the sampled flow (jet centerline velocity and potential core decay) and acoustic (far-field noise spectra) fields agree very well with the experimental data. With this successful proof-of-concept study, future work will focus on further developing and demonstrating the viability and potential impact of massively parallel, multi-scale LES for noise prediction.
MIT Technology Review Cites LBNL's "Green Flash" as One of 2008's Hot Topics
In its annual look back at the year in computing, MIT Technology Review IT Editor Kate Greene hailed several "Microprocessor Makeovers," including Berkeley Lab's proposed climate supercomputer built around small but powerful processors such as those used in cell phones and other consumer electronics. "At the Lawrence Berkeley National Lab, researchers found a way to get more performance out of a supercomputer than ever before (while also slashing power consumption): by borrowing design tricks from the cell-phone industry," wrote Greene. (See "A Smarter Supercomputer.") The proposed architecture, dubbed Green Flash, is described at: http://www.lbl.gov/CS/html/greenflash.html
Sandia ASCR Researcher Bert Debusschere Receives PECASE Award
Bert Debusschere was awarded the 2007 Presidential Early Career Award for Scientists and Engineers (PECASE) and the DOE Office of Science Early Career Award in Science and Engineering for his ASCR-funded work on the development of analysis methods for stochastic dynamical systems. His work focuses on chemical and biochemical stochastic reaction networks, which are prevalent in molecular-level and biological systems, in application areas ranging from energy and bioremediation to the human immune system. Bert Debusschere received the award on December 19, 2008 at a White House and DOE ceremony.
Argonne's Leadership Computing Facility Helps Researcher Win Sackler Prize
David Baker, University of Washington (UW) professor of biochemistry and an investigator at the Howard Hughes Medical Institute, was awarded the 2008 Raymond & Beverly Sackler International Prize in Biophysics on December 15. Baker was honored for his significant contributions to computer-based studies of the manner and the speed in which chains of amino acids fold into protein molecules.
He conducted his work on the IBM Blue Gene/P at the Argonne Leadership Computing Facility, using 12 million computer processor hours awarded by the Department of Energy's Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. Baker developed computer programs to predict protein structures from amino acid sequences in DNA. His program, Rosetta, is among the most accurate. He has combined data from nuclear magnetic resonance imaging and X-ray diffraction imaging with his computer modeling to more quickly delineate protein molecule structures. He also researches the ways that molecular configurations of proteins determine their functions in biochemical reactions. The Rosetta method has been tested on several proteins of known structure up to 189 amino acids in length. In many cases, the accuracy of the prediction was within a remarkable one angstrom of the experimentally solved high-resolution crystal structure. "DOE's INCITE program has been critical to the progress made in protein structure modeling using Rosetta," said Baker.
The international Sackler prize was established by arts and sciences philanthropists Dr. Raymond R. Sackler and his wife, Beverly Sackler. Raymond Sackler is a psychiatrist and co-founder of a multinational pharmaceutical company. The field for this year's prize was the physics of structure formation and self-assembly of proteins and nucleic acids.
LLNL's Lori Diachin Named Associate Editor for SIAM Journal
Lori Diachin of Lawrence Livermore National Laboratory and PI for the SciDAC Center for Interoperable Technologies for Advanced Petascale Simulations (ITAPS) has been selected for a three-year term as an associate editor for the Journal on Scientific Computing published by SIAM, the Society for Industrial and Applied Mathematics. This is one of the premier journals for research articles on numerical methods and techniques for scientific computation.
LBNL's Juan Meza Reappointed to SIAM Board of Trustees
Juan Meza, head of Berkeley Lab's High Performance Computing Research Department, has been re-appointed to a second three-year term to the SIAM Board of Trustees. According to the society's bylaws, the Board is responsible for the management of SIAM, taking into account the professional and scientific policies and objectives of the organization. SIAM exists to ensure the strongest interactions between mathematics and other scientific and technological communities through membership activities, publication of journals and books, and conferences.
LBNL's Phil Colella to Give Talks, Meet with Nuclear Modeling Groups in France
Phil Colella, leader of the SciDAC Applied Partial Differential Equations Center and head of LBNL's Applied Numerical Algorithms Group, will travel to France in January to give two talks and meet with researchers at the CEA, the Commissariat à l'Énergie Atomique, or French Atomic Energy Commission. Colella starts his trip at the Jan. 26-27 seminar on Numerical Fluid Mechanics at the Institut Henri Poincaré, where he will give a talk on "Embedded boundary methods and software for solving partial differential equations in complex geometries". After that, Colella will meet with groups at CEA working on modeling and simulation for nuclear reactors. Finally, he will give a talk in the Mathematics Department at the University of Paris on Feb. 2.
Moe Khaleel: Computing Architectures Critical to Accelerating Discovery
Pacific Northwest National Laboratory's Moe Khaleel, director of the Computational Sciences and Mathematics Division, spoke about computing for accelerating the energy technology development cycle as the invited speaker at the Science & Technology Discovery Series in Seattle. The series was hosted by the Technology Alliance, a statewide organization composed of leaders from Washington state's high-tech businesses and research institutions. The talk, entitled "Beyond the Desktop: The role of computational architectures in accelerating discovery," focused on the role high performance computers play in accelerating technology development.
Khaleel covered the use of multithreaded architectures as a way to accelerate discovery within disparate data sets. At PNNL's Center for Adaptive Supercomputing Software, he and his colleagues are building programs for multithreaded machines. Using a Cray XMT, they are creating models for cyber security network analysis, which looks at vast amounts of Internet traffic and identifies anomalous connections. These links arise between hackers and their prey and are extremely difficult to detect. The models may also have applications to other disparate data, including energy grid analysis and bioinformatics.
Howard Walter Appointed NERSC Division Deputy
Howard Walter, who joined NERSC in 1999 and has served as head of the Systems Department since November 2005, has been named NERSC Division Deputy by Kathy Yelick. In his new role, Walter will help coordinate across multiple groups and departments within the division, as well as assume responsibility for some LBNL-wide projects and for ongoing operational issues such as safety in the workplace. Walter, who came to LBNL from NASA Ames, was initially responsible for shepherding the planning and development of the new machine room at the Oakland Scientific Facility.
Bill Kramer Leaving NERSC for NCSA
After 12 years at NERSC, Bill Kramer will be leaving his post as general manager to undertake a new position as Deputy Project Director for the Blue Waters Project at the National Center for Supercomputing Applications (NCSA) in Urbana, Ill. During his tenure at Berkeley Lab, Kramer saw NERSC through many major transitions, including a move from Lawrence Livermore National Laboratory to Berkeley; a migration of the entire user community from vector supercomputers to highly parallel computing; and the design and implementation of both the NERSC system architecture and the NERSC service architecture. This past year, Kramer played an integral role in managing the hardware upgrade of NERSC's Cray XT4 system, called Franklin, to quad-core processors, and setting up the procurement process for the NERSC-6 system, the next major supercomputer acquisition to support the Department of Energy Office of Science's computational challenges.
Argonne's Keahey Describes One-Click Virtual Cluster Deployment
Kate Keahey of Argonne National Laboratory presented a talk at eScience 2008 on Dec. 11, 2008, on how to make a virtual cluster with a single click. Keahey discussed her work on "contextualization" - a new technique that coordinates the exchange and integration of context information, such as network addresses, at deployment time. The technique has been used to create production virtual clusters for scientific applications on EC2 as well as science clouds. Notable examples are the STAR high-energy physics production cluster, which used the workspace contextualization technology to securely coordinate the configuration of its 100-node virtual cluster, and the ALICE high-energy physics project at CERN. Keahey's research on adapting workspaces to the needs of the scientific community is supported in part by the DOE SciDAC Center for Enabling Distributed Petascale Science.
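To illustrate the idea behind contextualization, the toy sketch below (hypothetical names throughout - this is not the workspace/Nimbus API) shows the kind of exchange the technique automates: each freshly deployed node publishes its context (role and network address) to a broker, and once all nodes have reported in, every node can pull the shared context to render its own configuration.

```python
# Toy sketch of deployment-time "contextualization" (hypothetical names,
# not the actual workspace service API). Each booting node publishes its
# role and address; the full context is released once the cluster is whole.

class ContextBroker:
    """In-memory stand-in for a deployment-time context service."""

    def __init__(self, cluster_size):
        self.cluster_size = cluster_size
        self.members = {}  # role -> list of node addresses

    def publish(self, role, address):
        # A node announces itself at boot time.
        self.members.setdefault(role, []).append(address)

    def ready(self):
        # The cluster is "contextualized" once every node has reported in.
        return sum(len(v) for v in self.members.values()) == self.cluster_size

    def context(self):
        # Shared context handed back to every node once all have joined.
        if not self.ready():
            raise RuntimeError("cluster not fully contextualized yet")
        return dict(self.members)


broker = ContextBroker(cluster_size=3)
broker.publish("head", "10.0.0.1")
broker.publish("worker", "10.0.0.2")
broker.publish("worker", "10.0.0.3")

ctx = broker.context()
# With the shared context, a node could now render local config, e.g. an
# MPI hostfile listing the head node first and the workers after it.
hostfile = "\n".join(ctx["head"] + ctx["worker"])
print(hostfile)
```

The point of the real technique is that none of the addresses are known until deployment time, so this exchange cannot be baked into the virtual machine images; the broker fills that gap when the cluster launches.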
Berkeley Lab's Juan Meza Gives Two Talks on Diversity in Computing
Juan Meza, head of the High Performance Computing Research Department at LBNL, was presented with the 2008 Blackwell-Tapia Prize, given every second year in honor of the legacy of David H. Blackwell and Richard A. Tapia. The prize recognizes a mathematical scientist who has contributed and continues to contribute significantly to research in his or her field of expertise, and who has served as a role model for mathematical scientists and students from under-represented minority groups or contributed in other ways to addressing the problem of the under-representation of minorities in mathematics. The award was presented during the fifth biennial Blackwell-Tapia Conference, held to honor David Blackwell and Richard Tapia, two seminal figures who inspired a generation of African-American, Native American and Latino/Latina students to pursue careers in mathematics. Before leaving North Carolina, Meza also spoke to the SIAM Student Chapter at North Carolina State. He then headed to the SC08 conference in Austin, where he was an invited speaker at a Birds-of-a-Feather session. The session, organized by Richard Tapia and Roscoe Giles and called "Building a Diverse HPC Community for SC28," also featured presentations by Steve Wallach and Jose Munoz.
PNNL's Stephen Elbert on Energy-Efficient Data Centers
The Channel Register, a major online computer trade news publication, quoted PNNL's Stephen Elbert about the ability of data centers to provide power and cooling for more efficient operation of supercomputers. Power and cooling are significant limitations on supercomputer scalability, and many data centers currently operate inefficiently. In a presentation he gave at the SC08 conference, Elbert explained that there are plenty of data centers burning 60 to 70 megawatts and a few have already broken through the 100-megawatt barrier. "Beyond that, you have to be your own power company," said Elbert, a senior research scientist on the Energy Smart Data Center Project at PNNL.
Rediscover ASCR Discovery
After an extended break, the ASCR Discovery webzine is posting new stories that explain and showcase computational science and engineering at DOE laboratories and related work at public and private universities. With support from the Office of Advanced Scientific Computing Research in the Department of Energy Office of Science, ASCR Discovery explores the many ways computing contributes to our understanding of basic scientific questions. Articles go beyond the headlines to cover computational science's legacy, advances in high-performance computing, breakthroughs in applied mathematics and computer science - and the people who are pushing science forward through innovative computing.
Recently posted articles at ASCR Discovery include:
- "Advancing the science of advancing interfaces," describing the research of James Sethian (LBNL) and his team on the application of interface tracking to scientific and engineering applications.
- "Sounding out OS noise," an article discussing a collaboration between University of New Mexico researcher Patrick Bridges and Sandia National Laboratories researcher Ron Brightwell on operating system (OS) interference.
The ASCR Discovery webzine, along with information on subscriptions, RSS feeds, archival searches, and more, is available online.
ORNL Announces Series of Workshops
Oak Ridge National Laboratory will host a two-day workshop on Lustre scalability on February 10-11, 2009. This workshop, co-sponsored by Sun Microsystems and Cray, Inc., will focus on improving Lustre scalability to achieve multiple terabytes/sec of bandwidth and manageability of hundreds of petabytes of storage by 2012. Active members of the world's largest Lustre deployments will collaborate with principal Lustre developers from Sun and Cray to identify key scalability issues and develop a realistic roadmap to meet these challenges. This workshop will also serve as preparation for the pre-Lustre Users Group Scalability Summit in April, which will focus on scalability challenges through 2015.
Other upcoming workshops include:
- Introduction to Cray XT, Feb. 24, 2009. This Cray Technical Workshop, jointly presented by the OLCF, NICS (both located at ORNL), and NERSC, will take place in Charleston, SC, and will feature a half-day tutorial on XT systems.
- HUF 2009, March 11-13. This year's HPSS User Forum will be hosted jointly by the OLCF and NICS. HPSS users will share successes, lessons learned, and solutions to unique site issues, while developers and support representatives will discuss directions and new features.
- Introduction to Petaflop HPC, April 1, 2009. This Tapia Celebration of Diversity in Computing event will feature a half-day tutorial on petaflop HPC, conducted jointly by the OLCF and NICS.