ASCR Monthly Computing News Report - November 2008



The monthly survey of computing news of interest to ASCR is compiled by Jon Bashor (JBashor@lbl.gov) with news provided by ASCR Program Managers and Argonne, Fermi, Lawrence Berkeley, Lawrence Livermore, Los Alamos, Oak Ridge, Pacific Northwest and Sandia National labs. Contact information and links to additional information, where available, are included with each article.


RESEARCH NEWS:

DOE Teams Capture Prestigious ACM Gordon Bell Prizes at SC08 Conference
Computational science teams from Oak Ridge and Lawrence Berkeley national laboratories won the prestigious 2008 Association for Computing Machinery (ACM) Gordon Bell Prizes, announced during the Nov. 20 awards session at the SC08 conference. The prize has been awarded since 1987 to recognize outstanding achievement in high-performance computing.
 
A team led by Thomas Schulthess of Oak Ridge National Laboratory received the prize for attaining the fastest performance ever in a scientific supercomputing application, while a team of scientists from Berkeley Lab's Computational Research Division, led by Lin-Wang Wang, won a special prize for algorithm innovation.
 
Schulthess, leader of ORNL's Computational Materials Science Group who was recently named director of the Swiss National Supercomputing Center, and colleagues Thomas Maier, Michael Summers and Gonzalo Alvarez of ORNL achieved 1.352 quadrillion calculations a second (1.352 petaflops) on ORNL's Cray XT Jaguar supercomputer with a simulation of superconductors, materials that conduct electricity without resistance. By modifying the algorithms and software design of its DCA++ code to maximize speed without sacrificing accuracy, the team was able to boost performance tenfold with the help of John Levesque and Jeff Larkin of Cray Inc. Jaguar was recently upgraded to a peak performance of 1.64 petaflops, making it the world's first petaflop system dedicated to open research. The team's simulation made efficient use of 150,000 of Jaguar's 180,000-plus processing cores to explore electrical conductance. Read more at this link.
 
The LBNL team won for developing the Linearly Scaling 3D Fragment (LS3DF) method, which was used to predict the energy harnessing efficiency of nanostructures that can be used in solar cell design. The Berkeley Lab researchers used three of DOE's most advanced scientific computing facilities: NERSC at Berkeley Lab, the Argonne Leadership Computing Facility (ALCF) and the Oak Ridge Leadership Computing Facility (OLCF). The LS3DF team consisted of Berkeley Lab's Byounghak Lee, Hongzhang Shan, Zhengji Zhao, Juan Meza, Erich Strohmaier and David Bailey.
 
The team first ran the LS3DF application on 36,864 cores of the Cray XT4 (Franklin) at NERSC, achieving 135 Tflop/s. These initial runs at NERSC provided the key scientific insights from the application. The LS3DF application ultimately achieved a speed of 442 teraflop/s (442 trillion calculations per second) on 147,456 cores of a Cray XT5 system at the OLCF. The Berkeley Lab researchers were also able to run the code on the IBM Blue Gene/P system at Argonne, reaching 224 teraflop/s on 163,840 cores, or 40.5 percent of the system's peak performance capability. Additional information is available at the following link: http://www.lbl.gov/CS/Archive/news112408.html
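For readers who want to check the percent-of-peak figure, peak capability is simply core count times per-core rate; Blue Gene/P's 850 MHz PowerPC 450 cores can each retire four floating point operations per cycle, or 3.4 Gflop/s. The small C sketch below walks through that arithmetic; the per-core rate is our assumption, and the modest gap from the reported 40.5 percent comes down to rounding in the published numbers.

    /* Illustrative percent-of-peak arithmetic for the Blue Gene/P run.
       Assumes 3.4 Gflop/s per core (850 MHz x 4 flops per cycle). */
    #include <stdio.h>

    int main(void) {
        double cores = 163840.0;
        double gflops_per_core = 3.4;        /* assumed per-core peak */
        double peak_tflops = cores * gflops_per_core / 1000.0;
        printf("peak:  %.0f Tflop/s\n", peak_tflops);            /* ~557 Tflop/s */
        printf("ratio: %.1f%%\n", 100.0 * 224.0 / peak_tflops);  /* ~40 percent */
        return 0;
    }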
 
Argonne, ORNL Win Big in High Performance Computing Challenge at SC08
Argonne and Oak Ridge national laboratories shared top honors in the annual High Performance Computing (HPC) Challenge Awards at the SC08 Conference in Austin, Texas. The two labs shared the four top awards, each bringing home two gold medals.
 
Argonne was the clear winner in two of the four categories of the HPC Challenge best performance benchmark competition, with runs using 32 racks of Argonne's Blue Gene/P. Oak Ridge's Cray XT5 supercomputer "Jaguar" placed in three of the four categories, winning two "gold medals" and one "bronze" in this head-to-head competition. Results of the challenge, which measures excellence at handling computing workloads, were announced Nov. 18 in Austin at SC08, an international gathering of supercomputing professionals.
 
Argonne's score of 103 GUPS (Giga Updates per Second) in the Global RandomAccess category was almost three times faster than last year's winner. Global RandomAccess measures memory performance and stresses traditional system bottlenecks that are directly correlated with application performance. Argonne also won the Global FFT category, scoring 5,080 Gflops; this test measures the floating point rate of execution of a double precision complex one-dimensional Discrete Fourier Transform, a computation widely used to transform one function into another.
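The kernel behind the GUPS figure is tiny; what makes it punishing is that every update touches an effectively random memory address, defeating caches and prefetch hardware. Below is a minimal serial sketch in C, patterned on the benchmark's published reference kernel; the real code distributes the table across all processors, and the table size and update count here are illustrative.

    /* Minimal serial sketch of the HPC Challenge RandomAccess (GUPS) kernel. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define POLY 0x0000000000000007ULL   /* feedback polynomial from the benchmark */

    int main(void) {
        size_t table_size = (size_t)1 << 24;   /* must be a power of two */
        size_t n_updates  = 4 * table_size;    /* benchmark uses 4x the table size */
        uint64_t *table = malloc(table_size * sizeof *table);
        for (size_t i = 0; i < table_size; i++) table[i] = i;

        clock_t t0 = clock();
        uint64_t ran = 1;
        for (size_t i = 0; i < n_updates; i++) {
            /* shift-register pseudo-random stream of addresses and values */
            ran = (ran << 1) ^ ((int64_t)ran < 0 ? POLY : 0);
            table[ran & (table_size - 1)] ^= ran;  /* random read-modify-write */
        }
        double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
        printf("GUPS: %.4f\n", (double)n_updates / secs / 1e9);
        free(table);
        return 0;
    }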
 
"This award proves that energy efficiency and computational power are not mutually exclusive," said Pete Beckman, director of Argonne's Leadership Computing Facility. "We can still push performance boundaries and deliver stellar results while using a fraction of the power typically needed for supercomputers."
 
Jaguar won first place both for speed in solving a dense matrix of linear algebra equations (running a software code called High-Performance Linpack, or HPL) and for sustainable memory bandwidth, or how many gigabytes per second a node can fetch and store (running the STREAM code). It won third place for speed in executing the Global Fast Fourier Transform, a common algorithm used in many scientific applications.
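STREAM, the bandwidth test Jaguar also won, is equally compact at its core: the "triad" kernel streams three large arrays through memory, and the score counts bytes moved per second. Below is a minimal serial sketch in C; the real benchmark runs multi-threaded across a node, repeats the kernel several times, and reports the best pass.

    /* Minimal serial sketch of the STREAM triad kernel. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void) {
        size_t n = 10 * 1000 * 1000;   /* large enough that caches don't help */
        double *a = malloc(n * sizeof *a);
        double *b = malloc(n * sizeof *b);
        double *c = malloc(n * sizeof *c);
        for (size_t i = 0; i < n; i++) { b[i] = 1.0; c[i] = 2.0; }

        clock_t t0 = clock();
        double scalar = 3.0;
        for (size_t i = 0; i < n; i++)
            a[i] = b[i] + scalar * c[i];           /* load b, load c, store a */
        double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

        /* three arrays of 8-byte doubles cross the memory bus per pass */
        printf("Triad bandwidth: %.2f GB/s\n", 3.0 * 8.0 * (double)n / secs / 1e9);
        free(a); free(b); free(c);
        return 0;
    }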
 
"The Cray Jaguar at ORNL winning two of the HPC Challenge benchmarks shows the power and potential of the computer system for handling some of the most challenging computational science problems," said Jack Dongarra of University of Tennessee-Knoxville and Oak Ridge National Laboratory. "It was able to produce an impressive 902 teraflops [trillion floating point operations per second] on HPL and 330 TB/s [terabytes per second] on STREAMS. Both results leave the second-place IBM Blue Gene/L at Lawrence Livermore National Laboratory far behind and demonstrates the balance between computing and communication bandwidth."
 
Integrated Microbial Genomics Reaches Out to Include Human Microbial Communities
Although we live in a microbial world, with millions of organisms in one drop of water and even more in soil, only a tiny fraction of microbes live as independent species, and even fewer of these can be cultured in the laboratory. The vast majority of bacteria and other microorganisms exist only in the wild, in complex communities. The collective genome of such a microbial community, its total DNA, is called its metagenome. To make sense of these metagenomes, scientists rely on analytic tools like Integrated Microbial Genomes with Microbiome Samples (IMG/M), developed through a close collaboration of software engineers, computer scientists, and biologists from the Genome Biology and Microbial Ecology programs of DOE's Joint Genome Institute (JGI), as well as the Biological Data Management and Technology Center (BDMTC) in Berkeley Lab's Computational Research Division. IMG/M has played a central role in helping scientists understand metagenomes in a variety of natural environments since its initial release in 2006.
 
Now a new grant from the National Institutes of Health (NIH) will expand the system's capabilities to include metagenomic data from humans, giving scientists valuable insights into how microbial communities affect human health. "The success of metagenomics will not only help us better understand human health, but may also help us address a variety of environmental challenges," says Nikos Kyrpides, who heads JGI's Genome Biology Program. Read more at the following link:
 
ASCR-Related Research at LLNL Contributes to SC08 Tech Program
LLNL's Computation Directorate contributed to SC08 conference management, tutorials, technical papers and posters, panels and birds-of-a-feather sessions. Key contributions related to the laboratory's research partnership with ASCR include:
  • Technical Paper: Lessons Learned at 280K: Toward Debugging Millions of Cores (Greg Lee, Dong Ahn, Bronis de Supinski, and Martin Schulz)
  • Technical Paper: Scalable Load-Balance Measurement for SPMD Codes (Bronis de Supinski and Martin Schulz)
  • Technical Paper: High-Performance Multivariate Visual Data Exploration for Extremely Large Data (Hank Childs)
  • Panel: Meeting the Challenges that Face the Nation - National Labs & Supercomputing Centers (Dona Crawford, panelist)
  • Panel: Climate Change: Current Knowledge and Future Challenge (Dean Williams, panelist)
  • Research Poster: System-Wide Performance Equivalence Class Detection Using Clustering (Bronis de Supinski and Martin Schulz)
  • Tutorial: Interoperable Mesh and Geometry Tools for Advanced Petascale Simulation (Lori Diachin)
 
ORNL, GUMC Sign Formal Collaboration Agreement
ORNL and Georgetown University Medical Center (GUMC) have signed a five-year Comprehensive Research and Development Agreement to foster collaboration between the two institutions and give GUMC researchers access to both ORNL's leading supercomputing systems and the lab's expertise in protein and drug modeling. Areas of research will include computational biology, radiation biology, and systems genetics, among others. ORNL hosts the world's most powerful computing complex, with two systems exceeding the petascale. GUMC is a leading biomedical research facility and one of America's 41 comprehensive cancer centers.
 
Essentially, GUMC researchers will be able to simulate complex biological systems on the laboratory's first-class computing systems, revealing a more accurate picture of interactions between chemical compounds and diseases. GUMC and its partners have enormous drug libraries that can be tested against different cancer protein targets, said ORNL's Ed Uberbacher, adding that "Current software for drug docking often provides clues about how to build the right drug but usually falls short of directly providing an optimal drug that binds tightly to the target. With the computational power available, we can potentially get more accurate answers more quickly and save time in the drug-development process."
 

PEOPLE:

Beckman Named Director of Argonne's Leadership Computing Facility
Peter Beckman has been named director of the Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory. The ALCF is home to one of the world's fastest computers for open science, the Blue Gene/P, and is part of the U.S. Department of Energy's effort to provide leadership-class computing resources to the scientific community. Beckman also leads Argonne's exascale computing strategic initiative and previously served as the ALCF's chief architect and project director. He has worked in systems software for parallel computing, operating systems and Grid computing for 20 years.
 
After receiving a Ph.D. degree in computer science from Indiana University in 1993, Beckman helped create the Extreme Computing Laboratory at Indiana University. In 1997, he joined the Advanced Computing Laboratory (ACL) at Los Alamos National Laboratory, where he founded the ACL's Linux cluster team and organized the Extreme Linux series of workshops and activities that helped catalyze the high-performance Linux computing cluster community. Beckman has also worked in industry, founding a research laboratory in Santa Fe in 2000 sponsored by Turbolinux Inc., which developed the world's first dynamic provisioning system for large clusters and data centers. The following year, he became vice president of Turbolinux's worldwide engineering efforts, managing development offices in the US, Japan, China, Korea and Slovenia. Beckman joined Argonne in 2002. As Director of Engineering for the TeraGrid, he designed and deployed the world's most advanced Grid system, linking production HPC resources for the National Science Foundation.
Contact: Pete Beckman beckman@alcf.anl.gov
 
Maccabe Named Director of Computer Science and Mathematics Division at ORNL
Arthur Bernard ("Barney") Maccabe, a professor of computer science and chief information officer at the University of New Mexico, will direct the Computer Science and Mathematics (CSM) Division at ORNL, Associate Laboratory Director for Computing and Computational Sciences Thomas Zacharia has announced. Through advanced computing research, CSM supports national priorities in partnership with industry and academia and has programs in basic and applied research in the computational sciences, information technologies, and intelligent systems. Maccabe's appointment will be effective Jan. 5.
 
On the faculty of the University of New Mexico since 1982, Maccabe is an expert in "lightweight" system software for massively parallel computing systems. In contrast to full-featured operating systems, lightweight operating systems have minimal features, improving the ability of software to scale for use on systems employing thousands of processors. View the official announcement at the following link: http://www.csm.ornl.gov/docs/maccabe_release.pdf
 
Sandia Researcher Bochev Hosted by RWTH Aachen University
Sandia researcher Pavel Bochev was hosted by the Chair for Computational Analysis of Technical Systems at RWTH Aachen University, one of the recently established sites of the German Research Foundation's "Excellence Initiative." The technical meetings during this visit provided an opportunity to discuss research topics in optimization and control problems, advanced discretizations, and software for numerical PDEs that are the focus of Bochev's ASCR-funded research. Bochev presented two talks based on results from ASCR-sponsored research: a student lecture on "Principles of compatible discretizations" and a colloquium talk on "Stabilized finite element methods for the Stokes problem in the small time step limit."
Contact: Pavel Bochev, pbboche@sandia.gov
 
ORNL's Tony Mezzacappa Named Editor-in-Chief of Computational Science & Discovery
Non-profit scientific publisher Institute of Physics Publishing (IOP), UK, has announced the launch of a new journal titled Computational Science & Discovery, with Tony Mezzacappa of ORNL serving as Editor-in-Chief. The journal will publish original, peer-reviewed research on scientific advances and discovery through computational science in physics, chemistry, biology and applied science, and offers researchers an opportunity to publish all the important components of their enterprise together with their scientific results. The first issue is currently available online at the following link:
 
LANL's James Kamm Presents ASCR Research Results at Russian HPC Conference
In September, LANL researcher James Kamm presented work that he and LANL colleague Mikhail Shashkov have conducted under the sponsorship of DOE's ASCR Applied Mathematics Research (AMR) Program at the 10th International Seminar on Super-Computation and Computer Simulation, held at the Russian Federal Nuclear Center-All-Russia Research Institute of Experimental Physics (RFNC-VNIIEF) in Sarov, Russia. Kamm discussed the novel algorithm that he and Shashkov have developed to describe pressure relaxation in multi-material compressible flows. The presentation led to discussions with RFNC-VNIIEF computational physicists that may result in collaborative work on numerical methods.
Contact: James Kamm, kammj@lanl.gov
 
Erickson Serves as Committee Member for National Academy of Science Investigation
David Erickson, a Senior Scientist in ORNL's Computational Earth Sciences group, served as a committee member on the National Academy of Sciences' recent study to develop a better understanding of the potential scientific and technological impact of high-end capability computing (HECC) in fields of science and engineering of interest to the federal government. The fields chosen for the study were the atmospheric sciences, astrophysics, chemical separations, and evolutionary biology. The committee found continuing demands from the four fields for more, and more powerful, high-end computing. All four areas rely on HECC to carry out simulations of systems that are too complex to analyze through observation, experiment, or theory. Three of the four areas (the exception being chemical separations) are dealing with very large amounts of data and need HECC to handle them. The report from this study, "The Potential Impact of High-End Computing on Illustrative Fields of Science and Engineering," can be seen at the following link:
 

FACILITIES/INFRASTRUCTURE:

Office of Science Computers Grab Four of Top 10 Slots on Latest TOP500 List
When the 32nd edition of the closely watched TOP500 list of the world's supercomputers was announced in November, Office of Science supercomputers held four of the top 10 slots. Jaguar, the Cray XT5 petaflop/s system at ORNL, narrowly placed second to Los Alamos' Roadrunner, which had been slightly upgraded. Jaguar became only the second system to break the petaflop/s barrier, posting a top performance of 1.059 petaflop/s on the Linpack benchmark application. One petaflop/s represents one quadrillion floating point operations per second.
 
Here are the other Office of Science systems in the top 10:
  • At No. 5 is a newer version of the IBM Blue Gene/P system installed at Argonne National Laboratory, which achieved 450.3 Tflop/s.
  • The No. 7 system, called Franklin, is a Cray XT4 system installed at the NERSC Center at Lawrence Berkeley National Laboratory; it achieved 266.3 Tflop/s.
  • The No. 8 system is another Cray XT4, installed at DOE's Oak Ridge National Laboratory; it achieved a Linpack performance of 205 Tflop/s.
For additional information, follow this link: www.top500.org
 
ORNL's Jaguar Reaches Peak Speed of 1.64 Petaflop/s
A Cray XT high-performance computing system at Oak Ridge National Laboratory is the world's fastest supercomputer for science. The Cray XT, called Jaguar, has a peak performance of 1.64 petaflop/s (1.64 quadrillion floating point operations per second), incorporating a 1.382-petaflop/s XT5 supercomputer and a 266-teraflop/s XT4 system. Beginning with a 26-teraflop/s system in 2005, Oak Ridge embarked upon a three-year series of aggressive upgrades designed to make the machine the world's most powerful computing system. The Cray XT was upgraded to 119 teraflop/s in 2006 and 263 teraflop/s in 2007. In 2008, with approximately 182,000 AMD Opteron processing cores, the new 1.64-petaflop/s system is more than 60 times more powerful than the original 2005 machine.
Contact: Jayson Hines, hinesjb@ornl.gov
 
ESnet Completes Hardware Installations for Science Data Network
In November, ESnet completed hardware installations for the nation's first dynamic circuit network dedicated solely to scientific research, called the Science Data Network (SDN). SDN provides the means to dynamically provision guaranteed, high-capacity bandwidth between any two science facilities, enabling DOE researchers to access time-sensitive applications and exchange large datasets. This on-demand hybrid packet/circuit-switched network capability is not currently available commercially, so ESnet built its own network to meet the demanding requirements of the research and education community. The new network, consisting of multiple 10-gigabit optical circuits, each capable of transferring the equivalent of 500 hours of digital music per second, has extensive reach and enables close collaboration among DOE laboratories and research facilities across the United States, as well as with scientists using international research networks in Asia, Australia, Canada, Europe, Latin America, and South America. For additional information, please see the following link:
 
NERSC Completes Quad-Core Upgrade, Franklin Nears 360 Teraflop/s
A phased upgrade to quad-core processors for the Cray XT supercomputer known as Franklin at NERSC was completed this fall. Each of Franklin's nodes was upgraded from a dual-core processor to a quad-core processor, and the memory per node doubled from 4 GB to 8 GB. The entire system now contains approximately 38,000 compute cores dedicated to scientific applications. Each compute core has a peak performance of 9.2 gigaflop/s, and the memory now runs at 800 MHz. The upgrade will double the sustained performance of many scientific applications on Franklin, while increasing peak performance from 100 teraflop/s to almost 360 teraflop/s.
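Those figures are roughly self-consistent. If the quoted 9.2 gigaflop/s per core corresponds to a 2.3 GHz quad-core Opteron retiring four floating point operations per cycle (our assumption), the system peak works out as in the sketch below; the exact core count was a bit above 38,000, which closes the gap to the quoted figure.

    /* Rough consistency check of the Franklin upgrade numbers.
       Assumes 9.2 Gflop/s per core (2.3 GHz x 4 flops per cycle). */
    #include <stdio.h>

    int main(void) {
        double cores = 38000.0;        /* "approximately 38,000 compute cores" */
        double gflops_per_core = 9.2;  /* assumed per-core peak */
        printf("peak: %.0f Tflop/s\n", cores * gflops_per_core / 1000.0);  /* ~350 */
        return 0;
    }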
Contact: Linda Vu, LVu@lbl.gov
 
LLNL Hyperion Partnership Announced at SC08
During the keynote speech at the SC08 conference in Austin, Dell Inc. CEO Michael Dell announced Hyperion, LLNL's creative partnership that will provide a development, testing and scaling environment for new cluster technologies, with the aim of making them more affordable and much easier to use. When completed in March 2009, the system will be the largest testbed of its kind in the world. The initial Hyperion machine, which is being built now, will have 1,152 server nodes with a total of 9,216 Xeon cores. The nodes will have an aggregate of over 9 TB of main memory and deliver around 100 teraflop/s. They will be linked to each other using quad data rate InfiniBand interconnects and will have over 36 GB/sec of bandwidth to a set of RAID disk arrays for storage. This testbed will provide the Hyperion collaborators with an unmatched opportunity to develop and test hardware and software technologies at unprecedented scale, and it will also help lay the foundation for LLNL's next-generation supercomputer, Sequoia, a 20-petaflop/s system to be completed in 2011. For additional information, follow this link:
 
Argonne, University of Chicago Launch Life Science Gateway
To enable the life sciences community to fully use TeraGrid resources for computing and data management, researchers at Argonne National Laboratory and the University of Chicago/Argonne Computation Institute have developed an integrated cyber computational environment called the Open Life Science Gateway. The new gateway integrates a group of bioinformatics applications and data collections into a web portal. Biologists with no experience in Grid computing can use this scientific gateway to run their analysis programs and to compose computational workflow scripts, without facing a steep learning curve. Also included are social "gadgets" (XML files) to allow users to run bioinformatics analyses through commercial OpenSocial sites such as iGoogle Sandbox.
Contact: Gail Pieper, pieper@mcs.anl.gov
 

OUTREACH:

ITAPS Researchers Release Parallel Mesh Interface at SC08 Tutorial
The ITAPS SciDAC center presented a full-day tutorial at SC08 to announce the newly developed interface for accessing parallel mesh databases. The ITAPS center has focused on the creation of interchangeable and interoperable mesh and geometry components for use in SciDAC applications. A key aspect of these components is the definition of common interfaces shared by multiple software implementations of mesh databases and by the software tools that operate on those databases. Once an application is written to the ITAPS interfaces, its developers can easily experiment with the various software tools for advanced mesh functionality that ITAPS provides, for example, mesh quality improvement, adaptive refinement loops, front tracking and partitioning; a miniature sketch of this interface pattern appears below. These tools have been shown to have considerable impact on SciDAC and other DOE applications, including accelerator modeling and design, modeling tokamaks in fusion applications, groundwater simulations, and modeling next-generation reactors for nuclear energy applications. The tutorial presenters were Karen Devine (SNL), Lori Diachin (LLNL), Mark Shephard (RPI) and Tim Tautges (ANL), although the entire team was involved in creating the tutorial presentation materials and the hands-on exercises. More information on the ITAPS project can be found at the following link:
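To make the interface idea concrete, here is a deliberately tiny, hypothetical illustration in C; it is not the actual ITAPS iMesh API, and the names (mesh_ops, toy_mesh) are invented for this sketch. Application code sees only a small table of function pointers, so any conforming mesh database can be swapped in underneath without changing the application.

    /* Hypothetical miniature of a common mesh interface (not the real iMesh API). */
    #include <stdio.h>

    typedef struct {
        int  (*num_cells)(void *mesh);   /* query the mesh database */
        void (*refine)(void *mesh);      /* invoke an adaptive-refinement service */
    } mesh_ops;

    /* A toy implementation; a real one would wrap an actual mesh database
       behind the same entry points. */
    typedef struct { int cells; } toy_mesh;
    static int  toy_num_cells(void *m) { return ((toy_mesh *)m)->cells; }
    static void toy_refine(void *m)    { ((toy_mesh *)m)->cells *= 8; }  /* uniform split */

    int main(void) {
        mesh_ops ops = { toy_num_cells, toy_refine };
        toy_mesh mesh = { 100 };
        ops.refine(&mesh);               /* the caller never sees toy_mesh internals */
        printf("cells after refinement: %d\n", ops.num_cells(&mesh));
        return 0;
    }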
 
LANL Researchers Host Hydrodynamics Workshop
As part of an ongoing effort to improve computational hydrodynamics algorithms, LANL researchers Mikhail Shashkov and James Kamm, partially sponsored by DOE's ASCR Applied Mathematics Research (AMR) Program, organized a workshop on Arbitrary Lagrangian-Eulerian (ALE) methods last June. The goal of the workshop was to gain an understanding of the current outstanding issues in ALE hydrodynamics and to bring together a multi-laboratory working group committed to applying state-of-the-art numerical methods to these issues and developing practical implementations. The workshop brought together more than 60 scientists, who took part in over 30 scientific presentations and more than five hours of discussion sessions.
Contact: Mikhail Shashkov, shashkov@lanl.gov
 
Cray Workshop Promotes Supercomputing Skills
More than two dozen computational scientists gathered at ORNL to hone their Cray XT4 and XT5 supercomputer skills and share tips and experiences during the 2008 Cray XT Quad-core Workshop, held October 15-17. The workshop gave scientists an opportunity to meet with experts from Oak Ridge as well as from supercomputer maker Cray Inc. and chip maker Advanced Micro Devices Inc.
 
The workshop featured hands-on sessions to help users make the most of ORNL's two Cray XT supercomputers. The event also featured talks on a range of issues important to users, including the use of system tools and libraries, chip architecture, system configuration, and optimizing scientific applications for the Cray supercomputers. The workshop video and presentation slides will soon be posted on the NCCS website at the following link:
 
Oak Ridge LCF Experts Share Their Views in HPCwire
Director Jim Hack and Science Director Doug Kothe of ORNL’s Leadership Computing Facility recently shared their perspectives with the computational science community through interviews published in the online magazine HPCwire. Hack outlined the future of computational climate science and ORNL’s climate science initiative in particular, while Kothe reviewed the range of breakthrough research now possible on petascale computing systems such as the lab’s Jaguar supercomputer.
 
During the course of his interview, Hack noted that while climate science has so far been driven by the curiosity of researchers, in the future it will be dominated by the needs of resource managers who will need to know, for instance, if specific regions will be subject to increasing droughts or increasingly severe weather. Kothe discussed the exciting scientific results he expects to see as users take advantage of Jaguar, which is, at a peak performance of 1.64 quadrillion calculations a second (1.64 petaflops), the world's most powerful system for open scientific research. The whole question and answer session can be found at the following link:
 
LBNL Staff Lend Expertise to High School Career Day Program
Applied mathematician Ann Almgren of LBNL's Center for Computational Sciences and Engineering and cyber security expert Jim Mellander of the IT Division were among a group of about 40 professionals talking about their careers to students at Albany High School near Berkeley. During the annual program, each speaker gives a presentation in three separate sessions, reaching about 100 students. Ben Feinberg of the Advanced Light Source also participated, and Berkeley Lab participation was organized by Jon Bashor, whose son attends the school.
Contact: Jon Bashor, jbashor@lbl.gov