ASCR Monthly Computing News Report - July 2011



The monthly survey of computing news of interest to ASCR is compiled by Jon Bashor (JBashor@lbl.gov) with news provided by ASCR Program Managers and Argonne, Fermi, Lawrence Berkeley, Lawrence Livermore, Los Alamos, Oak Ridge, Pacific Northwest and Sandia National labs. Contact information and links to additional information, where available, are included with each article.
 

RESEARCH NEWS:

Special Report Highlights Research at America’s Leadership Computing Facilities
A special report highlights the accomplishments of researchers running large, complex, and often unprecedented simulations on Department of Energy Office of Science supercomputers. The research community gains access to these powerful high-performance computing systems through the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, which is jointly managed by the Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory and the Oak Ridge Leadership Computing Facility (OLCF) at Oak Ridge National Laboratory. The 64-page report, INCITE in Review, is available online.
 
Since 2003, INCITE has promoted transformational advances in science and technology through large allocations of computer time, supporting resources, and data storage. From modeling hurricanes to simulating combustion instabilities in power plant turbines and vehicle engines, INCITE projects aim to accelerate breakthroughs in fields in which major advancements would not be probable or even possible without supercomputing.
 
With a modest start by today's standards, in 2004 three INCITE projects received a total of nearly 5 million processor hours from Lawrence Berkeley National Laboratory's National Energy Research Scientific Computing Center (NERSC). Since 2008 the program has focused on leadership computing facilities, from which researchers can obtain the largest single-award time allocations available on powerful computing systems, including the OLCF's Cray XT5 (Jaguar), whose 224,256 processing cores yield a peak performance of 2.33 thousand trillion calculations per second (2.33 petaflops), and the ALCF's IBM Blue Gene/P (Intrepid), whose 163,840 processing cores yield a peak performance of 557 trillion calculations per second (557 teraflops).
 
For 2011, 57 INCITE awardees received a total of 1.7 billion processor hours. The allocations averaged 27 million hours, with one project receiving more than 110 million hours. From INCITE's inception through the end of 2011, researchers from academia, government laboratories, and industry will have been allotted more than 4.5 billion processor hours to speed innovations and discoveries.
 
World-Record Algorithm Calculates Over Three Trillion Particles in ~11 Minutes
Scientists at Germany's Juelich Supercomputing Centre (JSC) have performed a world-record simulation of the N-body problem, leveraging software developed by Argonne National Laboratory. Ivo Kabadshow and Holger Dachsel of JSC developed a state-of-the-art algorithm and code applying the fast multipole method (FMM) to the N-body problem, but were limited in their ability to use the world's largest Blue Gene/P system (JUGENE) by the performance of the one-sided communication software the code depended on.
 
A team at Argonne National Laboratory led by Assistant Computational Scientist Jeff Hammond solved this problem by developing a new communication library for Blue Gene/P that was faster and more scalable than existing libraries. This software, known as A1 ("Argonne 1-sided"), was designed from scratch for leadership supercomputer architectures such as Blue Gene/P and was implemented in 2010. Over the next six months, A1 was integrated into the JSC FMM code and its memory use and scalability were optimized, since the library needed to scale to all 294,912 cores of JUGENE. At that extreme scale, any operation requiring time or memory proportional to the number of cores becomes a bottleneck.
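A1 itself is a low-level C library built for Blue Gene/P, and its interface is not shown here. As a rough illustration of the one-sided model it provides, in which one process deposits data directly into another's memory without the target posting a matching receive, here is a minimal sketch using standard MPI remote memory access through mpi4py (illustrative only; these are not A1's calls):

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Each rank exposes one double that remote ranks may update directly.
    local = np.zeros(1, dtype="d")
    win = MPI.Win.Create(local, comm=comm)

    win.Fence()
    if rank != 0:
        # One-sided accumulate: add a contribution to rank 0's memory
        # without rank 0 posting a receive -- the style of operation an
        # FMM tree traversal issues constantly.
        win.Accumulate(np.ones(1, dtype="d"), 0, op=MPI.SUM)
    win.Fence()

    if rank == 0:
        print("sum of remote contributions:", local[0])  # comm.size - 1
    win.Free()

Launched with, e.g., "mpiexec -n 8 python onesided.py", every nonzero rank contributes to rank 0's window without rank 0 issuing a single receive.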
 
The results of the collaboration between JSC and Argonne are impressive - the N-body problem for more than three trillion particles can be solved in only eleven minutes! The long-term impacts of the efforts at Argonne and JSC are more powerful simulation tools for the N-body problem, which is ubiquitous in fields such as astrophysics, biology and chemistry, and more powerful communication software for architectures like Blue Gene/P. With the arrival of the Mira Blue Gene/Q system at Argonne in 2012 - a machine that will have almost three times as many cores as JUGENE - the scaling optimizations in the JSC FMM code and A1 prepare the way for unprecedented N-body simulations in the near future. For more information, contact Jeff Hammond (jhammond@alcf.anl.gov), and visit the JSC website.
 
Researchers Use Jaguar to Shed Light on Three-Body Force
The nucleus of an atom, like most everything else, is more complicated than we first thought. Just how much more complicated is the subject of a Petascale Early Science project led by Oak Ridge National Laboratory's (ORNL's) David Dean. According to findings outlined by Dean and his colleagues in the May 20, 2011, edition of the journal Physical Review Letters, researchers who want to understand how and why a nucleus holds together, and when and how it disintegrates, have a very tough job ahead of them.
 
Specifically, they must take into account the complex nuclear interactions known as the three-body force. Nuclear theory to this point has assumed that the two-body force is sufficient to explain the workings of a nucleus. In other words, the half-life or decay path of an unstable nucleus was to be understood through the combined interactions of pairs of protons and neutrons within. Dean's team, however, determined that the two-body force is not enough; researchers must also tackle the far more difficult challenge of calculating combinations of three particles at a time (three protons, three neutrons, or two of one and one of the other). This approach yields results that are both different from and more accurate than those of the two-body force.
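A quick count of interaction terms shows why the three-body force is so much harder: among A nucleons there are A-choose-2 pairs but A-choose-3 triples. A minimal illustration (the cost of the full ab initio many-body calculation grows faster still):

    from math import comb

    # Interaction terms among A nucleons: pairs for the two-body force,
    # triples for the three-body force.  Carbon-14 (A = 14) is the case
    # the team computed on Jaguar.
    for A in (4, 14, 40):
        print(f"A={A:3d}: pairs={comb(A, 2):5d}  triples={comb(A, 3):7d}")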
 
"With Jaguar we are able to do ab initio calculations, using three-body forces, of the half-life for carbon-14," noted team member and ORNL computational physicist Hai Ah Nam. "It's an observable that is sensitive to the three-body force. This is the first time that we've demonstrated at this large scale how the three-body force contributes.".
 
Annual Competition at SciDAC Meeting Selects Top Scientific Visualizations
Each year, about 400 researchers supported by DOE’s SciDAC Program (Scientific Discovery through Advanced Computing) meet to talk about their results, discuss collaborations – and view some of the most compelling scientific visualizations. At this year’s July 10-14 meeting in Denver, attendees voted for their favorite visualizations, awarding People’s Choice awards to 10 visualizations illustrating topics ranging from blood flow to earthquakes to supernovae to galaxies. Each winner receives an OASCR Award, named for the sponsoring Office of Advanced Scientific Computing Research.
 
Two juried awards were also presented. In the Information Presentation Category, the award went to a visualization of “Shock Wave/Turbulent Boundary Layer Interactions” by Michael A. Matheson, Oak Ridge National Laboratory; Allan D. Grosvenor of Ramgen Power Systems, LLC; and Alexander A. Zheltovodov of the Khristianovich Institute of Theoretical and Applied Mechanics. The juried award in the Visual Aesthetics Category went to a movie of “Magnetic Field Outflows from Active Galactic Nuclei” by David Pugmire, Oak Ridge; and Paul Sutter, Paul Ricker, Hsiang-Yi Yang and Gary Foreman, all of the University of Illinois at Urbana-Champaign. More information on all the winning visualizations is available online.
 
Berkeley Lab Researchers Receive Prizes, Present Results at ICIAM 2011
Berkeley Lab researchers were prominent on the program of the 7th International Congress on Industrial and Applied Mathematics (ICIAM 2011), held July 18–22 in Vancouver, B.C., Canada. In all, 26 LBNL staff members gave talks, presented papers and posters, and organized technical sessions, and two of Berkeley Lab’s leading mathematicians received special prizes for their research contributions.
 

PEOPLE:

LLNL’s David L. Brown Named New Director of LBNL’s Computational Research Division
David L. Brown, currently the Deputy Associate Director for Science and Technology in the Computation Directorate at Lawrence Livermore National Laboratory (LLNL), has been named the new director of the Computational Research Division (CRD) at Lawrence Berkeley National Laboratory (LBNL). Brown will join Berkeley Lab on Aug. 30.
 
In his new position, Brown will provide scientific leadership for CRD research and development programs in mathematics, computer science and computational science, and serve as chief spokesperson for CRD in interactions with external agencies, including the Department of Energy. Brown’s research expertise and interests lie in the development and analysis of algorithms for the solution of partial differential equations (PDEs). In particular, his research has focused on adaptive composite overlapping grid techniques for solving PDEs in complex moving geometries and in the analysis of difference approximations for PDEs. At LLNL, he led the highly successful Overture project, which in 2001 was named one of the 100 "most important discoveries in the past 25 years" by the DOE Office of Science.
 
Berkeley Lab Team Wins Best Paper Award at International Symposium
Khaled Ibrahim, Steven Hofmeyr and Costin Iancu of Berkeley Lab’s Computational Research Division received the Best Paper Award at CCGRID'11, the IEEE/ACM International Symposium on Cluster, Cloud, and Grid Computing 2011. In their paper, “Characterizing the Performance of Parallel Applications on Multi-Socket Virtual Machines,” the authors described how virtualization allows easier resource management and consolidation and is a key enabling technology for cloud and green computing. “High performance computing workloads typically pose many challenges to virtualized environments because they stress the memory system and network resources beyond the limits of commercial workloads,” they wrote. To address these problems, the researchers experimentally characterized the main causes of performance degradation of virtualization technologies on non-uniform memory access (NUMA) systems. They showed how to significantly improve performance in these environments using a specialized runtime mechanism that enforces memory locality.
 
CSGF Fellow Solomonik Earns Distinguished Paper Award at Euro-Par 2011
The “Distinguished Paper” award has been given to Edgar Solomonik, a Computer Science Ph.D. student at UC Berkeley, and Prof. Jim Demmel, who holds a joint appointment in Berkeley Lab’s Scientific Computing Group, for their paper to be presented at the Euro-Par 2011 conference, Aug. 29–Sept. 1 in Bordeaux, France. Solomonik is also a fellow in DOE’s Computational Science Graduate Fellowship (CSGF) Program. The paper, entitled “Communication-optimal parallel 2.5D matrix multiplication and LU factorization algorithms,” is based on work Solomonik did at UC Berkeley. During a CSGF practicum at Argonne this summer, Solomonik is working with ALCF Catalyst Jeff Hammond to extend and apply these algorithmic ideas to accelerate tensor contractions performed in quantum chemistry electronic structure calculations.
 
According to Demmel, the most expensive operation any computer performs, measured in time or energy, is “communication,” i.e., moving data, either between levels of a memory hierarchy (e.g., cache, main memory and disk) or between processors over a network. This paper is part of ongoing research to develop so-called “communication-avoiding” algorithms that provably communicate as little data as possible. Recently proven lower bounds on communication for linear algebra algorithms are much lower than what current algorithms actually communicate, suggesting that new, much faster and more energy-efficient algorithms might be found that attain these lower bounds. In this paper, for the first time, new algorithms for the most basic dense linear algebra operations (matrix multiplication and solving linear equations) are derived that do attain these lower bounds, moving less data than the standard algorithms by large factors. Significant speedups were demonstrated on an IBM BG/P.
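The flavor of the result fits in a back-of-the-envelope model: the paper's 2.5D schemes keep c replicated copies of the matrices across the processor grid, cutting the per-processor bandwidth cost of matrix multiplication from roughly n^2/sqrt(p) words to n^2/sqrt(c*p), which matches the lower bound. The sketch below is only this cost model, with constants and latency terms dropped, not the algorithm itself:

    import math

    def words_moved_2d(n, p):
        # Per-processor data volume of a standard 2D algorithm
        # (e.g. Cannon or SUMMA): about n**2 / sqrt(p) words.
        return n**2 / math.sqrt(p)

    def words_moved_25d(n, p, c):
        # 2.5D algorithm with c replicated copies of the matrices:
        # about n**2 / sqrt(c * p) words, matching the lower bound.
        return n**2 / math.sqrt(c * p)

    n, p = 32768, 16384
    for c in (1, 4, 16):
        ratio = words_moved_25d(n, p, c) / words_moved_2d(n, p)
        print(f"c={c:2d}: 2.5D moves {ratio:.2f}x the data of 2D per processor")

Replication trades memory for communication: c = 16 copies cuts the data each processor moves by a factor of four.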
 
Another paper on this work has been accepted for publication and presentation at the SC11 conference to be held in Seattle in November.
Contact: Edgar Solomonik, solomonik@berkeley.edu
 

FACILITIES/INFRASTRUCTURE:

The Speed of Science: ESnet Lays Foundation for 100 Gbps
In July, the Energy Sciences Network (ESnet) announced a major step toward creating one of the world's fastest and most advanced scientific networks to accelerate U.S. competitiveness in science and technology. Known as the Advanced Networking Initiative (ANI), the effort represents a $62 million multi-year investment by the DOE Office of Science in next-generation networking technology.
 
Upgrade to ADIOS Further Maximizes Computation Time on Supercomputers
Researchers at ORNL are working to say goodbye to input/output (I/O) problems with their most recent upgrade of the Adaptive Input/Output System (ADIOS). ADIOS grew out of a 2008 collaboration between the OLCF and researchers from academia, industry, and national laboratories. Their goal was a system to get information in and out of a supercomputer efficiently.
 
“The measurement of success for us has always been ‘what percentage of time do you spend in I/O?’” said OLCF computational scientist Scott Klasky. ADIOS was inspired by Klasky’s work at the Princeton Plasma Physics Laboratory, where he noticed up to 30 percent of researchers’ computational time was spent reading and writing analysis files.
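That success metric is straightforward to instrument. A minimal sketch of the bookkeeping (the timestep and output bodies are placeholders, not ADIOS calls):

    import time

    t_compute = t_io = 0.0

    for step in range(100):
        t0 = time.perf_counter()
        # ... advance the simulation one timestep (placeholder) ...
        t_compute += time.perf_counter() - t0

        t0 = time.perf_counter()
        # ... write checkpoint/analysis files (placeholder); this is
        #     the term that middleware like ADIOS works to shrink ...
        t_io += time.perf_counter() - t0

    total = t_compute + t_io
    if total > 0:
        print(f"fraction of wall time spent in I/O: {100 * t_io / total:.1f}%")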
 
The open-source middleware is designed to help researchers maximize their allocations on leadership-class computing resources from wherever they may be. In essence, it creates more time for research by minimizing the time needed to read and write data to files, even if researchers are sending those files from thousands of miles away.
 
Ray Grout, a researcher at the National Renewable Energy Laboratory, was one of the first to test the latest update, using the S3D combustion code to study turbulent reacting flows. Grout noted a huge increase in reading performance. More information on ADIOS, including the downloadable source code, is available online.
 
NERSC’s Data Tracking System Increases Scientific Productivity
New supercomputers and networks are contributing to record levels of scientific productivity. In fact, every new system installed at NERSC over the last 10 years has generated about 50 percent more data than its predecessor. To effectively meet the increasing scientific demand for storage systems and services, the center’s staff must first understand how data moves within the facility. Until recently, the process of obtaining these insights was extremely tedious because the statistics came from multiple sources, including network router statistics, client and server transfer logs, and storage and accounting reports—all saved as very large, independently formatted text files.
 
Now a dynamic database created by the NERSC Storage Systems Group continually collects statistics from all of these sources and compiles them into a single, searchable repository. The system also automatically generates daily email reports and graphs that illustrate how data moves in and out of the facility’s HPSS archival storage system, which is the largest repository of scientific data at the center.
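The underlying idea, many differently formatted log sources normalized into one queryable store, can be sketched in a few lines (the schema and field names here are hypothetical, not NERSC's actual design):

    import sqlite3

    db = sqlite3.connect("transfers.db")
    db.execute("""CREATE TABLE IF NOT EXISTS transfers (
                      when_utc  TEXT,     -- timestamp of the transfer
                      source    TEXT,     -- which log or collector reported it
                      system    TEXT,     -- e.g. HPSS, a compute system
                      direction TEXT,     -- 'in' or 'out' of the facility
                      bytes     INTEGER)""")

    # Each collector (router counters, transfer logs, HPSS reports, ...)
    # inserts rows in this one uniform shape.
    db.execute("INSERT INTO transfers VALUES (?, ?, ?, ?, ?)",
               ("2011-07-15T12:00:00Z", "hpss_log", "HPSS", "in", 10**12))
    db.commit()

    # A daily report reduces to a single query over the unified repository.
    for system, direction, total in db.execute(
            """SELECT system, direction, SUM(bytes)
               FROM transfers GROUP BY system, direction"""):
        print(system, direction, total)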
 

OUTREACH & EDUCATION:

Fifth SciDAC Tutorials Day Draws 70 Participants in Denver
On July 15, the fifth SciDAC Tutorials Day was held on the campus of the University of Colorado at Denver. Held on the day following the main SciDAC meeting, Tutorials Day provides open and free tutorials on a wide range of subjects in scientific computing. Organized by Andrew Uselton and David Skinner of the National Energy Research Scientific Computing Center (NERSC), the tutorials focus on bringing the benefits of DOE's investments in SciDAC to new researchers in academia and industry.
 
The SciDAC tutorials leverage the expertise of SciDAC researchers gathered at the main meeting by asking them to stay an extra day and present HPC, domain science, and applied math tutorials to a mostly local audience of students and researchers from nearby universities and industries. More than 70 students attended seven tutorials with twelve presenters.
 
"Very useful, great work!" commented UC Denver Prof. Andrew Knyazev.
 
ORNL Staff Contribute to Award-Winning Educational Curriculum
ORNL staff members contributed knowledge and support to a new geology unit of the JASON Project, which recently earned a "CODiE Award" as the nation's best science or health curriculum. Operation: Tectonic Fury helps middle school students solve geological mysteries by researching and analyzing Earth's past, present, and future through multimedia activities. The JASON Project, a nonprofit organization sponsored by the National Geographic Society, physically and virtually connects students and teachers with researchers to provide enriching scientific experiences. Issued by the Software and Information Industry Association, the CODiE Awards are annual, peer-reviewed prizes honoring excellence in game development, software programming, and online education.
 
The judges for this highly competitive prize are experts from industry and academia who examined the online content for Operation: Tectonic Fury, including text, photos, technical art, games, videos, tests, and classroom handouts. Martin Keller, ORNL associate laboratory director for biological and environmental sciences, nominated Operation: Tectonic Fury for a CODiE. Virginia Dale, an ORNL corporate fellow in the Landscape Ecology and Regional Analysis Division, led one of the project's missions, in which students sampled soil under switchgrass and analyzed the samples under the guidance of Deanne Bruce and Charles Garten, both of ORNL's Environmental Science Division. Research scientists Michael Hilliard (Energy and Transportation Science Division) and Alexandre Sorokine (Geophysical Information Science and Technology group) displayed results about the sustainability of energy crops via the 30-by-8-foot EVEREST PowerWall at ORNL. Robert Whitten, user support specialist at the OLCF, showed students how supercomputers help expand knowledge. Due to ORNL's contributions, farming practices and the planting of energy crops were included as geologic influences in Operation: Tectonic Fury.
 
TechWomen from Africa, Middle East Visit Berkeley Lab
The TechWomen program is a Department of State initiative that brings women who are technical leaders in the Middle East and North Africa to the U.S. for a month of mentoring and exchange of ideas with Silicon Valley companies. Participants from Algeria, Egypt, Jordan, Lebanon, Morocco, and Palestine were in the San Francisco Bay Area for the month of June and visited Berkeley Lab on June 24. Taghrid Samak, a postdoctoral fellow in Berkeley Lab’s Computational Research Division and a native of Alexandria, Egypt, volunteered as a cultural mentor for three of the participants.
 
Samak was invited to join the TechWomen group in Washington, D.C., for the 4th of July weekend, where they attended meetings at the Department of State. Secretary of State Hillary Rodham Clinton spoke at the closing luncheon on July 6, announcing that next year TechWomen would be complemented by TechGirls, which will bring teenage girls to the U.S. for a month of educational activities.
 
Berkeley Lab Hosts International AstroComputing Summer School
From July 18-29, Daniel Kasen of UC Berkeley and Berkeley Lab’s Nuclear Science Division and Peter Nugent of LBNL’s Computational Research Division, NERSC, and Berkeley Lab’s Computational Cosmology Center hosted the 2011 University of California High-Performance Astro-Computing Center (UC-HIPACC) International AstroComputing Summer School on Computational Explosive Astrophysics.
 
This year’s summer school focused on computational explosive astrophysics, including the modeling of core collapse and thermonuclear supernovae, gamma-ray bursts, neutron star mergers, and other energetic transients. Lectures included instruction in the physics and numerical modeling of multi-dimensional hydrodynamics, general relativity, radiation transport, nuclear reaction networks, neutrino physics, and equations of state. Afternoon workshops guided students in running and visualizing simulations on supercomputers using codes such as FLASH, CASTRO, GR1D, and modules for nuclear burning and radiation transport. All students were given accounts and computing time at NERSC and had access to the codes and test problems in order to gain hands-on experience running simulations at a leading supercomputing facility.
 
ESnet Staff Contribute to ESCC/Internet2 Joint Techs Conference
The Summer 2011 Joint Techs Conference, presented by the ESnet Coordinating Committee (ESCC) and Internet2, was held July 10–14 at the University of Alaska in Fairbanks. ESnet staff served as organizers of the twice-yearly meeting and gave several presentations. Michael Sinatra of ESnet co-chaired the IPv6 focus area; Brian Tierney co-chaired Emerging Technologies; and Joe Metzger chaired Performance/Measurement. ESnet staff gave the following presentations:
  • Steve Cotter: “ESnet Update”
  • Eli Dart, Eric Pouyoul, and Brian Tierney: “Building a Data Transfer Node”
  • Inder Monga (with Samrat Ganguly, NEC Corporation): “Openflow with OSCARS: Bridging the Gap between Campus, Data Centers and the WAN”
  • Taghrid Samak: “Scalable Network Measurement Analysis With Hadoop”
  • Michael Sinatra (and four others): “Panel: IPv6 Campus Deployments”
  • Brian Tierney and Phil DeMar: “ESnet Distributed Help Desk”
 
ALCF’s Martin Leads Panel Discussion on Best Practices for Working with Industry
David Martin, Manager, User Services and Outreach for the Argonne Leadership Computing Facility (ALCF), led a panel discussion on “Working with Industry: Best Practices for Industry Involvement” at the National User Facility Organization (NUFO) Annual Meeting. The NUFO meeting, held at the Stanford Linear Accelerator Center (SLAC) on June 27-29, explored topics ranging from best practices to various forms of outreach to effective communication with universities and other organizations representing users, user facilities, and science in general.
 
Two panels discussed best practices and provided suggestions for improved industry involvement. Representatives from Xradia, Eli Lilly and Company, and UOP LLC comprised one industry panel; the other joint industry/facility panel included representatives from the ALCF and Advanced Photon Source at Argonne National Laboratory, and the Spallation Neutron Source and High Temperature Materials Laboratory at Oak Ridge National Laboratory.
Contact: David Martin, dem@alcf.anl.gov
 
Oak Ridge Staff Members Participate in Extreme Computing Workshop
Five Oak Ridge National Laboratory staffers presented at the Institute for Nuclear Theory’s (INT’s) workshop, “The Nuclear Physics/Applied Math/Computer Science Interface,” held June 27 through July 1 at the University of Washington. The workshop brought together computational scientists, applied mathematicians, and nuclear physicists for five days of presentations, discussions, and work groups.
 
Oak Ridge staff spoke in the areas of performance, solving algebraic systems, and architectures/programming languages and graphics processing units:
  • Richard Graham: “Preparing Applications for Ultrascale Computing: A Tools Perspective.”
  • Scott Klasky: “In Situ Data Processing for Extreme Scale Computing.”
  • Tony Mezzacappa: “Supernova Simulations.”
  • Jeffrey Vetter: “Large-Scale Heterogeneous Computing.”
  • Bronson Messer: “Producing Science at the Top of the Top500 – The Challenge of Extreme Scalability and Hybrid-Multicore Computing.”
“Those at the workshop are consistently among those making capability use of leadership computational resources,” said Messer, a computational astrophysicist. “The meetings allowed practitioners of fundamental science on big DOE machines to share what we all do the same and what common problems we have. If we can suss out those problems, it will benefit everyone who does large-scale computing.”
 
The INT receives Department of Energy Office of Science funding to run both short workshops and months-long programs.