ASCR Monthly Computing News Report - May 2011

The monthly survey of computing news of interest to ASCR is compiled by Jon Bashor (JBashor@lbl.gov) with news provided by ASCR Program Managers and Argonne, Fermi, Lawrence Berkeley, Lawrence Livermore, Los Alamos, Oak Ridge, Pacific Northwest and Sandia National labs. Contact information and links to additional information, where available, are included with each article.


NERSC, OLCF Simulations of Proton Dripping Test a Basic Force of Nature
Like gravity, the strong interaction is a fundamental force of nature. It is the essential "glue" that holds atomic nuclei - composed of protons and neutrons - together to form atoms, the building blocks of nearly all the visible matter in the universe. Despite its prevalence in nature, researchers are still searching for the precise laws that govern the strong force. However, the recent discovery in laboratory experiments of an extremely exotic, short-lived nucleus called fluorine-14 may indicate that scientists are gaining a better grasp of these rules.
Fluorine-14 comprises nine protons and five neutrons. It exists for a tiny fraction of a second before a proton "drips" off, leaving an oxygen-13 nucleus behind. A team of researchers led by James Vary, a professor of physics at Iowa State University, first predicted the properties of fluorine-14 with the help of scientists in Lawrence Berkeley National Laboratory's (Berkeley Lab's) Computational Research Division, as well as supercomputers at the National Energy Research Scientific Computing Center (NERSC) and the Oak Ridge Leadership Computing Facility (OLCF). These fundamental predictions served as motivations for experiments conducted by Vladilen Goldberg's team at Texas A&M's Cyclotron Institute, which recently achieved the first sightings of fluorine-14.
ALCF's Blue Gene/P Enables New Insights into Concrete's Flow Properties
Flow simulations of thousands of irregularly shaped particles on the Argonne Leadership Computing Facility's (ALCF) Blue Gene/P supercomputer are enabling new insights into how to measure and control flow properties of large-particle dense suspensions like concrete that can't be accurately measured in industrial settings. This field of study - known as rheology - could provide a better understanding of concrete's flow properties to help ensure its optimum performance and reduce costs.
Measuring the true rheological properties of fresh concrete is quite a challenge. Laboratory devices that measure the way a suspension flows in response to applied forces can only approximate the rheological properties of suspensions. Without detailed knowledge of the flow, scientists can't obtain fundamental parameters such as viscosity and yield stress (resistance to initiating flow), which affect how easily concrete can be poured. Computing the flow allows researchers to correctly interpret empirical measurements.
Through their simulations on the Blue Gene/P at the ALCF, researchers led by William George from the National Institute of Standards and Technology (NIST) have gained fundamental new insights into the yield stress of dense suspensions. The simulations indicate that particle contacts are an important factor in controlling the onset of flow in dense suspensions. Such interactions can be strongly influenced by the shape of the aggregates and can lead to jamming that affects the placement of concrete in forms. These results have been validated against physical experiments with excellent agreement.
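A common way to extract the flow parameters mentioned above, yield stress and viscosity, from simulated or measured flow data is to fit a simple constitutive law such as the Bingham plastic model, stress = yield_stress + plastic_viscosity x shear_rate. The sketch below is only an illustration of that standard model; the NIST simulations resolve far more detailed particle-level physics, and the fitting routine and data here are hypothetical.

```python
# Minimal sketch: recovering yield stress and plastic viscosity from
# flow data by a least-squares fit of the Bingham plastic model
#   stress = yield_stress + plastic_viscosity * shear_rate
# (a textbook rheological model, not the NIST team's actual method).

def fit_bingham(shear_rates, stresses):
    """Line fit: slope = plastic viscosity, intercept = yield stress."""
    n = len(shear_rates)
    mean_x = sum(shear_rates) / n
    mean_y = sum(stresses) / n
    sxx = sum((x - mean_x) ** 2 for x in shear_rates)
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(shear_rates, stresses))
    viscosity = sxy / sxx                  # slope of stress vs. shear rate
    yield_stress = mean_y - viscosity * mean_x
    return yield_stress, viscosity

# Synthetic data generated with yield stress 50 Pa, viscosity 20 Pa*s.
rates = [1.0, 2.0, 4.0, 8.0]
stresses = [50.0 + 20.0 * r for r in rates]
tau_y, mu = fit_bingham(rates, stresses)
print(tau_y, mu)  # → 50.0 20.0
```

A nonzero intercept (the yield stress) is exactly the "resistance to initiating flow" discussed above: below that stress, the suspension does not flow at all.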
Contact: William George william.george@nist.gov
PNNL Publishing Multiple Papers on Scientific Data Management Center
Researchers working on the SciDAC-funded Scientific Data Management Center have had four peer-reviewed papers accepted for publication this year, starting in July. The publications include a paper in the International Journal of Computers and Their Applications, Special Issue on Scientific Workflows, Provenance and Their Applications; a paper in the Proceedings of the 23rd Scientific and Statistical Database Management Conference; and two papers in the Fifth IEEE International Workshop on Scientific Workflows, published within the Proceedings of the Seventh IEEE World Conference on Services.
Contact: Mary Ann Wuennecke, maryanne.wuennecke@pnl.gov
Checkpointing in Petascale Electronic Structure Codes
Checkpointing is a technique for saving the current state of a computer application at a certain point, usually so that the calculation can be restarted in the event of a failure. Without checkpointing, if a large computer application is unexpectedly stopped before it finishes, the computing time used is largely wasted. One important research area that could benefit from checkpointing is the use of electronic structure codes to study materials. A checkpoint of such a code must capture a number of physical quantities (e.g., atomic positions and atom types) and a large amount of data (e.g., charge density and wave functions).
For example, the checkpoint files from one INCITE electronic structure project, "Probing the Non-scalable Nano Regime in Catalytic Nanoparticles with Electronic Structure Calculations," are about 1 terabyte. Writing this immense amount of data to files quickly requires a supercomputing technique called parallel input/output (I/O). Researchers at the Argonne Leadership Computing Facility (ALCF) and Mathematics and Computer Science Division collaborated with the developer of one electronic structure code (known as GPAW) to add parallel I/O to one part of the code. Their work permits the calculation of large systems within the 12-hour queue limit on Intrepid, the ALCF's Blue Gene/P supercomputer. A benchmark calculation has demonstrated that the parallel I/O implementation can successfully scale on eight racks of Intrepid and record the necessary checkpointing data, something previous methods could not do. Now that the approach has proven successful, researchers will integrate this capability into the main trunk of the GPAW code.
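The core idea behind parallel checkpoint I/O is that every process writes its own slice of the data at a fixed offset in one shared file, so all processes can write (and later read back) concurrently instead of funneling data through a single writer. The sketch below illustrates only that offset-addressed pattern; real codes such as GPAW use MPI-IO or parallel HDF5, and the "ranks" here are simulated with a plain loop and seek() so the example runs anywhere. File name and sizes are illustrative assumptions.

```python
# Minimal sketch of the parallel-checkpoint pattern: each rank owns a
# contiguous region of one shared binary file, determined purely by its
# rank number, so no coordination is needed between writers.
import os
import struct
import tempfile

NRANKS = 4
SLICE = 8                         # doubles owned by each rank
REC = struct.calcsize("d")        # bytes per double

def write_checkpoint(path, rank, data):
    """Rank-local write: seek to this rank's region and dump its slice."""
    with open(path, "r+b") as f:
        f.seek(rank * SLICE * REC)
        f.write(struct.pack(f"{SLICE}d", *data))

def read_checkpoint(path, rank):
    """Rank-local restart read of the same region."""
    with open(path, "rb") as f:
        f.seek(rank * SLICE * REC)
        return list(struct.unpack(f"{SLICE}d", f.read(SLICE * REC)))

path = os.path.join(tempfile.mkdtemp(), "ckpt.bin")
with open(path, "wb") as f:       # pre-size the shared checkpoint file
    f.truncate(NRANKS * SLICE * REC)

for rank in range(NRANKS):        # in MPI these writes run concurrently
    write_checkpoint(path, rank, [float(rank * SLICE + i) for i in range(SLICE)])

print(read_checkpoint(path, 2)[:3])  # → [16.0, 17.0, 18.0]
```

On restart, each rank reads back exactly its own region, which is what makes a crashed 12-hour run resumable rather than wasted.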
Contact: Nichols A. Romero naromero@alcf.anl.gov, Kevin Harms harms@alcf.anl.gov, or Rob Latham robl@mcs.anl.gov


Berkeley's James Demmel Elected to National Academy of Sciences
James W. Demmel, a professor of computer science and mathematics at the University of California, Berkeley, who has a joint appointment in Lawrence Berkeley National Laboratory's Computational Research Division, is one of 72 new members elected to the National Academy of Sciences (NAS). Election to the NAS recognizes distinguished and continuing achievements in original research. The May 3 election at the NAS annual meeting brings the total number of active members to 2,113.
Ten Questions for Kate Evans, a Climate Scientist at ORNL
Climate scientist Kate Evans works in Oak Ridge National Laboratory's (ORNL's) Computer Science and Mathematics Division on a variety of projects from using supercomputers to study the movements of ice sheets to developing a model to explore the impacts of storms on ocean currents. The DOE Energy Blog recently interviewed Kate about her work advancing climate simulations and modeling and why an Indiana storm sparked her interest in Earth sciences.


Petaflops Power to NERSC!
NERSC has accepted its first petaflops supercomputer - a 153,216-core Cray XE6. The new flagship system is called "Hopper" in honor of American computer scientist Grace Hopper, who was a pioneer in the field of software development and programming languages. The system is currently the fifth most powerful supercomputer in the world and the second most powerful in the United States, according to the latest edition of the TOP500 list. The system is now available to NERSC's 4,000 users.
ESnet Blog: Shaping Hybrid Networks to Come
As the next generation of packet-optical integration permeates multi-layer Internet architecture as well as telecom equipment designs, valuable lessons can be drawn from hybrid network concepts championed and operationalized by research and education (R&E) networks. In fact, the ESnet4 hybrid architecture, conceived in 2006 and made operational in 2008, consists of separate physical wavelengths for IP-routed and dynamic virtual-circuit services. IEEE Communications Magazine's May 2011 special issue on hybrid networking includes three ESnet co-authored articles.


Three Labs Open Computing Resources to Colleagues in Japan
Computational power from facilities at three U.S. Department of Energy national laboratories is aiding Japanese physicists in their quest to understand the interactions that lie at the heart of matter. From now until the end of 2011, while computing facilities in eastern Japan face continuing electricity shortages due to the March earthquake and tsunami, a percentage of the computing power at Brookhaven National Laboratory on Long Island, Fermi National Accelerator Laboratory near Chicago, and Thomas Jefferson National Accelerator Facility in Virginia will be made available to the Japanese Lattice Quantum Chromodynamics (QCD) community.
Lattice QCD is a computational technique used to study the interactions of quarks and gluons, the basic building blocks of 99 percent of visible matter. Such calculations require enormous computing power, and as a result, groups of QCD physicists worldwide have built dedicated high-performance computing facilities designed specifically for lattice QCD calculations. Following the devastating earthquake of March 2011, such facilities in eastern Japan are turned off during periods of high electricity usage to ensure that power is available for essential activities.
"We appreciate the support from the U.S. QCD community," said University of Tsukuba Vice President Akira Ukawa, spokesperson of the Japanese lattice QCD community. "The sharing of resources will not only be instrumental to continue research in Japan through the current crisis, but will also mark a significant step in strengthening the international collaboration for progress in our field."
Sandia Sponsors Student Team in Year-Long Computer Science Clinic
As part of the educational outreach mission of the DOE SciDAC Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute, Sandia National Laboratories sponsored a Computer Science Clinic team of students from Harvey Mudd College in Southern California. Clinic teams consist of four college seniors who work together for an entire school year on research problems designated by the sponsoring institutions.
Sandia's team investigated the feasibility of two-dimensional matrix partitioning for a wide range of problems. The Harvey Mudd College Computer Science clinic team implemented two-dimensional Cartesian and recursive bisection methods in the Trilinos solver framework's Isorropia matrix-partitioning package, and evaluated the methods on parallel computers at NERSC. This clinic project provided the students with their first exposure to parallel computing, large-scale software projects, and the national laboratories. Two of the students will pursue advanced degrees at San Diego State University and UC Santa Barbara, while two have accepted positions with Microsoft.
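To see what two-dimensional partitioning means concretely, the sketch below shows the simpler of the two methods named above, Cartesian partitioning: the rows and columns of a matrix are divided into contiguous blocks over a pr x pc process grid, and each nonzero (i, j) is owned by the process at the intersection of i's row block and j's column block. This is only an illustrative sketch of the general idea; the function name and the tiny example matrix are hypothetical, and Isorropia's actual Trilinos implementation is far more general.

```python
# Minimal sketch of 2D Cartesian matrix partitioning: nonzero (i, j)
# of an n x n matrix goes to the process whose row block contains i
# and whose column block contains j, on a pr x pc process grid.

def cartesian_owner(i, j, n, pr, pc):
    """Map nonzero (i, j) to a flattened process id in a pr x pc grid."""
    row_block = min(i * pr // n, pr - 1)   # which block of rows owns i
    col_block = min(j * pc // n, pc - 1)   # which block of columns owns j
    return row_block * pc + col_block

# Distribute the nonzeros of a small 8 x 8 sparse matrix over a 2 x 2 grid.
n, pr, pc = 8, 2, 2
nonzeros = [(0, 0), (1, 5), (4, 2), (6, 7), (3, 3)]
parts = {}
for i, j in nonzeros:
    parts.setdefault(cartesian_owner(i, j, n, pr, pc), []).append((i, j))
print(parts)
```

The appeal of 2D methods over the usual row-wise (1D) partitioning is that communication for a sparse matrix-vector multiply is confined to a process's grid row and column, which scales better on large machines.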
Contact: Erik Boman, egboman@sandia.gov; or Karen Devine, kddevin@sandia.gov
Berkeley Lab, Oak Ridge Staff Add Expertise to Annual Cray User Meeting
The 53rd Cray User Group (CUG) meeting, "Golden Nuggets of Discovery," was held May 23-26 in Fairbanks, Alaska, hosted by the Arctic Region Supercomputing Center. At the meeting, chaired by CUG president Nick Cardo of NERSC, staff from Berkeley Lab and the Oak Ridge Leadership Computing Facility shared their expertise in both General and Parallel Technical Sessions.
Berkeley staff from NERSC and the Computational Research Division gave 11 talks:
  • Katie Antypas: Transitioning Applications from the Franklin XT4 System with 4 Cores per Node to the Hopper XE6 System with 24 Cores per Node
  • Katie Antypas: Performance of Atomic and Molecular Collision Codes on the Cray XE6
  • Julian Borrill: Cosmic Microwave Background Data Analysis at the Petascale and Beyond
  • Tina Butler: DVS, GPFS and External Lustre at NERSC - How It's Working on Hopper
  • Jonathan Carter: The Hopper System: How the Largest XE6 in the World Went From Requirements to Reality
  • Kirsten Fagnan: Acceleration of Porous Media Simulations on the Cray XE6 Platform
  • Yun (Helen) He: Benchmark Performance of Different Compilers on a Cray XE6
  • Jim Mellander: High Performance Network Intrusion Detection in the HPC Environment
  • Praveen Narayanan and Alice Koniges: Performance Characterization and Implications for Magnetic Fusion Co-Design Applications
  • Nicholas Wright: The NERSC-Cray Center of Excellence: Performance Optimization for the Multicore Era
  • Zhengji Zhao: Performance of Density Functional Theory codes on Cray XE6
Staff from Oak Ridge presented 15 talks:
  • Buddy Bland: Titan: ORNL's New System for Scientific Computing
  • Robert Whitten and Ashley Barker: Building an Electronic Knowledge Base to Aid in Support of Jaguar
  • Raghul Gunasekaran: Real-Time System Log Monitoring/Analytics Framework
  • Byung H. Park: User Application Monitoring through Assessment of Abnormal Behavior Recorded in RAS Logs
  • Rebecca Hartman-Baker: Optimizing Nuclear Physics Codes on the XT5
  • David Dillow: I/O Congestion Avoidance via Routing and Object Placement
  • Don Maxwell (with Frank Indiviglio of the National Climate-Computing Research Center, NCRC): The NCRC Grid Scheduling Environment
  • Bronson Messer: Case Studies From the OLCF Center for Application Acceleration Readiness: The Importance of Realizing Hierarchical Parallelism in the Hybrid Multicore Era
  • Terry Jones: Providing Runtime Clock Synchronization with Minimal Node-To-Node Time Deviation on XT4s and XT5s
  • Jason Hill: Determining the Health of Lustre Filesystems at Scale
  • Richard Graham: Cheetah: A Scalable Hierarchical Collective Operation Framework
  • Richard Graham: A Programming Environment for Heterogeneous Multi-Core Computer Systems
  • Robert Whitten: Data Systems Modernization (DSM) Project: Development, Deployment, and Direction
  • Markus Eisenbach: Future Proofing WL-LSMS: Preparing for First Principles Thermodynamics Calculations on Accelerator and Multicore Architectures
  • Xingfu Wu: Parallel Finite Element Earthquake Rupture Simulations on Quad- and Hex-Core Cray XT Systems
DOE Recognizes ORNL's Outstanding Mentors
Several researchers from ORNL's Computer Science and Mathematics (CSM) division and the Oak Ridge Leadership Computing Facility (OLCF) were selected by DOE as winners of the Office of Science's Outstanding Mentor Awards. The winners were: Ralf Deiterding, George Ostrouchov, John Cobb, and Pat Worley of CSM. Additionally, Bobby Whitten and Jim Rogers were winners from the OLCF.
The DOE Outstanding Mentor Award program, coordinated through the Office of Science Workforce Development for Teachers and Scientists, began in 2002 as an effort to establish a culture that values mentorship within the DOE national laboratories.
Oak Ridge High School Student Wins Scholarship
Gloria D'Azevedo, a senior at Oak Ridge High School, won first place in the Tennessee Junior Science & Humanities Symposium (JSHS) for her research on improving elimination orderings for tree decompositions. She was awarded a $2000 college scholarship and an all-expense paid trip to the national JSHS, where she will compete for additional scholarships. Gloria's research was conducted at Oak Ridge National Laboratory with Blair D. Sullivan and Chris Groer as part of the DOE ASCR Applied Mathematics project "Scalable Graph Decompositions & Algorithms to Support the Analysis of Petascale Data." Gloria showed computationally that incorporating graph parameters such as the number of second neighbors of a vertex into traditional degree- and fill-based algorithms for choosing an elimination ordering can lead to significantly lower tree widths. The impact of tree width on the complexity of tree decomposition based graph analysis algorithms is exponential, so this new idea for improving orderings could lead to significant speed-up.
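The classical baseline that this work refines is the greedy minimum-degree heuristic: repeatedly eliminate a vertex of smallest degree, connecting its remaining neighbors into a clique; the largest degree seen at elimination time is an upper bound on the tree width induced by that ordering. The sketch below shows only this standard baseline, not the second-neighbor refinement studied in the project; the function name and example graph are illustrative.

```python
# Minimal sketch of the minimum-degree elimination-ordering heuristic.
# The maximum degree observed at elimination time upper-bounds the
# tree width of the decomposition the ordering induces.

def min_degree_ordering(adj):
    """adj: dict vertex -> set of neighbors. Returns (ordering, width bound)."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # work on a copy
    order, width = [], 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))       # pick min-degree vertex
        nbrs = adj.pop(v)
        width = max(width, len(nbrs))                 # degree at elimination
        for a in nbrs:                                # make neighbors a clique
            adj[a] |= nbrs - {a}
            adj[a].discard(v)
        order.append(v)
    return order, width

# A 4-cycle has tree width 2; the heuristic finds a width-2 ordering.
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(min_degree_ordering(cycle))
```

Because many graph-analysis algorithms run in time exponential in the tree width of the decomposition, even a small improvement in the width achieved by the ordering heuristic can yield a large speed-up, which is what makes refinements such as second-neighbor counts attractive.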
ESnet Staff Add Expertise to Europe's Largest Networking Conference
Among the more than 500 participants at the 2011 TERENA Networking Conference (named for the Trans-European Research and Education Networking Association) were three engineers from ESnet, who presented a DOE perspective to an audience of decision makers, networking specialists and managers from all major European networking and research organizations, universities, worldwide sister institutions, and industry. The conference was held May 16-19 in Prague, Czech Republic. Presenting from ESnet were:
  • Bill Johnston: Motivation, Design, Deployment and Evolution of a Guaranteed Bandwidth Network Service
  • Steve Cotter, ESnet head, on "Fighting a Culture of 'Bad is Good Enough'"
  • Inder Monga on "Network Service Interface: Concepts and Architecture"