ASCR Monthly Computing News Report - November 2011



The monthly survey of computing news of interest to ASCR is compiled by Jon Bashor (JBashor@lbl.gov) with news provided by ASCR Program Managers and Argonne, Fermi, Lawrence Berkeley, Lawrence Livermore, Los Alamos, Oak Ridge, Pacific Northwest and Sandia National labs. Contact information and links to additional information, where available, are included with each article.
 
 
 
 
 

SPECIAL SECTION:
DOE Labs Again Demonstrate Leadership at SC11 Conference

Argonne’s Mathematics and Computer Science Division Plays Major Role in SC11
Researchers in Argonne’s Mathematics and Computer Science Division authored or coauthored 12 technical papers presented at SC11. With an acceptance rate of 21 percent, SC11 is one of the most competitive technical conferences in high-performance computing. The papers and authors are as follows:
  • “AME: An Anyscale Many-Task Computing Engine,” Z. Zhang, Daniel S. Katz, M. Ripeanu, Michael Wilde, and Ian Foster
  • “Simplified Parallel Domain Traversal,” Wesley Kendall, Jingyuan Wang, Melissa Allen, Tom Peterka, Jian Huang, and David Erickson
  • “ISABELA-QA: Query-Driven Data Analytics over ISABELA-Compressed Extreme-Scale Scientific Data,” Sriram Lakshminarasimhan, Jonathan Jenkins, Robert Latham, Robert Ross, Nagiza F. Samatova, Isha Arkatkar, Zhenhuan Gong, Hemanth Kolla, Jackie Chen, Seung-Hoe Ku, C.S. Chang, Stephane Ethier, and Scott Klasky
  • “An Image Compositing Solution at Scale,” Kenneth Moreland, Wesley Kendall, Tom Peterka, and Jian Huang
  • “Topology-Aware Data Movement and Staging for I/O Acceleration on Blue Gene/P Supercomputing Systems,” Venkatram Vishwanath, Mark Hereld, Vitali Morozov, and Michael E. Papka
  • “A New Computational Paradigm in Multiscale Simulations: Application to Brain Blood Flow,” Leopold Grinberg, Vitali Morozov, Dimitry Fedosov, Joseph Insley, Michael Papka, Kalyan Kumaran, and George Karniadakis (Gordon Bell Prize Finalist)
  • “On the Duality of Data-Intensive File System Design: Reconciling HDFS and PVFS,” Wittawat Tantisiriroj, Swapnil Patil, Garth Gibson, Seung Son, Samuel Lang, and Robert Ross
  • “Scalable Stochastic Optimization of Complex Energy Systems,” Miles Lubin, Cosmin G. Petra, Mihai Anitescu, and Victor Zavala
  • “A Distributed Look-up Architecture for Text Mining Applications Using MapReduce,” Atilla S. Balkir, Ian Foster, and Andrey Rzhetsky
  • “Server-Side I/O Coordination for Parallel File Systems,” Huaiming Song, Yanlong Yin, Xian-He Sun, Rajeev Thakur, and Samuel Lang
  • “GROPHECY: GPU Performance Projection from CPU Code Skeletons,” Jiayuan Meng, Vitali Morozov, Kalyan Kumaran, Venkatram Vishwanath, and Thomas Uram
  • “Optimizing the Barnes-Hut Algorithm in UPC,” B. Karmmer, J. Zhang, B. Behzad, and Marc Snir
 
Berkeley Lab Staff Contribute Expertise to SC11 Technical Program Sessions
Once again, scientists and engineers from Lawrence Berkeley National Laboratory (Berkeley Lab) made significant contributions to the SC11 Technical Program, sharing their expertise and experience with thousands of attendees at the annual conference sponsored by the IEEE Computer Society and ACM SIGARCH. SC11 was held Nov. 12–18 in Seattle. The SC11 Technical Papers program received 352 high-quality submissions, from which 74 were accepted; 11 of the accepted papers were authored or co-authored by Berkeley Lab staff.
 
Berkeley Lab staff also presented their expertise in five SC11 tutorials, seven Birds of a Feather (BOF) sessions and 10 workshops held in conjunction with the conference. At the Second International Workshop on Data Intensive Computing in the Clouds (DataCloud-SC11), the Best Paper Award went to “I/O Performance of Virtualized Cloud Environments,” co-authored by Devarshi Ghoshal (a Ph.D. student at Indiana University who did an internship on the Magellan project at NERSC), Shane Canon (NERSC), and Lavanya Ramakrishnan (Computational Research Division).
 
ORNL Maintains Major Presence at Annual Supercomputing Conference
Oak Ridge National Laboratory (ORNL) was once again a major player at high-performance computing’s (HPC’s) premier conference, SC11, which took place in Seattle, Washington, November 12–18. In addition to ORNL’s booth, which hosted a series of talks from experts and researchers spanning industry and computational science, the laboratory lent its expertise to numerous other areas throughout the event.
 
For example, staff members from the Oak Ridge Leadership Computing Facility (OLCF) conducted several Birds-of-a-Feather (BoF) sessions for conference attendees. These open discussions have been a staple of the SC conference series for years and are a vital part of the gathering’s educational component. ORNL has consistently contributed, and 2011 was no exception, as OLCF staff anchored three BoF sessions over the course of the conference.
 
But it wasn’t only ORNL’s people who shone at SC11. Jaguar, the laboratory’s and DOE’s premier supercomputer, also received some attention in the HPC Challenge Awards. Jaguar took first runner-up in two of the competition’s four benchmarks, High-Performance Linpack (HPL) and STREAM. HPL measures speed by solving a dense linear system of equations, while STREAM measures sustainable memory bandwidth and the corresponding computational rate for simple vector kernels. In addition, Jaguar took second runner-up in the Global FFT benchmark, which measures a system’s performance computing a very large one-dimensional fast Fourier transform. Also on the systems front, Jaguar ranked number three on the latest TOP500 list, a twice-yearly ranking of the world’s fastest computing systems. The latest list was released at SC11 on November 14.
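To give a concrete sense of what the STREAM benchmark exercises, here is a minimal sketch of its “triad” kernel, a[i] = b[i] + q*c[i], written in Python/NumPy purely for illustration (the names and sizes are illustrative; the official benchmark is a carefully timed C/Fortran loop):

    import numpy as np
    import time

    def triad_bandwidth(n=50_000_000, q=3.0):
        """Time a STREAM-style triad update a = b + q*c and report GB/s."""
        b = np.random.rand(n)
        c = np.random.rand(n)
        a = np.empty_like(b)
        t0 = time.perf_counter()
        a[:] = b + q * c                  # the triad kernel, applied in bulk
        dt = time.perf_counter() - t0
        # STREAM's convention counts 24 bytes of traffic per element (read b,
        # read c, write a); NumPy's temporaries move somewhat more, so treat
        # this only as a rough, order-of-magnitude estimate.
        return 24.0 * n / dt / 1e9

    print(f"approximate triad bandwidth: {triad_bandwidth():.1f} GB/s")

Because the kernel performs almost no arithmetic per byte moved, its score reflects memory bandwidth rather than floating-point speed, which is why it complements HPL.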
 
Besides a major presence in SC’s popular BoF sessions and impressive machine achievements, ORNL featured a number of people on SC11’s conference planning committee. ORNL’s James Rogers served as the technical program executive director, and the laboratory’s Becky Verastegui served as conference vice-chair. In all, ORNL had representatives in 22 areas across various committees, including Applications, Tutorials, Broader Engagement, Birds-of-a-Feather, and Posters.
 
PNNL Contributes to Tech Program at SC11
At the SC11 conference, researchers at Pacific Northwest National Laboratory (PNNL) contributed two technical papers, participated in five workshops, and organized four Birds-of-a-Feather sessions. The two papers are:
  • “An Early Performance Analysis of POWER7-IH HPC Systems,” Kevin Barker, Adolfy Hoisie and Darren Kerbyson, all of PNNL.
  • “Scalable Implementations of Accurate Excited-State Coupled Cluster Theories: Application of High-Level Methods to Porphyrin-Based Systems,” Karol Kowalski, Sriram Krishnamoorthy and Eduardo Apra, all of PNNL; Ryan Olson, Cray, Inc.; and Vinod Tipparaju, Oak Ridge National Laboratory.
 
Berkeley Lab Staff Preview ESnet’s 100 Gbps Capability at SC11 Conference
The exhibition at the SC11 conference in Seattle got off to a gala start with the annual exhibition opening party on Monday night, Nov. 14. Berkeley Lab staff used the occasion to showcase the scientific capabilities of the new 100 gigabit-per-second prototype network created by the Energy Sciences Network (ESnet) under DOE’s Advanced Networking Initiative. The event marked the first use of the prototype to transmit scientific data from the National Energy Research Scientific Computing Center (NERSC) in Oakland.
 
The demo, which included side-by-side presentations of a 5-terabyte dataset streamed from NERSC at 100 Gbps and at 10 Gbps, was a team effort from all of Berkeley Lab Computing Sciences. From the Computational Research Division, the visualization drew on the talents of Yushu Yao, Prabhat, Burlen Loring, Hank Childs, Mark Howison, Wes Bethel, John Shalf and Aaron Thomas. ESnet contributors included Brian Tierney, Eric Pouyoul, Patrick Dorn, Evangelos Chaniotakis, John Christman, Chin Guok, Chris Tracy and Lauren Rotman. From NERSC, Jason Lee, Shane Canon, Tina Declerck and Cary Whitney provided critical support.
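As a back-of-the-envelope comparison (assuming the full 5 TB is moved and ignoring protocol overhead, so real transfers take somewhat longer), the tenfold difference in line rate translates directly into transfer time:

    # Rough transfer-time estimate for a 5 TB dataset at the two demo rates.
    dataset_bits = 5 * 10**12 * 8             # 5 TB expressed in bits
    for rate_gbps in (10, 100):
        seconds = dataset_bits / (rate_gbps * 10**9)
        print(f"{rate_gbps:>3} Gbps: about {seconds / 60:.0f} minutes")
    # about 67 minutes at 10 Gbps versus about 7 minutes at 100 Gbps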
 
The demo showed how the universe has changed from the nearly homogeneous state of 13 billion years ago to today’s universe, which is rich in structures that include galaxies, clusters of gravitationally bound galaxies, galaxy superstructures called walls that span hundreds of millions of light-years, and the relatively empty spaces between superstructures, called voids. The simulation was created using the Nyx code on 4,096 cores of NERSC’s Hopper, a Cray XE6 system. Read more about the demo.
[Images: the visualization as streamed at 10 Gbps and at 100 Gbps]
The demonstration area was packed with invited guests and conference attendees who also toasted ESnet's 25th anniversary of networking leadership.
 
Blue Gene/Q Prototype Wins Graph500 at SC11
The Blue Gene/Q Prototype II received the No. 1 ranking on the latest Graph500 list announced on November 15 at SC11 in Seattle. The submission for the winning ranking was a joint effort between IBM, Argonne National Laboratory, and Lawrence Livermore National Laboratory. The Graph500 list ranks supercomputers based on their performance on data-intensive applications and thus complements the TOP500 list, which is based on the LINPACK benchmark. Argonne’s Blue Gene/P, Intrepid, received an impressive fifth-place ranking on the Graph500 list.
 
Traditional benchmarks and performance metrics fail to provide useful information on the suitability of supercomputing systems for data-intensive applications. Backed by a steering committee of more than 30 international HPC experts from academia, industry, and national laboratories, Graph500 established a new set of large-scale benchmarks for these applications. The new benchmarks will guide the design of hardware architectures and software systems intended to support such applications and will help inform procurements.
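The benchmark’s core kernel is a breadth-first search (BFS) over a very large synthetic graph, with performance reported in traversed edges per second (TEPS). The Python sketch below is a minimal serial illustration of that kernel (function and variable names are illustrative), not the benchmark’s reference implementation, which generates Kronecker graphs with billions of edges and runs the search in parallel:

    from collections import deque
    import time

    def bfs_teps(adj, root):
        """Level-by-level BFS over an adjacency-list graph; returns the BFS
        parent tree and a crude traversed-edges-per-second (TEPS) figure."""
        parent = {root: root}
        frontier = deque([root])
        edges_traversed = 0
        t0 = time.perf_counter()
        while frontier:
            v = frontier.popleft()
            for w in adj[v]:
                edges_traversed += 1
                if w not in parent:        # first visit: record parent, extend frontier
                    parent[w] = v
                    frontier.append(w)
        return parent, edges_traversed / (time.perf_counter() - t0)

    # A tiny example graph; the real benchmark searches graphs many orders of magnitude larger.
    adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
    parents, teps = bfs_teps(adj, root=0)

Unlike HPL’s dense linear algebra, this access pattern is dominated by irregular, data-dependent memory references, which is why Graph500 complements rather than duplicates the TOP500.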
 
For more information, contact Kalyan Kumaran, or visit the Graph500 website.
 
Former Argonne Intern Wins SC11 Best Student Paper Award
Former Argonne intern Wesley Kendall has been named winner of the SC11 conference’s Best Student Paper Award. The winning paper, titled “Simplified Parallel Domain Traversal,” was written jointly with Jingyuan Wang, Melissa Allen, Tom Peterka, Jian Huang, and David Erickson. Coauthor Tom Peterka, an assistant computer scientist in Argonne’s Mathematics and Computer Science (MCS) Division, has worked with Kendall for several years. Their association began through the DOE SciDAC-2 Institute for Ultrascale Visualization; subsequently, during the summers of 2008 and 2009, Kendall joined the MCS Division as an Argonne intern. The two have continued their close collaboration, coauthoring ten papers, and Peterka is currently a member of Kendall’s dissertation committee at the University of Tennessee, Knoxville.
 

RESEARCH NEWS:

Climate Forecast: Today’s Severe Drought, Tomorrow’s Normal
While the worst drought since the Dust Bowl of the 1930s grips Oklahoma and Texas, scientists are warning that what we consider severe drought conditions in North America today may be normal for the continent by the mid-21st century, due to a warming planet.
 
A team of scientists from Berkeley Lab, Lawrence Livermore National Laboratory and the National Oceanic and Atmospheric Administration (NOAA) came to this conclusion after analyzing 19 different state-of-the-art climate models. Looking at the balance between precipitation and evapotranspiration (the movement of water from soil to air), they found that no matter how rainfall patterns change over the next 100 years, a warming planet leads to drought. Their results were published in the December 2011 issue of the American Meteorological Society’s Journal of Hydrometeorology.
 
ExM Advances System Support for Extreme-Scale, Many-Task Applications
The ASCR X-Stack project “ExM” explores an approach to exascale programming that blends the emerging “many-task computing” (MTC) model with two paradigms whose roots go back over 30 years: functional programming and data-flow architecture. ExM is developing an implicitly parallel programming model that may be well suited for the upper-level logic of many prospective exascale applications, ranging from climate model analysis to molecular biology to uncertainty quantification and extreme-scale ensemble studies.
 
Recent results from ExM were described in a paper at the WORKS2011 workshop at SC11 that highlighted “AME,” a new many-task engine whose dispatching performance scales linearly up to 14,120 tasks per second and which efficiently utilizes 16,384 cores for a synthetic workload with task durations shorter than two seconds. For the Montage astronomy application, the engine eliminates 73% of the data transfer between compute nodes and a global filesystem.
 
Another recent paper, presented at the Workshop on Parallel Programming Models and Systems Software for High-End Computing at ICPP2011, described ExM’s ability to run many parallel MPI tasks of very short duration. Work is in progress on a parallel, data-flow-based script evaluator that can execute tasks at rates in the range of 500,000 tasks per second.
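As a loose, single-node illustration of the many-task pattern ExM targets (generic Python with illustrative names, not ExM’s AME dispatcher or its data-flow scripting interface), a worker pool can be fed thousands of short, independent tasks and their results gathered as they complete:

    from concurrent.futures import ProcessPoolExecutor, as_completed

    def short_task(i):
        # Stand-in for one brief, independent analysis or simulation step.
        return i * i

    def run_many(n_tasks=10_000, workers=8):
        results = {}
        with ProcessPoolExecutor(max_workers=workers) as pool:
            futures = {pool.submit(short_task, i): i for i in range(n_tasks)}
            for fut in as_completed(futures):         # results arrive as tasks finish,
                results[futures[fut]] = fut.result()  # independent of submission order
        return results

    if __name__ == "__main__":
        out = run_many()
        print(len(out), "tasks completed")

The challenge ExM addresses is making this pattern hold up at extreme scale, where task dispatch rates and traffic to the global filesystem, rather than the tasks themselves, become the bottleneck.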
Contact: Michael Wilde, wilde@mcs.anl.gov
 

FACILITIES/INFRASTRUCTURE:

ESnet’s Prototype Network Connects Three Labs at 100 Gbps
DOE is now supporting scientific research at unprecedented bandwidth speeds—at least 10 times faster than commercial Internet providers—with a new network that connects thousands of researchers using three of the world’s top supercomputing centers in California, Illinois and Tennessee. The new network was officially unveiled at the SC11 Conference in Seattle, Washington, where DOE researchers used the network for groundbreaking climate data transfers and astrophysics visualizations.
 
The project, known as the Advanced Networking Initiative (ANI), was funded with $62 million from the 2009 economic stimulus program and is intended for research use, but it could lead to widespread commercial use of similar technology. The network now delivers data at 100 gigabits per second (Gbps), making it one of the fastest systems in the world. It is the first step in ESnet’s nationwide upgrade and will serve as a pilot for future deployment of 100 Gbps Ethernet in research and commercial networks. The initiative aims to accelerate the commercialization of 100 Gbps networking technologies by several years and uses new optical technology to reduce the number of routers required, lowering associated equipment and maintenance costs.
 
Oak Ridge, Argonne to Provide 1.7 Billion Hours of Computing Time to 66 Projects via INCITE

On Nov. 15, the DOE Office of Science announced awards of nearly 1.7 billion processor hours to 60 high-impact research projects that will address scientific and engineering challenges of national and global importance. The awards were made under DOE’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. By granting allocations on leadership-class computing systems at Argonne and Oak Ridge national laboratories to researchers from academia, industry, and government labs, the INCITE program contributes to significant advances in science. From climate to nuclear reactor safety to more efficient combustion processes, the program aims to accelerate scientific discoveries and technological innovations by awarding, on a competitive basis, time on supercomputers to researchers with large-scale, computationally intensive projects that address grand challenges in science and engineering.

ORNL’s Oak Ridge Leadership Computing Facility (OLCF) will allocate time to 35 individual projects for a total of 940 million hours. Home to DOE’s fastest supercomputer, Jaguar, the OLCF will host INCITE projects for the eighth consecutive year. Jaguar enables researchers to run the most complex simulations in their fields while reaching ever-faster times to solution. OLCF staff also assist researchers in maximizing their codes’ potential on Jaguar and work with them throughout their projects, resulting in scientific breakthroughs year after year.

Read the list of OLCF INCITE projects.

 

The Argonne Leadership Computing Facility (ALCF) will allocate 732 million compute hours to 31 projects. Argonne’s current supercomputer, Intrepid, is a 40-rack IBM Blue Gene/P capable of a peak performance of 557 teraflops (557 trillion calculations per second). In 2012, the ALCF will become home to IBM’s next-generation Blue Gene, the Blue Gene/Q, named Mira. At 10 petaflops, Mira will be nearly 20 times faster than Intrepid.

 