ASCR Monthly Computing News Report - June 2011



The monthly survey of computing news of interest to ASCR is compiled by Jon Bashor (JBashor@lbl.gov) with news provided by ASCR Program Managers and Argonne, Fermi, Lawrence Berkeley, Lawrence Livermore, Los Alamos, Oak Ridge, Pacific Northwest and Sandia National labs. Contact information and links to additional information, where available, are included with each article.
 
In this issue: Research News, People, Facilities/Infrastructure, and Outreach & Education.

RESEARCH NEWS:

OLCF Researchers Named R&D 100 Winners
A team led by Oak Ridge National Laboratory's (ORNL's) Robert Harrison has been awarded a 2011 R&D 100 Award by R&D Magazine. The award recognizes the Multiresolution Adaptive Numerical Environment for Scientific Simulations, or MADNESS, developed at ORNL by a team led by Harrison and George Fann. The team also includes Rebecca Hartman-Baker of the Oak Ridge Leadership Computing Facility's (OLCF's) Scientific Computing Group.
 
MADNESS is a free, open-source, general-purpose, user-friendly software package for developing scientific simulations that run on everything from laptops to massively parallel supercomputers. MADNESS uses the latest parallel computing and solution methodologies to solve many-dimensional integral and differential equations accurately and precisely for real-world problems, giving scientists and engineers a platform for easily creating new applications with confidence in the accuracy of their results. This is Harrison's second R&D 100 Award: as the principal architect of NWChem, a chemistry code that was included in the software package MS3, he was part of an R&D 100 team in 1999.
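MADNESS's multiwavelet machinery is far more elaborate, but the core multiresolution idea, refining a representation only where the function is hard to approximate, can be sketched in a few lines of Python. The function, error estimate, and tolerance below are invented for illustration and do not reflect MADNESS's actual interfaces.

```python
import math

def adaptive_boxes(f, a=0.0, b=1.0, tol=1e-6, max_depth=20, depth=0):
    """Recursively subdivide [a, b] until a linear interpolant of f on each
    box matches f at the box midpoint to within tol (a crude local error
    estimate). Returns the list of accepted boxes."""
    mid = 0.5 * (a + b)
    err = abs(f(mid) - 0.5 * (f(a) + f(b)))  # midpoint interpolation error
    if err < tol or depth >= max_depth:
        return [(a, b)]
    return (adaptive_boxes(f, a, mid, tol, max_depth, depth + 1) +
            adaptive_boxes(f, mid, b, tol, max_depth, depth + 1))

# A function with a sharp feature near x = 0.5; boxes cluster around it.
boxes = adaptive_boxes(lambda x: math.tanh(100 * (x - 0.5)), tol=1e-4)
print(len(boxes), "boxes; smallest width =", min(b - a for a, b in boxes))
```

Refinement concentrates automatically around the sharp feature, which is how multiresolution codes deliver guaranteed precision without paying for a uniformly fine grid.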
Contact: Jayson Hines (hinesjb@ornl.gov)
 
Jaguar Helps Pinpoint How Copper Folds Protein into Precursors of Parkinson’s Plaques
Aided by the Jaguar supercomputer at ORNL, researchers at North Carolina State University have figured out how copper induces misfolding in the protein associated with Parkinson’s disease, leading to the creation of the fibrillar plaques that characterize the disease. This finding has implications for both the study of Parkinson’s progression, as well as for future treatments. The protein in question, alpha-synuclein, is the major component of fibrillar plaques found in Parkinson’s patients. Researchers had already discovered that certain metals, including copper, could increase the rate of misfolding by binding with the protein, but were unsure of the mechanism by which this binding took place.
 
Lead author Rose and North Carolina State colleagues Miroslav Hodak, research assistant professor of physics, and Jerzy Bernholc, Drexel Professor of Physics and Director of the Center for High Performance Simulation, developed a series of computer simulations designed to ferret out the most likely binding scenario. According to Hodak, “We simulated the interactions of hundreds of thousands of atoms, which required multiple hundred thousand CPU-hour runs to study the onset of misfolding and the dynamics of the partially misfolded structures.”
 
The number of calculations was so large that Hodak and Bernholc had to devise a new method to make it possible for a computer to process them. Only supercomputers like Jaguar—the most powerful in the United States, in fact—are up to the task. The simulations ultimately revealed the binding configuration most likely to result in misfolding. The results appear in the June 14 edition of the Nature journal Scientific Reports.
Contact: Jayson Hines (hinesjb@ornl.gov)
 
PNNL Researchers, Collaborators Develop New Power Grid Estimation Method
The national power grid poses a fundamental challenge for real-time, on-line estimation of the grid state space, as the computation typically involves tens of thousands of state elements and thousands of regularly occurring measurements. The most straightforward approach, estimating the entire state space using all measurements, is so computationally challenging that it would typically take several minutes of computation per update, even on today’s high-end DOE supercomputers, which is much too long for the fast dynamics in the power grid.
 
To address this problem, researchers at the University of North Carolina at Chapel Hill and DOE’s Pacific Northwest National Laboratory (PNNL) in Richland, Washington, have developed a new method for dynamically choosing a provably optimal subset of measurements—the measurements that will provide the most needed and highest quality information about the state of the grid at any moment. By using such subsets of measurements, estimation should be able to proceed at a much higher rate, thus reducing latency in the estimates and increasing situational awareness. Researchers liken the approach to fighting the multi-headed beast Lernaean Hydra in Greek mythology: if one cannot destroy all of the “heads” at once, one should at least cut off the most threatening heads at each moment. The researchers presented their work at the IEEE Trondheim PowerTech 2011 conference in June.
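The paper’s provably optimal criterion is not reproduced here; as a rough illustration of choosing the most informative measurements, the sketch below greedily picks, at each step, the scalar measurement that most reduces the trace of a linear-Gaussian state covariance under a Kalman update. All matrices, dimensions, and noise levels are invented.

```python
import numpy as np

def greedy_measurement_subset(P, H, R, k):
    """Greedily choose k scalar measurements (rows of H with noise
    variances R) that maximize the reduction in trace(P) under
    sequential Kalman covariance updates. Returns (indices, updated P)."""
    P = P.copy()
    chosen, remaining = [], list(range(H.shape[0]))
    for _ in range(k):
        # Trace reduction from row i: ||P h_i||^2 / (h_i P h_i + r_i)
        scores = [(np.dot(P @ H[i], P @ H[i]) / (H[i] @ P @ H[i] + R[i]), i)
                  for i in remaining]
        _, best = max(scores)
        chosen.append(best)
        remaining.remove(best)
        Ph = P @ H[best]
        P = P - np.outer(Ph, Ph) / (H[best] @ Ph + R[best])  # rank-1 update
    return chosen, P

# Toy problem: 50 state variables, 200 candidate measurements, keep 10.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
P0 = A @ A.T + 50 * np.eye(50)       # prior state covariance (SPD)
H = rng.standard_normal((200, 50))   # candidate measurement rows
R = np.full(200, 0.1)                # measurement noise variances
idx, P1 = greedy_measurement_subset(P0, H, R, k=10)
print("chosen:", idx, "| trace reduced by:", round(P0.trace() - P1.trace(), 1))
```

Because only a handful of the candidate measurements are processed per update, each estimation cycle costs a small fraction of the full-state approach, which is the latency reduction the researchers are after.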
Contact: Greg Welch (welch@cs.unc.edu) and Zhenyu Huang (zhenyu.huang@pnnl.gov)
 
Voyagers, Computer Models Find Surprising Magnetic Froth at Solar System’s Edge
NASA’s Voyager probes have reached the end of our solar system, where they’ve found neither giants nor dragons, but something nearly as surprising—a turbulent froth of magnetic bubbles. Using new computer models to analyze Voyager data, university scientists computing at the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory (Berkeley Lab) have found that the sun’s distant magnetic field is made up of bubbles about 100 million miles wide.
 
The bubbles are created when magnetic field lines reorganize, a process known as magnetic reconnection. This new model suggests the bubbles are self-contained structures disconnected from the solar magnetic field and may help scientists explain how some very strong cosmic rays make it to Earth. The findings are described in the June 9 edition of the Astrophysical Journal.
 
Researchers Use the Cloud to Shed Light on Longstanding Proton Spin Mystery
It’s been nearly 25 years since the European Muon Collaboration made a startling discovery: only a portion of a proton’s spin comes from the quarks that make up the proton. The revelation was a bit of a shock for physicists who had believed that the spin of a proton could be calculated simply by adding the spin states of the three constituent quarks. This is often described as the “proton spin crisis.”
 
“At that time people realized protons are not just a sum of three quarks stuck together like Lego-blocks,” said Jan Balewski, an MIT-based member of the Solenoidal Tracker At RHIC (STAR) experiment at Brookhaven National Laboratory. “Protons are dynamic systems of interacting constituent quarks, gluons, and sea quarks.” At Brookhaven, three concurrent Relativistic Heavy Ion Collider experiments are looking to explain why the quarks’ spin contributions to the proton’s spin amount to only a small fraction of what was expected.
 
Analyzing the experimental data would require computing resources beyond what is currently available onsite, so the researchers turned to DOE’s Magellan Cloud Computing program to see if the cloud could help. Magellan consists of two cloud computing testbeds: one at the National Energy Research Scientific Computing Center (NERSC) in Berkeley, California and another at Argonne National Laboratory near Chicago, Illinois. The result of the team’s efforts was a real-time cloud-based data processing system that functions as a self-adjusting assembly line and handles variable throughput. No human intervention is needed, and there is no supervisor process that orchestrates the entire data flow. Every stage of the process is governed by local rules designed to handle time-outs and refusals from other elements by waiting a few minutes and then starting over.
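The STAR team’s production system is not shown here, but the stated design rule (every stage governed by local rules, handling time-outs and refusals from its neighbors by waiting and retrying, with no supervisor process) can be sketched as a simple worker loop. The queue sizes, timings, and toy “calibrate” stage below are invented for illustration.

```python
import queue, time

def stage(inbox, outbox, work, timeout_s=5, backoff_s=3):
    """One pipeline stage driven purely by local rules: pull a task,
    process it, hand it downstream. On a time-out or refusal from a
    neighbor, wait a little and try again; no central orchestrator."""
    while True:
        try:
            task = inbox.get(timeout=timeout_s)   # upstream silent? retry
        except queue.Empty:
            time.sleep(backoff_s)
            continue
        if task is None:                          # shutdown signal
            outbox.put(None)
            return
        try:
            result = work(task)
        except Exception:                         # transient failure: requeue
            time.sleep(backoff_s)
            inbox.put(task)
            continue
        while True:
            try:
                outbox.put(result, timeout=timeout_s)  # downstream full? wait
                break
            except queue.Full:
                time.sleep(backoff_s)

# Toy run: five "events" flow through one stage between bounded queues.
raw, calibrated = queue.Queue(maxsize=100), queue.Queue(maxsize=100)
for event in range(5):
    raw.put(event)
raw.put(None)
stage(raw, calibrated, work=lambda e: e * 2)
print([calibrated.get() for _ in range(5)])
```

Chaining many such stages, each obeying only its local rules, yields the kind of self-adjusting assembly line described above: throughput adapts because every stage independently slows down or catches up.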
 
PNNL Scientists Develop Scalable Computational Kernel Algorithm for UQ Analysis
Researchers at Pacific Northwest National Laboratory have developed and analyzed a multiple/semi-coarsening multigrid method for solving the mono-energetic and multi-energetic Boltzmann equations. This method has demonstrated algorithmic scalability over the physical parameter cross-section regimes, and will be one of the core computational kernels in an uncertainty quantification analysis of the Boltzmann transport equation in neutron transport problems. A research article is currently under peer review at the journal Numerical Linear Algebra with Applications, and another article is being submitted to the SIAM Journal on Scientific Computing.
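The semi-coarsening and energy-group structure of the PNNL method are specific to the Boltzmann setting, but the multigrid idea it builds on (smooth the high-frequency error, then correct the remainder recursively on coarser grids) can be sketched on a 1-D Poisson model problem. The grid, smoother, and transfer operators below are textbook choices, not the authors’ algorithm.

```python
import numpy as np

def v_cycle(u, f, h, n_smooth=3):
    """One multigrid V-cycle for -u'' = f on a uniform 1-D grid with zero
    boundary values (a model problem, not the Boltzmann equation).
    Weighted-Jacobi smoothing plus recursive coarse-grid correction."""
    for _ in range(n_smooth):  # pre-smooth (damped Jacobi, weight ~2/3)
        u[1:-1] += 0.67 * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
    if len(u) <= 3:
        return u
    r = np.zeros_like(u)       # residual r = f - A u
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    rc = r[::2].copy()                        # restrict residual (injection)
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h)  # recurse on coarser grid
    e = np.zeros_like(u)       # prolong correction by linear interpolation
    e[::2] = ec
    e[1:-1:2] = 0.5 * (ec[:-1] + ec[1:])
    u += e
    for _ in range(n_smooth):  # post-smooth
        u[1:-1] += 0.67 * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
    return u

n = 129                            # 2^k + 1 grid points
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)   # exact solution is sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, x[1] - x[0])
print("max error:", np.abs(u - np.sin(np.pi * x)).max())
```

The appeal, and what “algorithmic scalability” refers to above, is that the work per cycle is linear in the number of unknowns while the convergence rate stays bounded as the grid is refined.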
Contact: Barry Lee (barry.lee@pnnl.gov)

PEOPLE:

Jack Wells Named New Director of Science at the OLCF
Jack Wells is the new Director of Science for the Oak Ridge Leadership Computing Facility. In his new position, Wells will be responsible for developing the OLCF’s science strategy in support of the Department of Energy’s scientific missions. Wells began his career at Oak Ridge in 1997 as a Wigner postdoctoral fellow. He worked in Oak Ridge National Laboratory’s (ORNL’s) Computing and Computational Sciences Directorate as group leader of both the Computational Materials Sciences group in the Computer Science and Mathematics Division and the Nanomaterials Theory Institute in the Center for Nanophase Materials Sciences. Wells also served for two years as Director of Institutional Planning within the Laboratory Director’s Office. During an off-site assignment from 2006 to 2008, Wells served as a legislative staffer in the Washington, D.C., office of Senator Lamar Alexander.
 
Wells’s new role will build on an established relationship with the OLCF. As the principal investigator for an INCITE (Innovative and Novel Computational Impact on Theory and Experiment) project investigating lithium/air batteries, Wells has used OLCF leadership computing systems to tackle complex energy storage issues. Wells assumes his responsibilities from Bronson Messer, who has served as the OLCF acting director of science since Doug Kothe left to direct the new Consortium for Advanced Simulation of Light Water Reactors.
Contact: Jayson Hines (hinesjb@ornl.gov)
 
PNNL Scientist Tartakovsky Wins Early Career Program Award
Alexandre Tartakovsky, a computational mathematician at Pacific Northwest National Laboratory, was selected to receive an Early Career Research Program award from the DOE Office of Science. The program funds individual investigators in the early stages of their research careers. Tartakovsky’s proposal, “New Dimension Reduction Methods and Scalable Algorithms for Multi-Scale Nonlinear Phenomena,” will be funded at a total of $2.5 million over five years. The project will focus on developing new multiscale methods to improve the accuracy and efficiency of computational models in a wide variety of scientific applications. Tartakovsky is also the principal investigator on multiple projects funded by ASCR.
Contact: Alex Tartakovsky (alexandre.tartakovsky@pnnl.gov)
 
Berkeley’s James Sethian Is Named Einstein Visiting Fellow in Berlin
James Sethian, Professor of Mathematics at UC Berkeley and head of Berkeley Lab’s Mathematics Group, is one of two mathematicians recently named Einstein Visiting Fellows at the Berlin Mathematical School. An Einstein Visiting Fellow is not a conventional visiting scientist who attends a university or research institution in Berlin for just one semester; the fellows, funded by the Einstein Foundation Berlin, are asked to become long-term partners with the science and research community in Berlin.
 
LBNL’s David Leinweber Named One of Decade’s Top 10 Innovators in Trading Industry
David Leinweber of Berkeley Lab’s Computational Research Division has been named by Advanced Trading magazine as one of its “Top 10 Innovators of the Decade” for his work in developing a service that allows trading strategies to react to news the instant it breaks, managing what the magazine describes as “a fire hose of aggregated updates.” Last year, Leinweber joined Berkeley Lab from UC Berkeley and established the Center for Innovative Financial Technology (CIFT) to help build a bridge between the computational science and financial markets communities. Leinweber, who has a Ph.D. in applied mathematics from Harvard University, may be best known as the author of Nerds on Wall Street: Math Machines and Wired Markets, published in 2009.
 
LBNL’s Juan Meza Named New Dean of Natural Sciences at UC Merced
Juan Meza, acting director of Berkeley Lab’s Computational Research Division (CRD) and head of CRD’s High Performance Computing Research Department, has been named Dean of the School of Natural Sciences at the University of California, Merced. The appointment, announced Friday, June 3, will be effective this fall.
 
“As I was considering this position, I was truly impressed by the quality of the faculty, the commitment and diversity of UC Merced’s students and the great potential to help shape the development of the newest campus in one of the world’s best public university systems,” Meza said. “I’m looking forward to tapping into this great potential to help develop each student as an individual and the institution as a whole.” Read more.

 

FACILITIES/INFRASTRUCTURE:

Blue Gene/Q Prototype Tops Green500 List of Energy-Efficient Supercomputers
The Green500 has just released its latest rankings of the most energy-efficient supercomputers in the world. Topping the list is the IBM Blue Gene®/Q—a prototype of the production machine the Argonne Leadership Computing Facility is scheduled to deploy in 2012. The list shows that six of the world’s top 10 most energy-efficient supercomputers are built on IBM high-performance computing technology. The list includes supercomputers from the United States, China and Germany that are being used for a variety of applications such as astronomy, climate prediction and life sciences.
 
Energy-efficient supercomputers offer critical cost savings by lowering power consumption, reducing expenses associated with cooling, and scaling to larger systems while keeping the power bill acceptable. For example, for every $1 spent on electricity with the largest petascale system on the Green500 list, less than $0.40 would be spent on a system based on the IBM Blue Gene/Q, making it 2.5 times more energy efficient.
 
More information is available on the Green500 website.
 
Berkeley Lab’s Shane Canon Debunks Misconceptions at ScienceCloud2011
In a June 29 article titled “Science Clouds 2011 debunks cloud myths and more,” International Science Grid This Week spotlighted NERSC Technology Integration Group Lead Shane Canon’s presentation at the ScienceCloud2011 workshop held on June 8 in San Jose. Co-authors include Lavanya Ramakrishnan, Iwona Sakrejda, Tina Declerck, Keith Jackson, Nick Wright, John Shalf, and Krishna Muriki.
 
Titled “Debunking Some Common Misconceptions of Science in the Cloud,” the presentation addressed five misconceptions:
  • Clouds are simple to use and don’t require system administrators.
  • My job will run immediately in the cloud.
  • Clouds are more efficient.
  • Clouds allow you to ride Moore’s Law without additional investment.
  • Commercial clouds are much cheaper than operating your own system.
 
ALCF’s Intrepid Ranked No. 1 on Graph 500 List for Second Consecutive Time
Intrepid, the IBM Blue Gene/P supercomputer at the Argonne Leadership Computing Facility (ALCF), was ranked No. 1 on the Graph 500 list announced at the 2011 International Supercomputing Conference held June 19-23 in Hamburg, Germany. This is the second consecutive time that Intrepid achieved the top distinction, placing Argonne at the forefront of large-scale data analytics. The list ranks supercomputers based on their performance on data-intensive applications and thus complements the TOP500 list that is based on the LINPACK benchmark.
 
Traditional benchmarks and performance metrics do not provide useful information on the suitability of supercomputing systems for data-intensive applications. Backed by a steering committee of more than 30 international HPC experts from academia, industry, and national laboratories, Graph 500 establishes a new set of large-scale benchmarks for these applications.
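At its heart, the Graph 500 kernel is breadth-first search over a huge synthetic graph, scored in traversed edges per second (TEPS) rather than floating-point operations. A single-core toy version of that measurement might look like the sketch below; the real benchmark generates Kronecker graphs at enormous scale and distributes the search across the whole machine, and the graph size and seed here are arbitrary.

```python
import random, time
from collections import deque

def bfs_teps(adj, root):
    """Breadth-first search from root, returning traversed edges per
    second (TEPS), the metric Graph 500 reports."""
    level = {root: 0}
    frontier, edges = deque([root]), 0
    t0 = time.perf_counter()
    while frontier:
        v = frontier.popleft()
        for w in adj[v]:
            edges += 1
            if w not in level:          # first visit: extend the frontier
                level[w] = level[v] + 1
                frontier.append(w)
    return edges / (time.perf_counter() - t0)

# Toy graph: 2^16 vertices, average degree 16.
random.seed(1)
n = 1 << 16
adj = [[] for _ in range(n)]
for _ in range(8 * n):
    a, b = random.randrange(n), random.randrange(n)
    adj[a].append(b)
    adj[b].append(a)
print(f"{bfs_teps(adj, root=0):.3e} TEPS")
```

Because the access pattern is dominated by irregular, fine-grained memory references rather than dense arithmetic, TEPS stresses exactly the parts of a machine that LINPACK does not.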
 
An ALCF team led by Kalyan Kumaran, working jointly with an Indiana University team led by Jeremiah Willcock and an IBM team coordinated by Fabrizio Petrini, submitted the No. 1 result on 131,072 cores of Blue Gene/P—four times larger than the previous result on 32,768 cores submitted at SC10 in November 2010—using a graph that likewise was four times larger. It is the largest graph ever analyzed on a parallel machine of any kind.
Contact: Kalyan Kumaran (kumaran@alcf.anl.gov)
 
Jaguar Users Receive HPC Innovation Excellence Awards
Mike Henderson, CEO of BMI Corporation and Smart Truck, was announced as one of nine winners of the HPC Innovation Excellence Awards, which recognize organizations that have achieved important, quantifiable results with the help of high-performance computing. The OLCF’s Cray XT5 “Jaguar” supercomputer allowed Henderson and colleagues at BMI to design new add-on parts for long-haul trucks that dramatically decrease drag and increase fuel efficiency, resulting in an estimated fuel savings of $5,000 per truck, per year. A key part of the process was creating the most complex model of a trailer to date and studying the airflow around it using NASA’s Fully Unstructured Navier-Stokes (FUN3D) application. FUN3D is a suite of computational fluid dynamics codes for analysis, adjoint-based error estimation, mesh adaptation, and design optimization.
 
Beyond substantial reductions in run times, access to Jaguar helped deliver an even bigger win for Henderson and his team. They bypassed traditional physical prototyping and moved from concept to production-ready designs in 18 months instead of the three years they initially estimated it would take on their small in-house cluster.
 
NERSC Honored for HPC Innovation Excellence
The Department of Energy’s NERSC has been honored with an HPC Innovation Excellence Award for providing supercomputing, storage, and service support to the 20th Century Reanalysis Project—a collaboration of the University of Colorado, National Oceanic and Atmospheric Administration (NOAA), the Atmospheric Circulation Reconstructions over the Earth Initiative, and 30 international organizations. Led by University of Colorado climatologists Gilbert Compo and Prashant Sardeshmukh and NOAA meteorologist Jeffrey Whitaker, the project uses supercomputers to reconstruct global historical weather maps from 1871 to the present day. This dataset helps the science community put current weather extremes in a historical perspective, determine how extremes are changing, and validate computer climate models. Some of this research was conducted using a 3 million-hour allocation on Jaguar at OLCF.
 
ESnet Celebrates World IPv6 Day
On June 8, 2011, content providers, universities, ISPs, and other network organizations engaged in activities related to World IPv6 Day. This massive exercise was akin to a “test drive” where content was shared and networks were configured to support Internet Protocol version 6 (IPv6), which is already supplanting the current Internet Protocol version 4 (IPv4). As a network pioneer, ESnet has made most of its public content available over IPv6 for several years. However, ESnet still took a number of actions to further engage the community on World IPv6 Day. Read more. ESnet network engineer Michael Sinatra answers questions about IPv6 on the DOE Energy Blog.
 
ESnet’s OSCARS Honored in UC Larry L. Sautter Award Program
The University of California Information Technology Leadership Council has announced that ESnet’s On-Demand Secure Circuits and Reservation System (OSCARS) was selected for honorable mention in the 2011 Larry L. Sautter Award Program. The Sautter Award was established in 2000 to encourage and recognize innovative deployment of information technology in support of the University’s mission. The Sautter Award honors projects developed by faculty and staff in any department at the ten UC campuses, the UC Office of the President (UCOP), and Lawrence Berkeley National Laboratory. The awards will be presented at the UC Computing Services Conference at UC Merced, August 8, 2011.
 

OUTREACH & EDUCATION:

Berkeley Lab Organizes Prestigious 18th Householder Symposium
Berkeley Lab Computing Sciences hosted the 18th Householder Symposium at the Granlibakken Conference Center at Lake Tahoe on June 12–17. The symposium, which is considered one of the most important meetings on numerical linear algebra, was organized this year by Esmond Ng of the Computational Research Division’s Scientific Computing Group (SCG).
 
Jim Demmel, a professor at UC Berkeley and a faculty scientist in SCG, was one of the plenary speakers, with the topic “Avoiding Communication in Numerical Linear Algebra.” Other invited participants from SCG were:
  • Zhaojun Bai, “Progress in Linear and Nonlinear Eigensolvers” (poster)
  • Ming Gu, “Reduced Rank Regression via Convex Optimization” (poster)
  • Sherry Li, “Towards an Optimal Parallel Approximate Sparse Factorization Algorithm Using Hierarchically Semi-Separable Structures” (poster)
  • Esmond Ng, “A Combinatorial Problem in Sparse Orthogonal Factorization”
  • Ichitaro Yamazaki, “A Parallel Hybrid Linear Solver for Large-Scale Highly Indefinite Linear Systems of Equations” (poster)
  • Chao Yang, “Solving Nonlinear Eigenvalue Problems in Electronic Structure Calculation” (poster)
 
Crash Course at OLCF Trains Next Generation of Supercomputing Researchers
The OLCF recently held a “Crash Course in Supercomputing” for approximately 60 summer interns to provide an overview of concepts and techniques in high-performance computing. The June 16 workshop was divided into two sections. The first section discussed the UNIX operating system on which OLCF resources run, while the second section covered concepts in parallel programming. Students used laptops at the workshop to practice programming methods.
 
Section one of the course also included discussions of the vi editor (a screen-based text editing program used by many Unix users), compiling code and working with makefiles, and the commonly used C programming language. After introducing parallel programming, section two familiarized students with the Message Passing Interface (MPI), batch scripts, OpenMP, and debugging. This year’s “Crash Course in Supercomputing” is the fourth hosted by OLCF.
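As a flavor of the parallel-programming half, the canonical first MPI exercise has every process identify itself and cooperate on a trivial computation. Below is a minimal sketch, written with Python’s mpi4py bindings rather than the C used in the course; the strided partial-sum example is invented for illustration.

```python
from mpi4py import MPI  # Python bindings over the same MPI calls taught in C

comm = MPI.COMM_WORLD
rank = comm.Get_rank()  # this process's ID, 0 .. size-1
size = comm.Get_size()  # total number of processes launched

# Each rank sums a strided slice of 0..999; rank 0 combines the pieces.
partial = sum(range(rank, 1000, size))
total = comm.reduce(partial, op=MPI.SUM, root=0)
if rank == 0:
    print(f"sum of 0..999 computed by {size} ranks: {total}")
```

Launched with, for example, mpiexec -n 4 python sum_demo.py, the script prints 499500 regardless of how many ranks share the work.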
Contact: Vitali Morozov (morozov@anl.gov)
 
Berkeley Lab Staff Give Keynotes, Talks at Federated Computing Research Conference
Once every three or four years, the Association for Computing Machinery (ACM) sponsors the Federated Computing Research Conference (FCRC), an assemblage of 16 affiliated research conferences and workshops held as a weeklong coordinated meeting at a common time and place. FCRC’11 was held June 4–11 at the San Jose Convention Center, and Berkeley Lab Computing Sciences staff gave a number of keynote presentations and talks. Here is a look at some of the invited presentations:
  • John Shalf of NERSC’s Advanced Technologies Group gave the opening keynote talk at the EMEA (Emerging Applications and Many-Core Architecture) workshop on Saturday, June 4. Shalf’s presentation was on “Manycore ASICs for Energy Efficient Scientific Computing: From Teraflop Toaster Ovens to Exascale Computing.”
  • UC Berkeley Professor Dave Patterson, who holds a joint appointment in the Lab’s Computational Research Division (CRD), gave a keynote address on “Emerging Applications from the UC Berkeley Par Lab,” also on June 4.
  • NERSC Director Kathy Yelick gave the June 5 keynote address at the 1st International Workshop on Adaptive Self-Tuning Computing Systems for the Exaflop Era (EXADAPT 2011). Yelick’s talk was entitled “Autotuning in the Exascale Era!”
  • Jim Demmel, a professor of computer science at UC Berkeley with a joint appointment in CRD’s Scientific Computing Group, gave an invited talk on “Accurate and Efficient Expression Evaluation and Linear Algebra, or Why It Can be Easier to Compute Accurate Eigenvalues of a Vandermonde Matrix than the Accurate Sum of 3 Numbers” on June 8 at SNC 2011, the Symbolic Numeric Computation Workshop.
  • Yelick delivered the keynote talk on “Exascale Opportunities and Challenges” at the 20th International ACM Symposium on High-Performance Parallel and Distributed Computing (HPDC) on June 9.