ASCR Monthly Computing News Report - December 2011



This monthly survey of computing news of interest to ASCR is compiled by Jon Bashor (JBashor@lbl.gov) with news provided by Argonne, Fermi, Lawrence Berkeley, Lawrence Livermore, Los Alamos, Oak Ridge, Pacific Northwest and Sandia national labs.

In this issue:

Research News

People

Facilities/Infrastructure

Outreach and Education


Research News

Co-Designing Architectures and Algorithms for Exascale Combustion Simulations

In a five-year project recently announced by the Department of Energy, the Combustion Exascale Co-Design Center will combine the talents of combustion scientists, mathematicians, computer scientists, and hardware architects. This multidisciplinary team will work to simultaneously redesign each aspect of the combustion simulation process—from algorithms to programming models to hardware architecture—in order to create high fidelity combustion simulations that can run at the next level of supercomputing power, the exascale.

Among the leaders in the field are Jackie Chen of Sandia National Laboratories and John Bell of Lawrence Berkeley National Laboratory (Berkeley Lab), each attacking the problem from a different perspective. Chen, an engineer by training, has worked in Sandia’s Combustion Research Facility since 1989 with funding from the Department of Energy’s Office of Basic Energy Sciences. Bell, on the other hand, is an applied mathematician who has been developing algorithms to study combustion for the past 15 years with support from DOE’s Office of Advanced Scientific Computing Research (ASCR). Chen will lead the new project with Bell serving as deputy PI. Read more.

Sandia Tackles “Curse of Dimensionality” in Uncertainty Quantification

ASCR researchers at Sandia National Laboratories and the University of Southern California have recently developed rigorous uncertainty quantification (UQ) methods for coupled multiphysics systems that mitigate the “curse of dimensionality,” whereby the computational cost of UQ grows dramatically with the number of uncertain sources. By embedding the UQ calculation into the multiphysics coupling itself, they have developed techniques to adapt the uncertainty representation within each physics component, and they demonstrated a reduction in computational cost of three orders of magnitude on a representative thermal-neutronics problem.
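To make the growth concrete, consider a polynomial chaos representation of the uncertainty (a common choice for this class of methods, though the representation is not specified above). A total-degree basis of order p in d random variables has

    P = \binom{d+p}{p} = \frac{(d+p)!}{d!\,p!}

terms, so even a modest case such as d = 20 uncertain sources at order p = 3 already requires P = 1,771 coefficients per unknown. Adapting the representation within each physics component keeps d and p small wherever the physics allows it.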

For more information, contact Eric Phipps, etphipp@sandia.gov.

Simulation on Jaguar Predicts New Properties for Boron Nitride Nanoribbons

Alejandro Lopez-Bezanilla, a research associate at Oak Ridge National Laboratory (ORNL), is using the Jaguar supercomputer to study a proposed graphene substrate: the compound boron nitride. He is funded by the Petascale Initiatives program of the Department of Energy’s Office of Advanced Scientific Computing Research. “Boron nitride is a covalent material with atoms tightly bonded to each other, but it also presents a strong ionic behavior,” explains Lopez-Bezanilla. That combination makes it a good insulator and a poor conductor. With the help of computer simulations run on Jaguar, ORNL’s petascale supercomputer, Lopez-Bezanilla took a closer look—a nanoscale look—at boron nitride’s properties.

Working with colleagues at ORNL, Lopez-Bezanilla simulated zigzag-edged boron nitride nanoribbons (zBNNRs) modified by attaching oxygen or sulfur atoms to the boron or nitrogen atoms along the cut edges. Under these conditions, the insulating boron nitride becomes metallic. The team used two systems for the simulations: Jaguar ran the Vienna Ab-initio Simulation Package (VASP), and the Oak Ridge National Laboratory Institutional Cluster ran the SIESTA code.

The computational models showed that the edge shape, the choice of added elements—oxygen, sulfur, or hydrogen—and the location of the elements all play a role in creating different behaviors in the nanoribbons. Simulating the transformation of a known material into one with novel structure and function opens the door to possible new uses—optical, magnetic, electronic—and a world of opportunities. Contact: Jayson Hines, hinesjb@ornl.gov.

Closest-Ever Look at the Explosion of a Type Ia Supernova

Even as the “supernova of a generation” came into view in backyards across the northern hemisphere last August, physicists and astronomers who had caught its earliest moments were developing a surprising and much clearer picture of what happens during a titanic Type Ia explosion. In the Dec. 15 issue of the journal Nature, a team led by Peter Nugent, who heads the Computational Cosmology Center (C3) at Berkeley Lab, describes the closest, most detailed look ever at one of the universe’s brightest “standard candles,” the celestial mileposts that led to the discovery of dark energy. Rollin Thomas of C3 is a co-author of the paper. Read more.

Type Ia supernovae (SN Ia’s) are the extraordinarily bright and remarkably similar “standard candles” astronomers use to measure cosmic growth, a technique that in 1998 led to the discovery of dark energy—and 13 years later to a Nobel Prize, “for the discovery of the accelerating expansion of the universe.” The light from thousands of SN Ia’s has been studied, but until now their physics—how they detonate and what the star systems that produce them actually look like before they explode—has been educated guesswork.

On August 24 of this year, searching data as it poured into DOE’s National Energy Research Scientific Computing Center (NERSC) from an automated telescope on Mount Palomar in California, Nugent spotted a remarkable object. It was soon confirmed as a Type Ia supernova in the Pinwheel Galaxy, some 21 million light-years distant. That’s unusually close by cosmic standards, and it is the nearest SN Ia since 1986; it was subsequently given the official name SN 2011fe.

PNNL’s Lee Develops New Method for Diffusion-Type Equations

Pacific Northwest National Laboratory (PNNL) scientist Barry Lee has developed a multigrid method and preconditioners for stochastic Galerkin finite element discretizations (SFEM) of diffusion-type equations. These methods are relevant for developing scalable hybrid UQ methodologies for multiphysics applications that involve elliptic diffusion phenomena. Parallel software has been developed to test the effectiveness of these methods. For example, on 1,008 cores of Jaguar, a spatial three-dimensional problem with eight million grid points, 792 degrees of freedom per spatial grid point, and a standard deviation of 1.0 in the random diffusion coefficients (one of the largest SFEM calculations made to date) required only 14 iterations. A research article is currently being completed for submission to the SIAM Journal on Scientific Computing.
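For reference, a generic form of the problem class reads as follows (this is the standard stochastic diffusion setup, not necessarily the exact formulation in Lee’s forthcoming article):

    -\nabla \cdot \big( a(x,\xi)\, \nabla u(x,\xi) \big) = f(x),
    \qquad a(x,\xi) = a_0(x) + \sum_{k=1}^{M} a_k(x)\, \xi_k ,

where \xi = (\xi_1, \ldots, \xi_M) is a vector of random variables. Expanding u in a polynomial chaos basis and applying a Galerkin projection couples all of the stochastic modes into one large, block-structured linear system; that system is what the multigrid method and preconditioners are designed to solve efficiently at scale.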

Supercomputers Take a Cue from Microwave Ovens

As sophisticated as modern climate models are, one critical component continues to elude their precision: clouds. Simulating these fluffy masses of water droplets and ice crystals is so computationally complex that even today’s most powerful supercomputers, working at quadrillions of calculations per second, cannot model them accurately. Clouds modulate the climate, reflecting some sunlight back into space, which cools the Earth; but they can also act as a blanket and trap heat.

“Getting their effect on the climate system correct is critical to increasing confidence in projections of future climate change,” says Michael Wehner, a climate scientist at the Lawrence Berkeley National Laboratory (Berkeley Lab). In order to build the breakthrough supercomputers scientists like Wehner need, researchers are looking to the world of consumer electronics like microwave ovens, cameras and cell phones, where everything from chips to batteries to software is optimized to the device’s application. This co-design approach brings scientists and computer engineers into the supercomputer design process, so that systems are purpose-built for a scientific application, such as climate modeling, from the bottom up.

In a paper entitled “Hardware/Software Co-design of Global Cloud System Resolving Models,” published in the October 2011 Journal of Advances in Modeling Earth Systems, Wehner and coauthors argue that the scientific supercomputing community should take a cue from consumer electronics like smart phones and microwave ovens: Start with an application—like a climate model—and use that as a metric for successful hardware and software design.
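A back-of-the-envelope estimate shows why cloud-resolving resolution is so demanding; the numbers below are illustrative assumptions, not figures taken from the paper.

    # Rough, illustrative sizing of a global cloud-resolving grid.
    # All values are assumptions for illustration, not numbers from the paper.
    earth_surface_km2 = 5.1e8   # approximate surface area of the Earth in km^2
    horizontal_res_km = 1.0     # roughly the cell size needed to resolve clouds
    vertical_levels = 100       # assumed number of vertical model levels

    columns = earth_surface_km2 / horizontal_res_km**2   # ~5.1e8 columns
    cells = columns * vertical_levels                     # ~5.1e10 grid cells
    print(f"{columns:.1e} columns, {cells:.1e} grid cells")

    # A conventional ~100 km climate grid has (100/1)**2 = 10,000 times fewer
    # columns, and finer grids also require proportionally shorter time steps,
    # which is why cloud-resolving simulation pushes toward purpose-built,
    # exascale-class machines.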


People

NERSC’s Scott Campbell Wins Best Paper at LISA ’11

Scott Campbell of the Networking, Security and Servers Group at the National Energy Research Scientific Computing Center (NERSC) won the best paper award at USENIX LISA ’11, the 25th Large Installation System Administration Conference, for his work entitled “Local System Security via SSHD.” His paper describes a method for near-real-time identification of attack behavior and local security policy violations taking place over SSH. At NERSC, Campbell works on the Bro intrusion detection system and on incident response. The LISA ’11 conference was held in Boston, Mass., December 4–9, 2011.

Berkeley’s David Patterson Writes Opinion Piece on Curing Cancer with Computing

David Patterson

David Patterson, a professor of Computer Science at UC Berkeley with a joint appointment in Berkeley Lab’s Computational Research Division, wrote a column in the Dec. 5 edition of the New York Times outlining how thousands of people, volunteering their personal computers, could speed the task of sequencing the genome of a tumor. Patterson suggests this approach in light of the recent discovery that cancer is a genetic disease, caused primarily by mutations in our DNA. Besides providing the molecular drivers of cancer, these changes to the DNA also create the diversity within a tumor that makes it so hard to eradicate completely. Read Patterson’s column.


Facilities/Infrastructure

Mobile App Hooks Up Users at the Oak Ridge Leadership Computing Facility

Experts at the Oak Ridge Leadership Computing Facility (OLCF) are working on many levels to create a better experience for the center’s high-performance computing (HPC) users. Their most recent effort is a smartphone application that shows the current status of the OLCF machines, available from the Google Android Market and the Apple App Store. The application opens a list of the OLCF computational resources: the Jaguar supercomputer, the Spider file system, the HPSS archive, the Lens visualization cluster, the DTN data transfer system, and the Frost and Smoky clusters. The app shows the current status and when the status last changed. A secondary screen for each system informs users of any scheduled maintenance or outages.
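As a rough sketch of the kind of status record the app presents for each resource (the field names below are hypothetical; the announcement does not describe the OLCF data feed):

    # Hypothetical data model for a machine-status entry; the field names are
    # illustrative and are not taken from the OLCF app or its data feed.
    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List

    @dataclass
    class ResourceStatus:
        name: str                      # e.g., "Jaguar", "Spider", "HPSS"
        status: str                    # e.g., "up" or "down"
        status_changed: datetime       # when the status last changed
        scheduled_outages: List[str] = field(default_factory=list)  # secondary screen

    jaguar = ResourceStatus("Jaguar", "up", datetime(2011, 12, 1, 8, 0))
    print(jaguar.name, jaguar.status, "since", jaguar.status_changed)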

Adam Simpson and Bill Renaud, HPC user support specialists at OLCF, developed the application’s general features; Simpson built the platform for the Apple version and Renaud, the Android version. In the past, principal investigators and their team members were notified of machine status by email or by logging into the center’s website. Contact: Jayson Hines, hinesjb@ornl.gov.

Oak Ridge Computing Facility Donates Eugene System to Argonne

The OLCF cleared some room for next-generation machines on Tuesday, December 13, and simultaneously helped a partner. The center’s IBM Blue Gene/P system, dubbed Eugene, computed at a maximum of 27 trillion calculations per second from 2008 until it was decommissioned in October 2011. Once the decommissioning schedule for Eugene was set, Jim Rogers, director of operations at the center, approached his counterparts at the Argonne Leadership Computing Facility (ALCF) about whether they could reuse or redeploy some portion of that hardware. Argonne identified a possible use for the OLCF hardware that will allow that center to move a development partition from the main IBM Blue Gene/P Intrepid resource to a separate partition. Remaining equipment will also be redeployed in Argonne’s onsite spare hardware inventory, reducing maintenance and operating costs for the U.S. Department of Energy program.

Eugene, one of the early supercomputers at the OLCF, delivered roughly 45 million processor hours per year for OLCF researchers and collaborators. It had 2,048 quad-core PowerPC processors and 64 input/output (I/O) nodes. The OLCF primarily uses Cray machines for its supercomputers, while the ALCF uses IBM architectures for its premier supercomputers, including Intrepid, a Blue Gene/P model capable of 557 trillion calculations per second. The ALCF’s next-generation machine will be a Blue Gene/Q named Mira. Contact: Jayson Hines, hinesjb@ornl.gov.


Outreach and Education

ALCF Holds 2012 Winter Workshop

Whether you’re a new user at the Argonne Leadership Computing Facility (ALCF), a seasoned pro, or somewhere in between, the ALCF’s 2012 Winter Workshop has something to offer. The workshop will be held January 23–26 at Argonne National Laboratory, located near Chicago. Based on user feedback, this year’s workshop features agenda topics chosen by the users themselves.

The workshop begins Monday, January 23, with an ALCF primer that gives beginners all the information they need to get started on ALCF resources. Days 2–4 will then feature topics of interest to all users, including a user-directed portion of the workshop built from the input attendees provide about their interests when they register. When registering, attendees will choose from a list of 35 topics, including:

  • Transitioning from Blue Gene/P to Blue Gene/Q
  • Effective Job Submission Strategies
  • Code Optimization Examples: What worked, what didn’t?
  • Introduction to OpenMP
  • Utilizing Unique Features of the Blue Gene

More information can be found on the 2012 Winter Workshop registration page.
