ASCR Monthly Computing News Report - August 2011



The monthly survey of computing news of interest to ASCR is compiled by Jon Bashor (JBashor@lbl.gov) with news provided by ASCR Program Managers and Argonne, Fermi, Lawrence Berkeley, Lawrence Livermore, Los Alamos, Oak Ridge, Pacific Northwest and Sandia National labs. Contact information and links to additional information, where available, are included with each article.
 
In this issue:
 
RESEARCH NEWS:
NERSC Computers and Networks Catch a Young Supernova
Project on Multiscale Blood Flow Models Named ACM Gordon Bell Prize Finalist
PNNL Scientists Find Small Particles Have Big Impact on Climate
Jaguar Supercomputer Used to Help Model Hurricane Structure and Intensity
Boeing Uses Jaguar to Validate Aircraft Modeling Applications
Sandia Releases New Version of Coopr Optimization Library
 
PEOPLE:
Marc Snir Named New Director of Argonne’s Mathematics and Computer Science Division
Berkeley’s Vern Paxson Honored for Contributions to Internet Measurement, Security
Sandia and LANL Researchers Co-Author Book Chapter
Berkeley Lab Team Wins Best Paper Award at International Symposium
Book on Science of Music Cites LBNL’s Keith Jackson’s Work with Mickey Hart
Jay Srinivasan Named NERSC’s Computational Systems Group Lead
Argonne Computational Scientist Discusses Issues in Software Tool Interoperability
 
FACILITIES/INFRASTRUCTURE:
New ESnet Portal Gives Users Insight into Network’s Inner Workings
 
OUTREACH & EDUCATION:
ASCR Researchers Organize Women in Mathematics Workshop
INCITE Manager Shares Best Practices for Allocating Computing Time
ORNL’s Industrial HPC Partnerships Program Reaches New Audiences
Webinar Introduces ORNL’s Titan to User Community

RESEARCH NEWS:

NERSC Computers and Networks Catch a Young Supernova
A supernova discovered Aug. 23 by Lawrence Berkeley National Laboratory’s (Berkeley Lab’s) Peter Nugent is closer to Earth - approximately 21 million light-years away - than any other of its kind in a generation. Astronomers believe they caught the supernova within hours of its explosion, a rare feat made possible by a specialized survey telescope and state-of-the-art computational tools. Astronomers are calling the event an “instant cosmic classic.” The discovery of such a supernova so early and so close has energized the astronomical community, which is scrambling to observe it with as many telescopes as possible, including the Hubble Space Telescope.
 
The supernova, dubbed PTF 11kly, occurred in the Pinwheel Galaxy, located in the “Big Dipper,” otherwise known as the Ursa Major constellation. It was discovered by the Palomar Transient Factory (PTF) survey, which is designed to observe and uncover astronomical events as they happen.
 
The PTF survey uses a robotic telescope mounted on the 48-inch Samuel Oschin Telescope at Palomar Observatory in Southern California to scan the sky nightly. As soon as the observations are taken, the data travels more than 400 miles to NERSC at Berkeley Lab via the National Science Foundation’s High Performance Wireless Research and Education Network and DOE’s Energy Sciences Network (ESnet). At NERSC, computers running machine learning algorithms in the Real-Time Transient Detection Pipeline scan through the data and identify events to follow up on. Within hours of identifying PTF 11kly, this automated system sent the coordinates to telescopes around the world for follow-up observations.
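The core of that step - scoring each candidate detection from image subtraction with a trained classifier and flagging the likely-real ones for follow-up - can be sketched roughly as below. This is an illustrative outline only; the features, classifier choice, threshold, and toy data are assumptions, not the actual NERSC pipeline code.

```python
# Illustrative sketch only (not the actual PTF/NERSC pipeline code):
# score candidate detections from image subtraction with a trained
# classifier and keep the likely-real transients for follow-up.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy training set: each row is a candidate described by a few numeric
# features (e.g., flux change, shape, distance to a known artifact);
# labels mark real transients (1) versus subtraction artifacts (0).
X_train = rng.normal(size=(500, 3))
y_train = (X_train[:, 0] > 0.5).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Candidates from tonight's images: flag those scoring above a threshold
# so their coordinates can be sent out for follow-up observations.
candidates = rng.normal(size=(10, 3))
scores = clf.predict_proba(candidates)[:, 1]
follow_up = np.where(scores > 0.8)[0]
print("Candidates flagged for follow-up:", follow_up.tolist())
```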
These images show Type Ia supernova PTF 11kly, the youngest ever detected, over three nights. The left image, taken on August 22, shows the event before it exploded as a supernova, approximately 1 million times fainter than the human eye can detect. The center image, taken on August 23, shows the supernova at about 10,000 times fainter than the human eye can detect. The right image, taken on August 24, shows that the event is six times brighter than the previous day. Two weeks later it was visible with a good pair of binoculars.
 
Project on Multiscale Blood Flow Models Named ACM Gordon Bell Prize Finalist
A research endeavor focused on multiscale brain blood flow simulations, led by George Karniadakis of Brown University, has been named an ACM Gordon Bell Prize finalist - one of five finalists selected this year. Administered by the Association for Computing Machinery (ACM), the prize is awarded annually to recognize outstanding achievement in high performance computing. Karniadakis and his research team from Brown and Argonne National Laboratory - Leopold Grinberg, Vitali Morozov, Dmitry Fedosov, Joseph Insley, Michael Papka, and Kalyan Kumaran - will present their research results in a technical session during the SC11 conference on Wednesday, Nov. 16.
 
To treat diseases involving disruptions of blood flow to the brain, doctors must first understand how multiple scales of blood vessel networks work within the brain, both alone and together. Currently, the researchers are using supercomputing resources at the Argonne Leadership Computing Facility (ALCF) to conduct an Innovative and Novel Computational Impact on Theory and Experiment (INCITE) project focused on creating multiscale models that show the interconnected workings of multiple scales of the brain’s blood vessels. The research extends an earlier ALCF project, whose runs formed the basis of Karniadakis’s Gordon Bell submission. Multiscale models provide doctors with a more realistic picture of blood flow and offer the greatest hope for developing new lifesaving treatments.
 
Karniadakis’s goal is to enhance the ability to chart the behavior of individual red blood cells and improve predictive capabilities in medical procedures. The team is also developing software and algorithms pertaining to blood flow for petascale supercomputers.
 
Main research accomplishments to date include simulation of the initial stages of clot formation; simulation of blood flow with healthy and diseased red blood cells (RBCs) at different stages of cerebral malaria and sickle cell anemia; modeling of the glycocalyx layer, which plays an important role in protecting the arterial wall; and modeling of the microcirculation and distribution of RBCs in Y-shaped bifurcating arteries.
Blood flow in the brain is a multiscale problem. Left image: macrodomain, where the Navier-Stokes equations are solved; the inset (right) is the microdomain, where dissipative particle dynamics is applied. Of interest is the deposition of platelets on the aneurysmal wall. Right image: platelet aggregation on the wall of an aneurysm at initial (A) and advanced (B) stages. Yellow particles are active platelets, and red particles are inactive platelets. Streamlines depict the instantaneous velocity field.
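For readers unfamiliar with the two regimes named in the caption, the macrodomain is governed by the incompressible Navier-Stokes equations, while the microdomain evolves individual particles under dissipative particle dynamics (DPD) forces. The forms below are standard textbook statements of those models, shown for orientation only, not the team’s specific formulation or discretization.

```latex
% Macrodomain: incompressible Navier-Stokes equations
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
    = -\frac{1}{\rho}\nabla p + \nu\,\nabla^{2}\mathbf{u},
\qquad \nabla\cdot\mathbf{u} = 0

% Microdomain: dissipative particle dynamics, where each particle i feels
% conservative, dissipative, and random pairwise forces
m_i\,\frac{d\mathbf{v}_i}{dt}
    = \sum_{j \neq i}\left(\mathbf{F}^{C}_{ij} + \mathbf{F}^{D}_{ij} + \mathbf{F}^{R}_{ij}\right)
```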
Contact: George Karniadakis, george_karniadakis@brown.edu
 
PNNL Scientists Find Small Particles Have Big Impact on Climate
Using computing systems at the National Energy Research Scientific Computing Center, scientists from Pacific Northwest National Laboratory (PNNL) have found that the small effects of particles suspended in our atmosphere add up over time and can lead to big errors in climate prediction models. Known as aerosols, particles such as ozone, dust, and sea salt both scatter and absorb sunlight in different proportions depending on their type and elevation.
 
“Aerosols like ozone, dust and sea salt in our atmosphere scatter and absorb sunlight. Depending on the type of particle and its elevation above Earth’s surface, these particles can tip the energy balance toward heating or cooling,” says Dr. William Gustafson, an atmospheric scientist at PNNL and principal investigator of the study, published in the July 2011 Journal of Geophysical Research - Atmospheres.
 
 
A high-resolution simulation for Mexico City (top) shows a more detailed and accurate picture of aerosol pollution than the representation from a global climate model (bottom). The deep red to light green colors represent concentrations of aerosol pollution, with red being the highest and light green the lowest.
 
Jaguar Supercomputer Used to Help Model Hurricane Structure and Intensity
Information from major hurricanes of the last two decades (such as Katrina) is being put to good use by scientists striving to understand how hurricanes intensify. A research team led by Jon Reisner of Los Alamos National Laboratory (LANL) is using the Oak Ridge Leadership Computing Facility’s (OLCF’s) Jaguar supercomputer to fold data from lightning detectors - and even wind instruments mounted on planes flown into the eye of a hurricane - into improved atmospheric models. These simulations may lead to more accurate prediction of hurricane intensities and better preparation of the public for these inevitable disasters.
 
Reisner’s team is using HIGRAD, a modeling and simulation software tool developed at LANL, to simulate and track individual liquid or solid particles in either a Lagrangian or Eulerian framework. Simply put, a Lagrangian frame of reference follows an individual fluid parcel as it moves through space and time, whereas a Eulerian framework focuses on a specific location through which the fluid flows over time. The Lagrangian frame of reference allows HIGRAD to track individual water particles, permitting a more realistic representation of cloud structure within hurricanes.
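The two viewpoints are linked by the standard material-derivative identity: the rate of change seen while following a parcel (Lagrangian) equals the local rate of change at a fixed point plus an advective term (Eulerian). This is the textbook relation, included only for orientation.

```latex
% For any field \phi carried by the flow with velocity \mathbf{u}:
% Lagrangian rate of change = Eulerian (fixed-point) rate of change + advection
\frac{D\phi}{Dt} = \frac{\partial \phi}{\partial t} + (\mathbf{u}\cdot\nabla)\,\phi
```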
 
The virtual particles include those the hurricane has generated through condensation. These particles can grow, collide, melt, freeze, or undergo any other microphysical process that occurs in an actual hurricane at this minute spatial scale. In total, HIGRAD used approximately 118,000 Jaguar processors across three separate simulations. HIGRAD is the first tool used to build a three-dimensional model of the lightning activity in a hurricane. Data measured from Rita - the fourth most intense Atlantic hurricane on record (Katrina was the sixth) - by LANL’s lightning-detection network, the Los Alamos Sferic Array (LASA), suggest a correlation between the intensification of lightning activity and the intensification rate of the storm.
Contact: Jayson Hines, hinesjb@ornl.gov
 
Boeing Uses Jaguar to Validate Aircraft Modeling Applications
Boeing researchers recently used the Jaguar supercomputer at the Oak Ridge Leadership Computing Facility to validate aerodynamics codes for airplane design, saving substantial R&D time that otherwise would be spent calculating solutions. By using Jaguar, the world’s third-fastest computer, to simulate various takeoff and landing scenarios, the team validated and improved several aerodynamics codes, saving the company time and money and possibly influencing the process by which next-generation Boeing aircraft are designed and manufactured.
 
Led by Boeing’s Moeljo Hong and John Bussoletti, the effort produced a suite of computational fluid dynamics simulations that not only supports more efficient modeling - and possibly design - of passenger and military aircraft, but also gives Boeing an advantage that only high-performance computing (HPC) can provide. And by using its INCITE allocation on Jaguar to model airplanes more accurately, Boeing hopes to make them safer and increase their fuel efficiency, reducing US dependence on foreign oil and fossil fuels in general. The team’s project yielded several interesting discoveries. Besides greatly strengthening the engineers’ case for procuring more simulation resources, the research proved that leading aerodynamics applications can scale to leadership-class systems and provide insights critical to understanding the complex phenomena associated with aircraft aerodynamics.
 
Boeing’s research differed from most of Jaguar’s allocations in that each simulation used only a small portion (fewer than 10,000) of Jaguar’s more than 200,000 cores. However, each study solved approximately ten different problems. Had all of these problems been solved simultaneously, said Bussoletti, the team could have used more than 50 percent of Jaguar’s two-plus petaflops of computing power. The team is handing its results over to Boeing R&D to be analyzed for their value in the design, testing, and manufacturing processes.
Contact: Jayson Hines, hinesjb@ornl.gov
 
Sandia Releases New Version of Coopr Optimization Library
Sandia National Laboratories (SNL) released the third major version of its open-source Coopr optimization library on July 18, 2011. Based partially on ASCR Base Math research, Coopr is a Python-based library for modeling and solving mathematical optimization problems, including linear and non-linear optimization problems, both with and without parameter uncertainty. Coopr is being used in both undergraduate and graduate classroom environments, by researchers at numerous U.S. and international universities, and for applications at SNL and Lawrence Livermore National Laboratory (LLNL).
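For a flavor of what a Coopr model looks like, here is a minimal sketch of a small linear program written in the Pyomo modeling style. It is not taken from the Coopr documentation; the Coopr-era import paths and the choice of the GLPK solver are assumptions (newer releases expose the same classes through pyomo.environ).

```python
# Minimal sketch of a Coopr/Pyomo model: maximize 3x + 2y subject to x + y <= 4.
# Import paths reflect the Coopr 3.x era; newer releases use pyomo.environ.
from coopr.pyomo import (ConcreteModel, Var, Objective, Constraint,
                         NonNegativeReals, maximize)
from coopr.opt import SolverFactory

model = ConcreteModel()
model.x = Var(within=NonNegativeReals)
model.y = Var(within=NonNegativeReals)

# Objective and constraint are written as ordinary Python expressions.
model.profit = Objective(expr=3 * model.x + 2 * model.y, sense=maximize)
model.capacity = Constraint(expr=model.x + model.y <= 4)

# Hand the model to any installed LP solver (GLPK assumed here).
results = SolverFactory("glpk").solve(model)
```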
 
A paper describing the core modeling library (Pyomo) is scheduled to appear in the Springer journal Mathematical Programming Computation, and a paper describing the stochastic optimization library (PySP) is pending minor revisions with the same journal. Further information can be found at https://software.sandia.gov/trac/coopr
Contact: Jean-Paul Watson, jwatson@sandia.gov
 

PEOPLE:

Marc Snir Named New Director of Argonne’s Mathematics and Computer Science Division
Marc Snir, currently the Michael Faiman and Saburo Muroga Professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign (UIUC), has been named the new director of the Mathematics and Computer Science (MCS) Division at Argonne National Laboratory. Snir joined Argonne on September 1. Ewing (“Rusty”) Lusk, who has served as MCS Division director since 2006, is returning to full-time research in the division.
 
Snir received his Ph.D. in mathematics from the Hebrew University of Jerusalem. For more than a decade, he worked in the IBM Research Division, where he received two Outstanding Innovation Awards for his research on hierarchical memory models and on parallel system architecture and software structure, and a Corporate Award for work on scalable parallel communications and software technologies. At UIUC, Snir chaired the Department of Computer Science from 2001 to 2007, and he has been heavily involved in several recent research initiatives. He was the first director of the Illinois Informatics Institute, co-director of the Intel and Microsoft Universal Parallel Computing Research Center, and co-director of the Illinois-INRIA Center for Petascale Computing. Snir is a fellow of the American Association for the Advancement of Science, the Association for Computing Machinery, and the Institute of Electrical and Electronics Engineers. He will remain involved with the Blue Waters project at Urbana-Champaign and will retain his appointment as professor at UIUC.
 
Berkeley’s Vern Paxson Honored for Contributions to Internet Measurement, Security
Vern Paxson, who holds joint appointments at the University of California, Berkeley, the International Computer Science Institute, and Berkeley Lab’s Computational Research Division, has been named recipient of this year’s ACM SIGCOMM Award “for his seminal contributions to the fields of Internet measurement and Internet security, and for distinguished leadership and service to the Internet community.”
 
Sandia and LANL Researchers Co-Author Book Chapter
As part of the ASCR base math program, a collaboration between Sandians Pavel Bochev, Denis Ridzal, and Guglielmo Scovazzi and Mikhail Shashkov of Los Alamos National Laboratory led to the development of a new class of optimization-based algorithms for constrained interpolation (remap). The Sandia-LANL team was invited to contribute a chapter describing this work to the second edition of the Springer book Flux-Corrected Transport: Principles, Algorithms, and Applications. The chapter was recently completed and will appear in the book later this year.
Contact: Pavel Bochev, pbboche@sandia.gov
 
Berkeley Lab Team Wins Best Paper Award at International Symposium
Khaled Ibrahim, Steven Hofmeyr, and Costin Iancu of Berkeley Lab’s Computational Research Division received the Best Paper Award at CCGRID’11, the IEEE/ACM International Symposium on Cluster, Cloud, and Grid Computing 2011. In their paper on “Characterizing the Performance of Parallel Applications on Multi-Socket Virtual Machines,” the authors described how virtualization allows easier resource management and consolidation and is a key enabling technology for cloud and green computing.
 
“High performance computing workloads typically pose many challenges to virtualized environments because they stress the memory system and network resources beyond the limits of commercial workloads,” they wrote. To address these problems, the researchers experimentally characterized the main causes of performance degradation of virtualization technologies on non-uniform memory access (NUMA) systems. They showed how to significantly improve performance in these environments using a specialized runtime mechanism that facilitates adherence to memory locality.
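The paper’s specific runtime mechanism is not reproduced here, but the general idea of keeping a process’s work close to its memory can be illustrated with a generic Linux affinity sketch; the sysfs path and node choice below are standard Linux conventions, and the approach is only a simplified stand-in for the authors’ technique.

```python
# Generic Linux illustration (not the authors' runtime mechanism): restrict
# the current process to the CPUs of one NUMA node so its memory accesses
# tend to stay local to that node.
import os

def cpus_of_node(node):
    """Read sysfs to find which CPUs belong to the given NUMA node."""
    cpus = set()
    with open("/sys/devices/system/node/node%d/cpulist" % node) as f:
        for part in f.read().strip().split(","):
            if "-" in part:
                lo, hi = part.split("-")
                cpus.update(range(int(lo), int(hi) + 1))
            else:
                cpus.add(int(part))
    return cpus

if __name__ == "__main__":
    node0_cpus = cpus_of_node(0)
    os.sched_setaffinity(0, node0_cpus)  # pin this process to node 0's CPUs
    print("Pinned to NUMA node 0 CPUs:", sorted(node0_cpus))
```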
 
Book on Science of Music Cites LBNL’s Keith Jackson’s Work with Mickey Hart
A few years ago, Berkeley Lab Computational Research Division computer scientist Keith Jackson, who is also a musician, contributed to “Rhythms of the Universe,” a musical project by Grateful Dead percussionist Mickey Hart to “sonify” the universe. The project was performed at the Cosmology at the Beach Conference held January 11-15, 2010 in Playa del Carmen, Mexico, sponsored by George Smoot’s Berkeley Center for Cosmological Physics. Jackson converted electromagnetic data from exploding supernova light waves into audio form by slowing down the frequencies and elongating, or “stretching,” the signal; Hart then took the sounds and incorporated them into his musical composition. (Read the 2010 Berkeley Lab news feature.)
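The basic “stretching” idea - replaying a signal whose frequencies lie far outside the audible band at an ordinary audio rate so that every frequency is rescaled into hearing range - can be sketched as below. The synthetic input signal, sampling rates, and output format are illustrative assumptions, not Jackson’s actual processing chain.

```python
# Illustrative sonification sketch (not Jackson's actual processing chain):
# "slow down" a signal whose frequencies are far above hearing by replaying
# its samples at an ordinary audio rate, dividing every frequency by the
# ratio of the original sampling rate to the playback rate.
import numpy as np
from scipy.io import wavfile

original_rate = 44_100_000   # assumed sampling rate of the measured data (Hz)
playback_rate = 44_100       # standard audio rate (Hz)
slowdown = original_rate / playback_rate   # frequencies drop by this factor

# Synthetic stand-in for measured data: a 440 kHz oscillation, far above hearing.
n_samples = original_rate // 10            # 0.1 s of data
t = np.arange(n_samples) / original_rate
signal = np.sin(2 * np.pi * 440_000.0 * t)

# Replaying the same samples at 44.1 kHz stretches 0.1 s of data into 100 s
# of sound and lowers the 440 kHz tone to an audible 440 Hz.
audio = np.int16(signal / np.abs(signal).max() * 32767)
wavfile.write("sonified.wav", playback_rate, audio)
print("Frequencies lowered by a factor of %.0f" % slowdown)
```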
 
That collaboration has been cited in the recent book The Power of Music: Pioneering Discoveries in the New Science of Song by Elena Mannes. (Read the excerpt.) According to Mannes, we are at a breakthrough moment in music research, for only recently has science sought in earnest to understand and explain the power of music and its connection to the body, the brain, and the world of nature. Music may even contain organizing principles of harmonic vibration that underlie the cosmos itself, as exemplified in Jackson and Hart’s supernova music.
 
Jay Srinivasan Named NERSC’s Computational Systems Group Lead
Jay Srinivasan has been selected as the Computational Systems Group Lead in the NERSC Systems Department. In this role, he will supervise the day-to-day operation of all of NERSC’s computer systems. Srinivasan has over 15 years of experience in high performance computing as both a user and an administrator. Since joining NERSC in 2001, he has worked on all the large systems from Seaborg (IBM SP) to Hopper (Cray XE6) and was the system lead for the Jacquard (Linux Networx) system. Most recently, he was the team lead for the PDSF cluster that supports nuclear physics and high energy physics. Prior to joining NERSC, Srinivasan worked at the Supercomputing Institute at the University of Minnesota, where he also received his Ph.D. in chemical physics.
 
Argonne Computational Scientist Discusses Issues in Software Tool Interoperability
Tim Tautges, a computational scientist in the Mathematics and Computer Science Division at Argonne National Laboratory, was one of three researchers recently featured in a Research Computing & Engineering (RCE) podcast. As part of the DOE SciDAC Interoperable Tools for Advanced Petascale Simulations (ITAPS) project, Tautges and his colleagues have been developing software components that will help applications scientists use various meshing algorithms without having to rewrite code or deal with infrastructure problems.
 
According to Tautges, this research is less about “blazing a path through the wilderness” - that is, running an application on the world’s fastest computer - and more about ensuring interoperability among multiple tools that applications developers can choose from, without being tied to specific data structures. The objective is to make tools usable for model development on a workstation all the way up to advanced simulation on a supercomputer. Tautges and his colleagues are working with scientists to introduce the software tools into many different application areas. Initial successes have been achieved in complex meshing problems arising in nuclear fusion, groundwater simulation, and computational biology.
Listen to the podcast.
 

FACILITIES/INFRASTRUCTURE:

New ESnet Portal Gives Users Insight into Network’s Inner Workings
As science becomes increasingly data-driven and collaborative, researchers depend ever more on fast and reliable networking. To provide its international user community with a clear and detailed view of the network’s status, the Department of Energy’s Energy Sciences Network (ESnet) has launched MyESnet, a portal offering a wide range of tools to advance the exchange of scientific information.
 
Managed by Lawrence Berkeley National Laboratory, ESnet provides the high-bandwidth, reliable connections that link scientists at national laboratories, universities and other research institutions.
 
“The MyESnet portal combines the two things central to ESnet’s mission - providing information and serving our diverse community,” said Steve Cotter, head of ESnet. “Over the years our users have requested the ability to better see and use our network, and we listened. This portal will help ESnet users become more familiar with our network and services, an important goal for ESnet.”
 

OUTREACH & EDUCATION:

ASCR Researchers Organize Women in Mathematics Workshop
Carol Woodward (LLNL) and Karen Devine (SNL), along with Cammey Cole Manning (Meredith College), Andrea Bertozzi (UCLA), and Maria Emelianenko (George Mason U.), organized the Association for Women in Mathematics’ Workshop for Graduate Students and Recent Ph.D.s at ICIAM’11 in Vancouver, BC, in July. In keeping with this year’s theme, “Opportunities Beyond Academia,” the workshop focused on career options in industry and the national laboratories, as well as strategies for academics to collaborate with non-academic researchers.
 
Featured speakers were Cynthia Phillips (SNL), Kristyn Maschhoff (Cray), Randy Leveque (U. Washington), and Brenda Dietrich (IBM). Seven recent Ph.D.s and 10 graduate students were funded to attend the workshop, where they presented their research and were paired with senior women in mathematics for mentoring and networking opportunities. The workshop was sponsored by the U.S. Office of Naval Research and the U.S. Department of Energy, as well as by the Centre de Recherche Mathématiques and the Pacific Institute of Mathematics.
Contact: Karen Devine, kddevin@sandia.gov
 
INCITE Manager Shares Best Practices for Allocating Computing Time
Julia White of Oak Ridge National Laboratory (ORNL) met with her counterparts at the HPCWorld consortium in Brussels, Belgium, on July 13 to share U.S. best practices in allocating access to high performance computers. White is program manager of the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, which is jointly run by the Argonne and Oak Ridge Leadership Computing Facilities and provides extremely large single-award allocations of computer time to researchers around the world who compete for the limited resource. Researchers’ requests total three times the available leadership-class computing time, according to White. The July meeting concluded an 18-month exchange of management practices for allocating time on supercomputers.
 
As the INCITE program is responsible for allocating time on some of the most powerful U.S.-based supercomputers for open science, White’s in-depth knowledge of allocation procedures and policies for gaining access to these resources was sought by HPCWorld. Some of INCITE’s best practices have already been adopted by corresponding programs in Europe.
Contact: Jayson Hines, hinesjb@ornl.gov
 
ORNL’s Industrial HPC Partnerships Program Reaches New Audiences
The ORNL Industrial HPC Partnerships Program has fostered collaboration and innovation between government and the private sector for the past two years. Recently, project director Suzy Tichenor reached out to new audiences as she traveled to Stuttgart, Germany, and Denver, Colorado, to showcase the program’s expansion and achievements.
 
The partnerships program aims to keep the United States competitive globally by giving companies access to leadership computing resources. “Our reach into industry is gaining visibility as we gain traction,” said Tichenor, who was the only Department of Energy (DOE) representative at the International Workshop for Industrial High-Performance Computing in Stuttgart. Representatives from university and government supercomputing centers in Europe, Asia, and the United States came together June 27–28 to present their respective programs, explain how companies use various supercomputing resources, and discuss industrial computing needs in different environments.
 
The next stop was Denver, where the sixth annual Scientific Discovery through Advanced Computing (SciDAC) conference convened July 10–14. SciDAC brought together a distinguished network of computational scientists and researchers for technical talks, collaboration, and networking. Tichenor chaired the conference’s industrial panel.
Contact: Jayson Hines, hinesjb@ornl.gov
 
Webinar Introduces ORNL’s Titan to User Community
Users got a first glimpse of ORNL’s next-generation leadership-class supercomputer at a July 26 webinar. In a significant step toward exascale computing, the OLCF is upgrading Jaguar, a Cray XT5 machine that is currently the third-fastest computer in the world. When the upgrades are completed, the new version, Titan, is expected to reach 10 to 20 thousand trillion calculations a second (10 to 20 petaflops) and achieve up to nine times the performance of today’s Jaguar.
 
Bronson Messer, a senior research and development staff member in the OLCF Scientific Computing Group, presented an overview of the transformation of Jaguar into Titan and the planned timeline for the staged upgrades. Robert Whitten, HPC user support specialist and education program manager for the OLCF, spoke about training opportunities and workshops for future Titan users. The 67 attendees included current and prospective users, representatives from academic institutions and industry, and personnel from DOE and National Science Foundation facilities.
 
“There was a need to inform users of our plans so they can prepare their plans and research goals around the Titan upgrade and the necessary interruptions to Jaguar and the attendant downtime and unavailability of the machines to the users,” said Whitten.
 
The Titan Summit was held August 15–17, 2011. Future workshops, both live and webcast, will be offered in October, November, and December, with the annual spring training taking place in March 2012.

 

 

 

 

 

 
