ASCR Monthly Computing News Report - April 2010




RESEARCH NEWS:

INCITE Project Determines Larger Protein Structures Using Backbone-Only Data
Using Argonne Leadership Computing Facility (ALCF) resources awarded through DOE’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, a team of researchers including David Baker of the University of Washington has successfully developed an algorithm to determine structures of large (100-200 amino acid) proteins with sparse backbone-only nuclear magnetic resonance (NMR) data. NMR protein structure determination becomes challenging for structures above ~150 amino acids in size, due to the slow tumbling time of large molecules. The team of INCITE researchers suggested that this problem can be overcome by relying on data from fully or partly deuterated proteins. This data is usually too sparse for conventional structure determination, but when combined with the Rosetta high-resolution modeling approach it is sufficient to solve the protein structures.
 
The results of their work were published as “NMR Structure Determination for Larger Proteins Using Backbone-Only Data” in the Feb. 19, 2010 issue of Science (Vol. 327, No. 5968, pp. 1014-1018), available at www.sciencemag.org (subscription required).
Contact: David Baker, dabaker@u.washington.edu
 
Agarwal Visualization Featured on Cover of Journal of Physical Chemistry B
A visualization by Pratul Agarwal of Oak Ridge National Laboratory (ORNL) in the article “Computational Identification of Slow Conformational Fluctuations in Proteins” was featured as the cover image for the Dec. 31, 2009 issue of Journal of Physical Chemistry B. The article discusses how computational simulations are used to identify long time-scale conformational fluctuations in the protein ubiquitin and the enzyme cyclophilin A. Results are providing new insights into how the internal motions promote protein function.
 
 
Berkeley Lab Staff Present Five Papers at IPDPS
Lawrence Berkeley National Laboratory (LBNL) Computing Sciences staff presented five papers at the 24th IEEE International Parallel and Distributed Processing Symposium (IPDPS), held April 19–23 in Atlanta:
  • Samuel Williams and Leonid Oliker co-authored “Optimizing and Tuning the Fast Multipole Method for State-of-the-Art Multicore Architectures” along with Aparna Chandramowlishwaran, Ilya Lashuk, George Biros, and Richard Vuduc of Georgia Tech.
  • Shoaib Kamil, Cy Chan, Leonid Oliker, John Shalf and Samuel Williams co-authored “An Auto-Tuning Framework for Parallel Multicore Stencil Computations.”
  • Andrew Uselton, Mark Howison, Nicholas J. Wright, David Skinner, Noel Keen, John Shalf, Karen L. Karavanic (of the San Diego Supercomputer Center) and Leonid Oliker co-authored “Parallel I/O Performance: From Events to Ensembles.”
  • Costin Iancu, Steven Hofmeyr, Yili Zheng and Filip Blagojevic co-authored “Oversubscription on Multicore Processors.”
  • Deb Agarwal and Keith Jackson, along with collaborators from the University of Virginia, Microsoft Research and UC Berkeley, co-authored “eScience in the Cloud: A MODIS Satellite Data Reprojection and Reduction Pipeline in Windows Azure Platform.”
 
Petascale Computer Modeling Tracking Contaminant Transport at Hanford
Recent findings using PFLOTRAN, a powerful massively parallel computer model, are strengthening researchers’ ability to accurately predict groundwater contaminant migration at the Department of Energy’s Hanford site, a former nuclear weapons production complex.
 
Researchers at Pacific Northwest and Los Alamos National Laboratories are using the PFLOTRAN code to simulate the migration of contaminants in groundwater at the Hanford 300 Area and estimate the release of uranium to the neighboring Columbia River. For a conceptual model depicting current conditions near the South Processing Pond—where a uranium plume intersects the Columbia and continuous source regions exist—PFLOTRAN estimated that uranium leaches into the river at a rate of 25 kg/year. The estimate is well within the range of estimates based on field studies (i.e., 20-50 kg/year by Peterson et al., 2009).
 
Stochastic simulations using PFLOTRAN demonstrated that the amount of uranium transported into the river is less sensitive to small-scale (e.g., meter-scale) geologic heterogeneity than originally thought. This result is surprising considering that the geology of the site is composed of heterogeneous cobbles, gravels, and fine sands. This lower sensitivity is likely due to the accumulation of leached uranium over a kilometer-scale region at the river’s edge, which is much larger than the scale of the heterogeneity in the model.
For more information, contact: Glenn Hammond, glenn.hammond@pnl.gov
 
Berkeley Water Center to Be Featured in Microsoft’s Silicon Valley TechFair
The Berkeley Water Center, a joint effort between LBNL’s Advanced Computing for Science Department and UC Berkeley with funding from Microsoft Research, will be highlighted at the May 6 Silicon Valley TechFair. The Berkeley Water Center promotes and supports collaborative, water-related research within the Berkeley research community. The LBNL project team has helped develop a data server providing an integrated view of discharge, precipitation, stream temperature and air temperature across California watersheds. This data server is now being used to support researchers working on a variety of questions, including the impact of frost protection pumping, the recovery of endangered fish populations, the long-term impact of human activity on a watershed, coastal lagoon dynamics, and the modeling of annual watershed water balance. The work has also been used to help develop the statewide recovery plan for California’s wild Coho salmon.
 
The work will be highlighted in the keynote address at the TechFair, and will be featured in both a demo and a five-minute video being prepared for the event to be held at the Microsoft Silicon Valley Campus in Mountain View, Calif. This year’s event will include a broad audience of academics, partners, industry and government representatives, and press.
Contact: Deb Agarwal, DAAgarwal@lbl.gov
 
Nimbus Cloud Deployment Wins Challenge Award at Grid5000 Conference
The Nimbus toolkit, developed by researchers at Argonne National Laboratory and the University of Chicago, played a major role at the Grid5000 conference held April 6-9 in Lille, France. Grid5000 is a highly reconfigurable testbed for studying large-scale parallel and distributed systems and comprises thousands of nodes geographically distributed over nine sites in France and one site in Brazil.
 
At the 2010 Grid5000 conference, Pierre Riteau, a student from the University of Rennes, used the unique properties of the testbed to deploy Nimbus over hundreds of nodes at three different Grid5000 sites and to create a distributed virtual cluster, a technique that makes multiple physical systems appear to function as a single logical system. The result was a distributed cloud deployment of unprecedented scale, which won him a Grid5000 Large Scale Deployment Challenge award.
 
Kate Keahey, a computer scientist in Argonne’s Mathematics and Computer Science Division and lead developer of Nimbus, said that the deployment of Nimbus on the Grid5000 was one of the largest to date, involving hundreds of nodes on each of the three Grid5000 sites. Moreover, deploying a virtual cluster over those sites creates a distributed yet easy-to-use environment with interesting properties — what has been described as a “sky computing” cluster, as it combines several clouds and opens up even more computational opportunities for scientists. Nimbus provides an “Infrastructure-as-a-Service” (IaaS) cloud computing solution, which includes an extensible architecture specifically designed for scientists, enabling them to customize components to meet large-scale project needs. Each cloud provides computing cycles and storage resources to support real-time demands. The paradigm of cloud computing, more familiar to the commercial world, has long been generating considerable interest in the scientific community.
 
LBNL Analytics Staff Speak at SIAM Conference on Imaging Science
Daniela Ushizima and Mark Howison, both members of the National Energy Research Scientific Computing Center (NERSC) Analytics Team and LBNL Computational Research Division (CRD) Visualization Group, presented papers in a session (organized by Ushizima) on “Modeling and Analysis of Biomedical Images” at the SIAM Conference on Imaging Science (IS10), April 12-14 in Chicago.
 
Ushizima presented a paper on “Retinopathy Diagnosis from Ocular Fundus Image Analysis,” co-authored with Fatima Medeiros of the Federal University of Ceará, Brazil. Howison presented his paper on “Comparing GPU Implementations of Bilateral and Anisotropic Diffusion Filters for 3D Biomedical Datasets.”
 

PEOPLE:

LBNL’s Horst Simon Joins Scientific Discussion with German Chancellor Merkel
During a two-hour visit to Berkeley Lab on April 15, German Chancellor Angela Merkel met with Lab Director Paul Alivisatos and several German scientists at the lab, including Associate Lab Director for Computing Sciences Horst Simon. In a frank discussion, Merkel was interested in learning whether approaches used to foster research innovation at LBNL could be adapted to strengthen Germany’s scientific community and infrastructure. Merkel also posed for a group photo with more than 70 German-born scientists now working at Berkeley Lab.
 
LBNL’s Hank Childs Gives Four Presentations on Path to Petascale Visualization
Hank Childs of Berkeley Lab’s Visualization Group has been on the road, giving a number of presentations on how the lab’s visualization experts are preparing to analyze and visualize data at the petascale.
 
Childs will give a presentation at EGPGV, the Eurographics Symposium on Parallel Graphics and Visualization, May 2–3 in Norrköping, Sweden. The paper, “MPI-Hybrid Parallelism for Volume Rendering on Large, Multi-Core Systems,” was written by Mark Howison, Wes Bethel and Childs, all of Berkeley Lab’s Visualization Group. The paper presents a study comparing a traditional MPI-only implementation of raycasting volume rendering with one designed and implemented using hybrid parallelism, a blend of MPI and shared-memory parallelism, in terms of absolute runtime, memory footprint, and communication load (a brief, generic sketch of the hybrid pattern appears after the list below). The team’s results show scalability out to 216K-way parallelism, six times greater than anything previously published, and also reveal that the hybrid-parallelism approach offers distinct performance and resource utilization advantages over the MPI-only approach. These results are significant because they help lay the groundwork for visualization codes to move beyond the petascale. In addition:
  • On April 13, Childs presented “Experiments with Pure Parallelism: Results from the VisIt Hero Runs” at the DOE Computer Graphics Forum in Park City, Utah.
  • At the NSF Extreme Scale I/O and Data Analysis Workshop held March 22-24 in Austin, Texas, Childs presented “Petascale I/O Impacts on Visualization.”
  • At the SIAM Parallel Processing for Scientific Computing (PP10) conference held February 24–26 in Seattle, Childs presented “The Challenges Ahead for Visualizing and Analyzing Massive Data Sets.”
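For readers who want a concrete picture of what “hybrid parallelism” means here, the following is a minimal, generic sketch of the MPI-plus-OpenMP pattern. It is illustrative only, not the authors’ volume-rendering code; the loop, problem size, and reduction are hypothetical stand-ins for real per-rank work.

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        /* Request an MPI threading level that allows OpenMP threads inside each rank. */
        int provided;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

        int rank, nranks;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        /* Hypothetical per-rank workload: the OpenMP threads within a rank share
           this work through shared memory, while ranks exchange results only via MPI. */
        const int N = 1000000;
        double local_sum = 0.0;

        #pragma omp parallel for reduction(+:local_sum)
        for (int i = 0; i < N; i++) {
            local_sum += (double)i / (double)N;   /* stand-in for per-sample work */
        }

        /* One MPI message per rank, instead of one per core, reduces communication load. */
        double global_sum = 0.0;
        MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("ranks=%d threads/rank=%d sum=%f\n",
                   nranks, omp_get_max_threads(), global_sum);

        MPI_Finalize();
        return 0;
    }

Built with an MPI compiler wrapper and OpenMP enabled (for example, mpicc -fopenmp), each node typically runs one MPI rank whose threads share memory, so inter-node communication involves fewer, larger messages than a pure-MPI run; that is the kind of resource-utilization advantage the paper quantifies.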
 

FACILITIES/INFRASTRUCTURE:

NERSC and JGI Join Forces to Tackle Genomics HPC
A torrent of data has been flowing from the advanced sequencing platforms at the Department of Energy’s Joint Genome Institute (JGI), among the world’s leading generators of DNA sequence information for bioenergy and environmental applications. Last year, JGI generated over one trillion nucleotide letters of genetic code for its various user programs, an eight-fold increase in productivity from 2008. This year JGI expects to sequence five times more data than the previous year, producing more than a petabyte of data.
 
To ensure that there is a robust computational infrastructure for managing, storing and gleaning scientific insights from this data, JGI is joining forces with the NERSC Division at Berkeley Lab. The NERSC Division will perform this work on a cost-recovery basis similar to the way it manages and supports the Parallel Distributed Systems Facility (PDSF) cluster for High Energy and Nuclear Physics research. Computing systems will be split between JGI’s campus in Walnut Creek, Calif. and NERSC’s Oakland Scientific Facility, which are 20 miles apart. The NERSC Division will also manage JGI’s six-person systems staff to integrate, operate and support these systems.
 
ESnet Reaches First Milestone in ANI Deployment with DWDM Installation
In March of this year, the DOE’s Energy Sciences Network (ESnet) completed the first milestone in constructing its Advanced Network Initiative (ANI) testbed by installing Infinera’s dense wavelength-division multiplexing (DWDM) equipment. DWDM refers to optical networking systems that can send large volumes of data over multiple wavelengths of light on a single fiber. The ANI testbed will support research on the data, control, management, authentication/authorization and service planes. It will initially be capable of running 10 Gbps circuits, and eventually 40 Gbps and 100 Gbps circuits.
 
Funded by $62 million under the American Recovery and Reinvestment Act (Recovery Act), the ANI testbed will be a community resource for scientists, technologists, and industry to conduct network, middleware, and application research and development. The ANI testbed, launched in September 2009, is being designed, built and operated by ESnet network engineers. The testbed will start out as a tabletop testbed at Berkeley Lab, and later be deployed in the metro area. ESnet is funded by the DOE Office of Advanced Scientific Computing Research and managed by Berkeley Lab.
 
OLCF Completing Chilled Water Upgrade
The Oak Ridge Leadership Computing Facility (OLCF), home of the Cray XT5 known as Jaguar, has upgraded its existing chilled water loop system. The upgrade will add a new 12-inch supply and return line to the existing chilled water loop in the OLCF’s Computer Science Building (CSB) for greater cooling capacity. Specifically, the new supply line provides enough chilled water to remove 5 megawatts of heat, or about two-thirds of Jaguar’s total cooling requirement. The purpose of the upgrade is three-fold: it will give the OLCF the ability to further segment the room that houses the center’s supercomputers as necessary; it will allow maintenance of the CSB’s central energy plant, where the chillers are housed, without disrupting the systems currently running simulations; and the new cooling capability will provide greater redundancy in the chilled water system, immediately benefiting a new supercomputing system planned for the CSB this summer. All in all, the upgrade will give the OLCF’s chilled water system a total cooling capacity of 23 megawatts.
Contact: Jayson Hines, hinesjb@ornl.gov
 
ORNL Adds High-Capacity Data Storage Library
As the power of supercomputers housed at Oak Ridge National Laboratory (ORNL) increases, the amount of data they generate is also growing rapidly. In response, the Oak Ridge Leadership Computing Facility (OLCF) recently installed a new High Performance Storage System (HPSS) tape library to accommodate additional growth within the HPSS archive. With a footprint of 24 feet by 6 feet, the library will eventually hold up to 10 thousand trillion bytes of data (10 petabytes)—roughly equivalent to the size of 6 million downloaded movies.
 
The library went into service April 14 with 16 tape drives. It can hold up to 64 drives, each capable of moving 120 megabytes of data per second, for a total bandwidth of over 7 gigabytes a second. It is the first of two new tape libraries planned for ORNL, joining three older libraries to serve ORNL leadership systems such as Jaguar, operated by the OLCF and ranked number one in the world, and Kraken, operated by the National Institute for Computational Sciences and ranked number three in the world. According to ORNL’s Kevin Thach, two of those earlier libraries are full, and the third will be full by the end of 2010. The library that went into service April 14 will most likely be full sometime in mid-2011. “We’re roughly doubling in size each year,” Thach said. “This time next year we expect to be at 20 petabytes.”
Contact: Jayson Hines, hinesjb@ornl.gov
 
DOE EStar Award Recognizes Innovative Supercomputer Cooling
An innovative, energy-saving approach to cooling Argonne’s Blue Gene/P supercomputer was recognized with an Environmental Sustainability (EStar) award from the U.S. Department of Energy’s (DOE) Office of Science. EStar awards highlight environmental sustainability projects and programs that reduce environmental impacts, enhance site operations, reduce costs, and demonstrate excellence in pollution prevention and sustainable environmental stewardship.
 
The Argonne project was one of just five EStar awards given to the DOE’s Office of Science laboratories. A total of 127 projects from across the country were nominated for the awards.
 
Much of the electricity needed to operate a supercomputer is used to cool the machinery. In colder weather, Argonne saves as much as $25,000 per month in electricity costs by leveraging the Chicago area’s cold climate to chill, at no cost, the water used to cool the supercomputer. That is in addition to the millions of dollars saved by the energy-efficient architecture of the Blue Gene/P, which uses about one-third as much electricity as a comparable supercomputer.
 
INCITE Program Accepting Proposals for Access to World-Class HPC Resources
On April 14, the U.S. Department of Energy (DOE) announced that it was accepting proposals for the 2011 Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. Aimed at advancing high-impact science via some of the world’s most powerful supercomputers located at DOE-supported national laboratories, INCITE helps large-scale, computationally intensive projects address some of the most pressing contemporary issues in science and engineering. Compared with the 470 million hours allocated in 2009, the 2010 INCITE projects represent nearly a 50 percent increase in processor-hour allocations at the Oak Ridge Leadership Computing Facility (OLCF).
 
Through INCITE, the OLCF is currently providing 45 research teams with approximately 950 million processor hours on the Cray XT5 known as Jaguar—the fastest supercomputer in the world with a peak performance speed of 2.33 petaflops. Current INCITE projects at the OLCF span the range of the domain sciences, as researchers use Jaguar to search for cleaner nuclear technologies, investigate the molecular structure of solar cells, and study energy storage in carbon nanotubes, to name a few. INCITE applicants can request allocations for one to three years, and current DOE sponsorship is not required for the program. Researchers may submit proposals until June 30. Recipients are expected to be announced in November. For help with a proposal for an INCITE allocation at OLCF, please contact help@nccs.gov.
 

OUTREACH & EDUCATION:

ALCF, OLCF to Host INCITE Proposal Writing Webinar on May 17th
Over one billion processing hours are available through DOE’s INCITE program for 2011. To help researchers submit their strongest proposals, an INCITE 2011 Proposal Writing Webinar will be held from 2:30-4:30 p.m. May 17 at Argonne National Laboratory. Katherine Riley, scientific applications engineer at the Argonne Leadership Computing Facility (ALCF), and Bronson Messer, computational astrophysicist in the Scientific Computing Group at ORNL’s National Center for Computational Sciences, will provide tips and suggestions to improve the quality of INCITE proposal submissions. INCITE 2011 proposals are due June 30, 2010. To register online, go to the ALCF website. When you register, please indicate if you are planning to attend in person or via the webinar tool.
 
ASCR-Supported Researchers Organize Copper Mountain Conference
Ray Tuminaro (Sandia National Laboratories) and Howard Elman (University of Maryland) co-organized the 11th Copper Mountain Conference on Iterative Methods, held April 4-9 in Copper Mountain, Colorado. This year’s conference was one of the larger meetings in recent memory, with 140 presentations and 187 registered participants. More than 50 students attended, and a student paper competition received 18 highly regarded submissions. Organized sessions covered a number of diverse areas such as multicore architectures, large-scale optimization and inverse problems, nonlinear solvers, multi-physics applications, magnetohydrodynamics, environmental science, multigrid for large data sets, and Krylov solver re-use.
 
This meeting is organized by Front Range Scientific Computations, Inc. and receives financial support from the Department of Energy, the National Science Foundation, IBM, and Lawrence Livermore, Los Alamos and Sandia national laboratories. It is held in cooperation with the Society for Industrial and Applied Mathematics (SIAM), and a special issue of the SIAM Journal on Scientific Computing is devoted to the meeting.
 
ALCF Offers Leap to Petascale Workshop in May
The Argonne Leadership Computing Facility (ALCF) is offering a “Leap to Petascale” workshop on May 18-20 at Argonne National Laboratory. The workshop is aimed at discretionary project teams that have already scaled to at least two racks on the ALCF’s Blue Gene/P, Intrepid. The bulk of this specialized workshop will be devoted to tuning applications to scale up to 40 racks on Intrepid. In addition:
  • ALCF’s performance engineers and computational scientists will provide hands-on support.
  • Special queues will be available to accommodate full-scale runs.
  • Tool and debugger developers will be on hand to assist users.
  • Presentations on system architecture, tools, and debuggers will be made as needed.
As part of registration, users will be asked to provide basic information about their current projects.
Contact: David Martin, dem@alcf.anl.gov or Chel Lancaster, chel@alcf.anl.gov
 
INCITE Manager Participates in Key Summit
Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program manager Julia White attended the High-Performance Computing (HPC) World consortium in Bologna, Italy, February 25–26 as a representative of the Oak Ridge and Argonne Leadership Computing Facilities’ INCITE programs. HPC World is a consortium of six major players in HPC—Italy, Spain, Germany, the United States, New Zealand, and France—with the goal of creating a model of best operating practices, including allocation of time on HPC resources. White was one of the few external members invited to the event.
 
“The consortium is looking at other successful programs and defining best practices,” White said. “The INCITE program has been very successful—INCITE allocations on the DOE leadership-class systems are among the largest awards of computer time made anywhere in the world—and we have an incredibly rigorous and competitive proposal and review process.”
 
The first HPC World consortium took place at the SC09 supercomputing conference in Portland, Oregon, which White also attended.
Contact: Jayson Hines, hinesjb@ornl.gov
 
Berkeley Lab’s Alice Koniges Discusses Fusion Computing at Sherwood Conference
Alice Koniges of NERSC gave an invited presentation at the 2010 International Sherwood Fusion Theory Conference, held April 19–21, 2010 in Seattle, Washington. The title of her talk was “What’s Ahead for Fusion Computing.” It was co-authored with John Shalf and Robert Preissl of NERSC, Stephane Ethier of the Princeton Plasma Physics Laboratory, and the Cray Center of Excellence at NERSC and NERSC Cloud Computing teams.

 

 
