ASCR Monthly Computing News Report - November-December 2009



Special Section: DOE Labs Showcase Leadership at SC09 Conference

ORNL-Led Team Takes Prize for World’s Fastest Science App

A team led by ORNL’s Markus Eisenbach was named winner of the 2009 ACM Gordon Bell Prize, which honors the world’s highest-performing scientific computing applications. Another team led by ORNL’s Edo Aprà was also among nine finalists for the prize.  Results of the contest were announced Nov. 19 at the SC09 international supercomputing conference in Portland, Oregon.  The prize is supported by high-performance computing pioneer Gordon Bell and is administered by the Association for Computing Machinery.

Eisenbach and colleagues from ORNL, Florida State University, the Institute for Theoretical Physics and the Swiss National Supercomputing Center achieved 1.84 thousand trillion calculations per second—or 1.84 petaflops—using an application that analyzes magnetic systems and, in particular, the effect of temperature on these systems.  By accurately revealing the magnetic properties of specific materials—even materials that have not yet been produced—the project promises to boost the search for stronger, more stable magnets, thereby contributing to advances in such areas as magnetic storage and the development of lighter, stronger motors for electric vehicles.  The application—known as WL-LSMS—achieved this performance on ORNL’s Jaguar, making use of more than 223,000 of Jaguar’s 224,000-plus available processing cores and reaching nearly 80 percent of Jaguar’s theoretical peak performance of 2.33 petaflops.  Aprà’s team—the other finalist led by an ORNL researcher—achieved 1.39 petaflops on Jaguar in a first principles, quantum mechanical exploration of the energy contained in clusters of water molecules.

 
IBM, LBNL Simulation of Cat-Size Cortex Wins Gordon Bell Prize

A team of researchers from the IBM Almaden Research Center and the Lawrence Berkeley National Laboratory won the prestigious ACM Gordon Bell Prize in the special category for their development of innovative techniques that produce new levels of performance on a real application.  This year’s prize winners were announced Thursday, Nov. 19, 2009 at the awards session of the SC09 conference in Portland.  The ACM Gordon Bell Prize annually recognizes the best performance of scientific applications on supercomputers.  To test the hypotheses of brain structure, dynamics and function, the team built a cortical simulator called C2 that incorporates a number of innovations in computation, memory, and communication as well as sophisticated biological details from neurophysiology and neuroanatomy, and tested it on Lawrence Livermore National Laboratory’s “Dawn” Blue Gene/P supercomputer with 147,456 CPUs and 144 terabytes of main memory.

The IBM scientists also worked closely with researchers from Stanford University to noninvasively measure and map the connections between all cortical and sub-cortical locations within the human brain using magnetic resonance diffusion weighted imaging.  This information was used to develop an algorithm that exploits the Blue Gene supercomputing interconnection architecture.  Mapping the wiring diagram of the brain is crucial to untangling its vast communication network and understanding how it represents and processes information.

 
OLCF Triumphs at HPC Challenge Awards

The Oak Ridge Leadership Computing Facility (OLCF) dominated this year’s High-Performance Computing (HPC) Challenge awards.  Results of the “Best Performance” awards, which measure excellence in handling computing workloads, were announced Nov. 17 at SC09. ORNL’s Jaguar took home the lion’s share of the honors, with three gold medals and one bronze.  Jaguar won first place for speed in solving a dense matrix of linear algebra equations by running the HPL software code at 1,533 teraflop/s (trillion floating point operations per second).  Jaguar also ranked first for sustainable memory bandwidth by running the STREAM code at 398 terabytes per second.  STREAM measures how fast a node can fetch and store information.  Jaguar’s third gold was for executing the Fast Fourier Transform (FFT), a common algorithm used in many scientific applications, at 11 teraflop/s.

 
SDSC, UC San Diego, LBNL Team Wins SC09 'Storage Challenge' Award

A research team from the San Diego Supercomputer Center (SDSC) at UC San Diego and Lawrence Berkeley National Laboratory won the Storage Challenge competition at SC09, the conference on high-performance computing, networking, storage and analysis.  The team based its Storage Challenge submission on the architecture of SDSC’s recently announced Dash high-performance computer system, a “super-sized” version of the flash memory-based storage found in devices such as laptops, digital cameras and thumb drives, which also employs vSMP Foundation software from ScaleMP, Inc. to provide virtual symmetric multiprocessing capabilities.  Dash is the prototype for a much larger flash-memory HPC system called Gordon, which is scheduled to go online in mid-2011 at SDSC.

Berkeley Lab’s Peter Nugent and Janet Jacobsen provided the project with a production database for the data challenge on short notice.  They created a snapshot of the Palomar Transient Factory (PTF) database at NERSC, which contains the results of processing images from the PTF sky survey as well as the results of analyzing those images for transient objects.  One of the keys to identifying transient objects for follow-up telescope time is searching rapidly through the 100 million “candidate” transient objects in the database.  For supernovae, it is particularly important to detect the event before it reaches peak brightness.  This is why being able to test the speed of query execution on the Dash compute system was of interest to the PTF collaboration, Jacobsen said.

Read more at: http://www.lbl.gov/cs/Archive/news112509.html
 
StarGate Demos How to Keep Astrophysics Data Out of Archival “Black Holes”

As both an astrophysicist and director of the San Diego Supercomputer Center (SDSC), Mike Norman understands two common perspectives on archiving massive scientific datasets.  During a live demonstration at the SC09 conference of streaming data simulating cosmic structures of the early universe, Norman said that some center directors view their data archives as “black holes,” where a wealth of data accumulates and needs to be protected.

But as a leading expert in the field of astrophysics, he sees data as intellectual property that belongs to the researcher and his or her home institution - not the center where the data was computed.  Some people, Norman says, claim that it’s impossible to move those terabytes of data between computing centers and where the researcher sits.  But in a live demo in which data was streamed over a reserved 10-gigabit-per-second connection provided by DOE’s ESnet (Energy Sciences Network), Norman and graduate assistant Rick Wagner showed that it can be done.  While the scientific results of the project are important, the success in building reliable high-bandwidth connections linking key research facilities and institutions addresses a problem facing many science communities.

“This couldn’t have been done without ESnet,” Wagner said.  Two aspects of the network came into play.  First, ESnet operates the circuit-oriented Science Data Network, which provides dedicated bandwidth for moving large datasets.  Second, with numerous projects filling the network much of the time for other demos and competitions at SC09, Norman and Wagner took advantage of OSCARS, ESnet’s On-Demand Secure Circuits and Advance Reservation System, to reserve dedicated bandwidth for their demonstration.

 
INCITE Program Receives HPCwire Reader’s Choice Award

DOE's INCITE (Innovative and Novel Computational Impact on Theory and Experiment) program received an HPCwire Readers’ Choice award for “best HPC collaboration between government and industry.” Since 2003, the INCITE program has given large-scale, computationally intensive research projects access to America’s premier leadership computing facility (LCF) centers, established and operated by the Department of Energy’s Office of Science.  In 2009 through INCITE, the LCF centers at Argonne and Oak Ridge national laboratories and the National Energy Research Scientific Computing Center at Berkeley Lab allocated more than 900 million processing hours to projects from universities, industry, and government agencies with the potential to significantly advance key areas in science and engineering.

Tomas Tabor, publisher of HPCwire, announced the award during the Nov. 16 Monday night gala opening of the SC09 conference.  HPCwire is a leading news and information website covering the HPC community.

 
ANL’s Green Computing Wins 2009 HPCwire Readers’, Editors’ Choice Awards

Argonne National Laboratory has been awarded HPCwire’s Readers’ Choice Award for Best Application of Green Computing.  The award was presented by Tomas Tabor, publisher of HPCwire, at SC09, the annual supercomputing conference held Nov. 14-20 in Portland, Ore.  The annual HPCwire Readers’ and Editors’ Choice Awards are determined through online polling of the global HPCwire audience, along with a rigorous selection process involving HPCwire editors and industry luminaries.

Argonne compute and storage systems have “smart power” management functionality that allows them to turn off or throttle back the power consumption.  The ALCF is home to Intrepid, an energy-efficient IBM Blue Gene/P supercomputer, which uses about one-third as much electricity as a comparable supercomputer.  Argonne achieves savings in energy through a variety of innovative operational techniques, including methods employed to cool the supercomputer - a process that normally requires more electricity than powering the machine itself.  For example, the ALCF saves up to tens of thousands of dollars a month in electricity costs during the winter months by using the Chicago area’s frigid temperatures to chill the water used to cool Intrepid, alleviating the need to run power-hungry centrifugal chillers.  In addition, ALCF is working with IBM to use the warmest possible water temperature necessary to effectively cool the computer systems, leading to even greater savings and reduced environmental impact.

Contact:  Pete Beckman, beckman@alcf.anl.gov
 
ORNL Computing Garners Awards from Online Computing Publications

ORNL was named by readers of insideHPC to receive the publication’s first-ever HPC Community Leadership Award.  “ORNL has blazed a trail at the very high end of supercomputing in recent years,” said John West, the editor of insideHPC (http://www.insidehpc.com).  “Bringing together the expertise, funding, and organizational resources to build a record of sustained accomplishment at this level is a truly remarkable achievement.” West presented the award to Jeff Nichols, ORNL’s associate laboratory director for computing and computational sciences, November 16 at SC09 in Portland, Ore. “InsideHPC’s readers have highlighted the confidence that the supercomputing community has not only in what ORNL has already accomplished, but in the leadership they will provide in the future,” West said.

Jaguar and ORNL also received an HPCwire Editors’ Choice Award for “Top Supercomputing Achievement.” The award was presented Nov. 19 at SC09 to Nichols by Tomas Tabor, publisher of HPCwire (http://www.HPCwire.com).

 
LLNL Leads Development of New Tools for Debugging Large-Scale Applications

At the SC09 conference, Livermore researchers Dong Ahn, Bronis de Supinski, Ignacio Laguna, Greg Lee, and Martin Schulz, along with their collaborators Ben Liblit and Barton Miller, presented a paper, “Scalable Temporal Ordering for Large Scale Debugging,” on their scalable temporal order analysis technique, which supports debugging of large-scale applications by classifying MPI tasks based on their logical program execution order.  Their approach combines static analysis techniques with dynamic analysis to determine this temporal order scalably.  It uses scalable stack trace analysis techniques to guide selection of critical program execution points in anomalous application runs.  The novel temporal ordering engine then leverages this information along with the application’s static control structure to apply data flow analysis techniques to determine key application data such as loop control variables.  The team then uses lightweight techniques to gather the dynamic data that determines the temporal order of the MPI tasks.  Their evaluation, which extends the Stack Trace Analysis Tool (STAT), demonstrates that this temporal order analysis technique can isolate bugs in benchmark codes with injected faults as well as a real-world hang case with AMG2006.
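
The central idea, ranking MPI tasks by how far they have progressed through the application’s control structure using the loop control variables recovered by the analysis, can be illustrated with a small sketch.  The Python fragment below is purely illustrative (the task records and field names are hypothetical) and is not part of STAT or the paper’s implementation.

from collections import defaultdict

def order_tasks(task_info):
    """Group MPI ranks by logical progress and return the groups from
    least to most progressed.  task_info maps rank -> (call_path,
    loop_counters); lexicographic order of (loop_counters, call_path)
    stands in for the paper's temporal order.  Hypothetical data layout."""
    groups = defaultdict(list)
    for rank, (call_path, counters) in task_info.items():
        groups[(tuple(counters), call_path)].append(rank)
    return [ranks for _, ranks in sorted(groups.items())]

if __name__ == "__main__":
    tasks = {
        0: (("main", "solve", "MPI_Allreduce"), (42,)),
        1: (("main", "solve", "MPI_Allreduce"), (42,)),
        2: (("main", "solve", "exchange_halo"), (41,)),  # lagging task
    }
    print(order_tasks(tasks))  # the least-progressed rank appears in the first group

In a hang, the least-progressed group is a natural place to start looking for the root cause, which is the kind of guidance the temporal ordering provides at scale.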

Contact:  Lori Diachin, diachin2@llnl.gov
 
Labs Transfer Climate Data at 16 Gbps to SC09 Exhibition Floor

Competing in the SC09 Bandwidth Challenge, a team from LBNL, ANL, LLNL and the University of Utah achieved their goal of transferring 10 TB of climate data managed by the Earth System Grid (ESG) in about 1.4 hours from three participating sites.  This amounts to a sustained transfer rate of 16 Gbps.  The transfer was accomplished over two 10 Gbps connections.  The rate per connection used 80 percent of the available bandwidth, a remarkable achievement given the overhead of storage and network setup per transfer.
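
As a back-of-envelope check of the quoted figures (this arithmetic is ours, not part of the team’s submission):

\[
10\ \mathrm{TB} = 8\times 10^{13}\ \mathrm{bits},
\qquad
\frac{8\times 10^{13}\ \mathrm{bits}}{1.4 \times 3600\ \mathrm{s}}
\approx 1.6\times 10^{10}\ \mathrm{bits/s} \approx 16\ \mathrm{Gbps},
\]

or roughly 8 Gbps on each of the two 10 Gbps connections, consistent with the 80 percent utilization figure.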

This result was made possible by a combination of several technologies: Network reservation by OSCARS at ESnet, GridFTP for file transfer from ANL, BDM (Bulk Data Mover) for management of concurrent transfers from LBNL, and NetLogger for monitoring performance, also from LBNL.  Tools from the SciDAC VACET center were used to analyze and visualize the data.  Key people from LBNL involved in this challenge were: Eli Dart (ESnet OSCARS), Dan Gunter (NetLogger), Alex Sim, Viji Natarajan (BDM), Jason Hick (storage, NERSC), and Matt Andrews (data nodes, NERSC).  Key people from other institutions were: Raj Kettimuthu, Mike Links (GridFTP, ANL), Dean Williams, and Jeff Long (climate data, LLNL), Peer-Timo Bremer (Visualization, LLNL), Valerio Pascucci (Visualization, University of Utah), and Mark Adams (storage, Data Direct Networks).

 

RESEARCH NEWS:

A Superbright Supernova That’s the First of Its Kind

An extraordinarily bright, extraordinarily long-lasting supernova named SN 2007bi, snagged in a search by a robotic telescope, turns out to be the first example of the kind of stars that first populated the Universe.  The superbright supernova occurred in a nearby dwarf galaxy, a kind of galaxy that’s common but has been little studied until now, and the unusual supernova could be the first of many such events soon to be discovered.

SN 2007bi was found early in 2007 by the international Nearby Supernova Factory (SNfactory) based at Lawrence Berkeley National Laboratory (LBNL).  The supernova’s spectrum was unusual, and astronomers at the University of California at Berkeley subsequently obtained a more detailed spectrum.  Over the next year and a half the Berkeley scientists participated in a collaboration led by Avishay Gal-Yam of Israel’s Weizmann Institute of Science to collect and analyze much more data as the supernova slowly faded away.

Rollin Thomas of LBNL’s Computational Research Division, a member of the Lab’s Computational Cosmology Center and the SNfactory, aided the early analysis, using the Cray XT4 supercomputer “Franklin” at NERSC to run a code he developed to generate numerous synthetic spectra for comparison with the real spectrum.  The analysis indicated that the supernova’s precursor star could only have been a giant weighing at least 200 times the mass of our Sun and initially containing few elements besides hydrogen and helium – a star like the very first stars in the early Universe.

 
Berkeley Lab Experts Lead Development of Scientific Data Management Book

A new book, Scientific Data Management: Challenges, Technology, and Deployment, edited by Arie Shoshani and Doron Rotem, of the Berkeley Lab’s Scientific Data Management Group, provides a comprehensive understanding of the challenges of managing data during scientific exploration processes, from data generation to data analysis.  With real-world examples of applications drawn from high energy physics, material science, astrophysics, cosmology, climate modeling, ecology, biology and more, the book presents effective cutting-edge technologies that address challenges to data management, including storage and file systems; efficient retrieval, analysis and visualization techniques; and scientific workflows and provenance management.

The book, published in early December, is a volume in the Chapman & Hall/CRC Computational Science Series, which is edited by Horst Simon of Lawrence Berkeley National Laboratory. For more information, see the Amazon listing.

Contact:  Arie Shoshani, shoshani@lbl.gov
 
LLNL Researchers Develop Improved Discretization for Seismic Wave Simulations

Livermore mathematicians Anders Petersson and Bjorn Sjogreen recently completed work on the development of a new energy-conserving discretization for the elastic wave equation for composite and refined grids.  These grids consist of a set of structured rectangular component grids with hanging nodes on the grid refinement interface.  Previously developed summation-by-parts properties were generalized to devise a stable, second-order accurate coupling of the solution across mesh refinement interfaces.  The discretization of singular source terms of point force and point moment tensor type has also been studied on these grids.  Previous single-grid formulas have been generalized to work in the vicinity of grid refinement interfaces based on enforcing discrete moment conditions that mimic properties of the Dirac distribution and its gradient.  These source discretization formulas are shown to give second-order accuracy in the solution, with the error being essentially independent of the distance between the source and the grid refinement boundary.  This work has been submitted to the journal Communications in Computational Physics.
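
To give a flavor of the discrete moment conditions involved, consider a one-dimensional sketch (illustrative only; the paper’s formulas treat the full elastic case near refinement interfaces).  A point source at x_s is represented on a grid with spacing h by weights d_j chosen so that

\[
h\sum_j d_j = 1,
\qquad
h\sum_j d_j\,(x_j - x_s)^k = 0,\quad k = 1,\dots,p,
\]

which mimics the defining property \(\int \delta(x-x_s)\,\phi(x)\,dx = \phi(x_s)\) to order p for smooth test functions \(\phi\); analogous moment conditions are used for the gradient of the Dirac distribution, which enters through the point moment tensor source.  Near a refinement interface the same conditions must be enforced on a nonuniform stencil, which is what the generalized formulas provide.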

Contact:  Lori Diachin, diachin2@llnl.gov
 
LLNL, LBNL Researchers Develop New Methods for Processing Massive Datasets

Livermore researchers Martin Isenburg and Peter Lindstrom and Berkeley Lab researcher Hank Childs developed a streaming, parallel algorithm for padding subdomains with one or more layers of ghost data to allow subsequent stream processing of independent subdomains on a parallel machine.  This algorithm has been integrated into VisIt, enabling VisIt to visualize and analyze huge datasets both in parallel and out-of-core.  VisIt, like other production parallel visualization tools, previously required the whole dataset to fit in aggregate memory before ghost data could be computed and communicated, and before subdomains could be streamed one at a time through an analysis pipeline.  With the new ghost data computation being executed out-of-core, data analysis can now begin virtually immediately, and on as many processors as are available.
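
The general pattern, streaming one ghost-padded subdomain at a time through an analysis pipeline rather than loading the whole dataset, can be sketched in a few lines.  The example below is a deliberately simplified one-dimensional illustration (with NumPy’s gradient as a stand-in analysis); it does not reflect VisIt’s actual data model or the paper’s parallel algorithm.

import numpy as np

def stream_subdomains(data, num_domains, ghost=1):
    """Yield (lo, hi, padded_block, offset) for each subdomain of a 1-D
    array, where the block carries `ghost` extra samples from each
    neighbor so it can be processed independently of the rest."""
    n = len(data)
    size = n // num_domains
    for d in range(num_domains):
        lo = d * size
        hi = (d + 1) * size if d < num_domains - 1 else n
        glo, ghi = max(0, lo - ghost), min(n, hi + ghost)
        yield lo, hi, data[glo:ghi], lo - glo

if __name__ == "__main__":
    field = np.sin(np.linspace(0.0, 10.0, 1000))
    grads = np.empty_like(field)
    for lo, hi, block, off in stream_subdomains(field, num_domains=8, ghost=1):
        g = np.gradient(block)                 # analysis runs on the padded block
        grads[lo:hi] = g[off:off + (hi - lo)]  # keep only the interior values
    print(grads[:5])

Because each padded block is self-contained, the analysis can begin as soon as the first block is read, which is the essence of the out-of-core approach described above.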

Linear scalability has been demonstrated, from a single processor to 240 processors, in isocontouring and gradient computations on a 27-billion-zone dataset using a small memory footprint of 250 MB per processor.  By comparison, VisIt’s in-core method required a minimum of 128 processors with 6 GB of RAM each in order to fit the dataset and complete the computation.  Moreover, by overlapping I/O with computation, this out-of-core method achieved a 40 percent speedup over VisIt’s brute-force in-core approach.

This work will appear in IEEE Computer Graphics and Applications, Special Issue on Ultrascale Visualization, May-June 2010.

Contact:  Lori Diachin, diachin2@llnl.gov
 
PNNL Conducting Petascale Hierarchical Modeling via Parallel Execution

Researchers at PNNL are collaborating with Columbia University and Interactive Supercomputing to develop scalable approaches to hierarchical Bayesian modeling.  Bayesian estimation is used in many scientific settings to draw conclusions based on uncertain or partially observed data, such as identifying compounds in an unknown sample, or detecting rogue computers on a network.  Although such methods are widely applied to small problems using software such as BUGS and HBC, mathematical advances are necessary to apply them to petascale data.  Even with huge data sets, sparsity within categories is a concern. Motivations for petascale computation include estimating the probabilities of very rare events, estimation within small slices of the population, and inference for complex systems.
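
A toy illustration of what hierarchical (partially pooled) estimation buys for sparse categories is sketched below: a shared Beta prior, anchored to the pooled rate, shrinks noisy rare-event estimates from small groups toward the global rate.  This is illustrative only and assumes a fixed prior strength; it is unrelated to the project’s actual software or to BUGS and HBC, which infer such quantities from the data.

def pooled_rates(events, trials, prior_strength=50.0):
    """Posterior-mean event rates under a shared Beta prior whose
    pseudo-counts (prior_strength in total) are anchored to the pooled
    rate.  events[i], trials[i] are counts for group i."""
    pooled = sum(events) / float(sum(trials))   # global event rate
    a0 = pooled * prior_strength                # prior pseudo-counts for "event"
    return [(a0 + e) / (prior_strength + n)     # shrunken per-group estimate
            for e, n in zip(events, trials)]

if __name__ == "__main__":
    events = [0, 1, 30]        # two tiny groups and one large group
    trials = [10, 12, 10000]
    print(pooled_rates(events, trials))  # small-group estimates pulled toward the global rate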

Researchers initially are targeting Liquid Chromatography-Mass Spectrometry (LC-MS) and computer network intrusion detection as case studies.

For more information, contact Chad Scherrer, chad.scherrer@pnl.gov
 
ADIOS: ORNL-Led Research Enhances Scaling of Codes

To address the scaling and I/O issues that plague today’s leading software packages, a team of researchers from ORNL, Georgia Tech, and Rutgers University developed the ADaptable I/O System (ADIOS).  ADIOS is an I/O middleware package that has shown great promise in astrophysics with the CHIMERA code and with leading fusion codes, scaling up to 140,000 cores for XGC-1 and helping to enhance GTC and GTS.  Recently, ADIOS made its mark in the field of combustion.  Researchers at Sandia National Laboratories (Ray Grout, Jackie Chen and Chun Sang Yoo) and Andrea Gruber of SINTEF, the largest independent research organization in Scandinavia, are using the leading combustion code S3D to perform the first direct numerical simulations of reacting jets in cross flow.  These transverse jets are a class of flows used in practical applications in which high mixing rates are desirable, for example in fuel injection nozzles in stationary gas turbines for power generation or in aero-gas turbines.

Though the S3D team was able to fully scale the code to ORNL’s Jaguar, the scaling of the I/O proved to be difficult.  Enter ADIOS.  Besides its ability to scale, ADIOS’s BP file format is resilient to failures in the compute nodes and the file system, an attribute that hinted to researchers that S3D’s analysis routines would pair well with the ADIOS ecosystem.  After consultations with ORNL’s Qing Liu and Scott Klasky (a lead ADIOS developer) and Georgia Tech’s Jay Lofstead, Chen’s post-doc Ray Grout decided to integrate ADIOS as an alternative I/O mechanism for S3D.  ADIOS quickly surpassed the networking limitations imposed by the previous I/O stack.
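
Integrations like this typically follow a common pattern: the simulation writes through a thin interface, and the I/O backend behind it can be swapped without touching the numerics.  The sketch below shows only that pattern; the class and function names are hypothetical, and a real integration would call the ADIOS library’s own routines rather than the stand-ins used here.

class LegacyWriter:
    """Stand-in for a code's original I/O path (e.g., one file per step)."""
    def write(self, step, fields):
        with open("step_%05d.txt" % step, "w") as f:
            for name, values in fields.items():
                f.write("%s: %s\n" % (name, values))

class MiddlewareWriter:
    """Stand-in for a middleware-backed path such as ADIOS; a real version
    would open an output group, write each variable, and close the handle."""
    def write(self, step, fields):
        print("[middleware] step %d: wrote %s" % (step, sorted(fields)))

def make_writer(kind):
    # backend selected from the input deck or a build-time option
    return MiddlewareWriter() if kind == "adios" else LegacyWriter()

if __name__ == "__main__":
    writer = make_writer("adios")
    writer.write(0, {"temperature": [300.0, 301.5], "pressure": [1.0, 1.01]})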

PEOPLE:

Sandia’s Mike Heroux to Serve as SIAM Associate Editor

Mike Heroux of Sandia National Laboratories will begin a three-year term as an associate editor of the SIAM Journal on Scientific Computing on January 1, 2010.  Heroux has also worked with SIAM to establish a career award and a junior scientist award for the SIAM Activity Group on Supercomputing.  These awards will be given out at the SIAM Conference on Parallel Processing for Scientific Computing.

 
SNL’s Cynthia Phillips to Chair SIAM Supercomputing Group

Cynthia Phillips of Sandia National Laboratories has been elected chair of SIAM’s Activity Group on Supercomputing (SIAG/SC).  She will serve from January 1, 2010, to December 31, 2011.  This activity group brings together computational scientists, computer scientists, and mathematicians with interests in all aspects of high-performance scientific computing: algorithms, architectures, programming environments, applications, and performance analysis.  The chair is responsible for required SIAM administration, representing supercomputing within the SIAM SIAG system, and ensuring the SIAG is meeting its mission goals through conferences and new initiatives.

Contact:  Scott Collis, sscoll@sandia.gov
 
Bronis de Supinski Co-chairs Program of 18th International PACT Conference

The Parallel Architectures and Compilation Techniques conference (PACT) is a leading multi-disciplinary conference that brings together researchers from the hardware and software areas to present ground-breaking research related to parallel systems ranging across instruction-level parallelism, thread-level parallelism, multiprocessor parallelism and large scale systems.  Held September 12-16, 2009 in Raleigh, N.C., this year’s technical program included 35 papers, two keynotes and poster presentations including an ACM Student Research Competition.  The main conference program was augmented with two tutorials and four workshops.

 
ORNL’s Hauck Gives Invited Talks on New Transport Closures

Cory Hauck, a mathematician in the Computer Science and Mathematics Division at ORNL, presented recent work on "Optimization-based closures for radiation transport" at the Applied Mathematics/PDE Seminar at the University of Wisconsin and the Applied Mathematics Seminar at Michigan State University.  The basic idea of this work is to formulate kinetic moment closures as the solution to a particular optimization problem. Using the optimization-based approach, one can ensure that the resulting moment equations preserve certain fundamental features of the underlying kinetic description.
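
In rough outline, and stated generically rather than as the specific formulation presented in the talks: given a finite set of moments u_k = \int m_k(v)\,f\,dv of a kinetic density f, an optimization-based closure reconstructs

\[
\hat f \;=\; \operatorname*{arg\,min}_{g \ge 0}\ \int \eta\big(g(v)\big)\,dv
\quad \text{subject to} \quad \int m_k(v)\,g(v)\,dv = u_k \ \text{for all } k,
\]

for a suitable convex entropy density \(\eta\), and the unclosed terms in the moment equations are then evaluated with \(\hat f\).  With an appropriate choice of the optimization problem, the resulting moment system can inherit properties such as positivity and entropy dissipation from the underlying kinetic equation.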

Dr. Hauck’s research was performed at Oak Ridge (and previously at Los Alamos) with support from the ASCR program and in collaboration with Ryan McClarren of Texas A&M.

 
NERSC Director Kathy Yelick Gives Invited Lecture at UCLA

Kathy Yelick, Director of the NERSC Center at Lawrence Berkeley National Laboratory and professor of computer science at UC Berkeley, was the December speaker in the Jon Postel Distinguished Lecture Series at the UCLA Computer Science Department.  Yelick discussed “Programming Models for Petascale to Exascale.” The quarterly series features “distinguished speakers from academia and industry,” according to the UCLA website.

 
NERSC’s Shane Canon Discusses Cosmic Computing at LISA’09

Shane Canon, leader of NERSC’s Data Systems Group, gave a plenary talk on “Cosmic Computing: Supporting the Science of the Planck Space Based Telescope” on November 5 at the 23rd Large Installation System Administration Conference (LISA’09) in Baltimore.  Video, audio, and presentation slides for the talk are available online.  Canon presented an overview of the Planck project, including the motivation and mission, the collaboration, and the terrestrial resources supporting it; he described the data flow and network of computer resources in detail and discussed how the various systems are managed; and he highlighted some of the present and future challenges in managing a large-scale data system.

 

FACILITIES/INFRASTRUCTURE:

OLCF’s Jaguar Takes Number 1 on TOP500 List

An upgrade to a Cray XT5 high-performance computing system deployed at Oak Ridge National Laboratory (ORNL) has made the “Jaguar” supercomputer the world’s fastest.  Jaguar is the scientific research community’s most powerful computational tool for exploring solutions to some of today’s most difficult problems.  The upgrade, funded with $19.9 million under the Recovery Act, will enable scientific simulations that explore solutions to climate change and the development of new energy technologies.

To net the number-one spot on the TOP500 list of the world’s fastest supercomputers, Jaguar’s Cray XT5 component was upgraded this fall from four-core to six-core processors and ran a benchmark program called High-Performance Linpack (HPL) at a speed of 1.759 petaflop/s (quadrillion floating point operations, or calculations, per second).  The rankings were announced in Portland at the SC09 conference.

Other DOE systems in the Top 20 are:
  • # 2: Roadrunner, the IBM system at Los Alamos, 1.042 petaflops
  • # 7: Blue Gene/L at Lawrence Livermore, 478 teraflops
  • # 8: Blue Gene/P at Argonne, 459 teraflops
  • # 10: Red Sky at Sandia, 424 teraflops
  • # 11: Dawn, a Blue Gene/P system at Lawrence Livermore, 416 teraflops
  • # 15: Franklin, a Cray XT4 at NERSC, 266 teraflops
  • # 16: Jaguar, a Cray XT4 system at Oak Ridge, 205 teraflops
  • # 17: Red Storm, a Cray XT3-XT4 system at Sandia, 204 teraflops
 
NERSC Takes Delivery of IBM Hardware for Magellan Cloud Computing System

Berkeley Lab has begun taking delivery of the IBM iDataPlex system that will run the Lab’s program to explore how cloud computing can be used to advance scientific discovery. The program, dubbed Magellan, is funded by the American Recovery and Reinvestment Act through DOE and will be a testbed for NERSC users to explore the effectiveness of cloud computing for their particular research problems.

The Magellan Project will use IBM’s newest iDataPlex dx360 M2 server, which features double the memory and even higher power efficiency than previous versions.  The iDataPlex’s half-depth design and liquid-cooled door can lower cooling costs by as much as half and reduce floor space requirements by 30 percent.  Berkeley Lab’s iDataPlex system will have 5,760 processor cores and a theoretical peak speed of more than 60 teraflops and will be used to explore a set of possible software configurations for science clouds.  The first equipment was delivered Sunday, Nov. 29.

 
Argonne’s Cloud Computing Efforts Covered on “Chicago Tonight” News Program

Rich Samuels, a news correspondent with the “Chicago Tonight” news program on WTTW-Channel 11, visited Argonne National Laboratory for a segment on cloud computing.  “Cloud computing” is a model for on-demand access to computing resources – networks, servers, and software – that can be easily provisioned as needed over the Internet.  The concept’s most visible adoption is in the commercial world, but Dr. Kate Keahey, a computer scientist in the Mathematics and Computer Science Division at Argonne, has been promoting its use in a far different area: science.  Keahey designed and developed the “workspace service” software that enables users to deploy virtual machines on remote resources.  This work was followed by tools enabling users to deploy virtual clusters sharing configuration and security context.  These tools form the open source Nimbus toolkit, which other researchers have extended to explore such questions as data privacy and storage management in the clouds.  Keahey and ANL’s Ian Foster were interviewed as part of the news report, which aired during the 7 p.m. broadcast on Nov. 10.

 
Scientific Grand Challenge Workshop Reports Published

Two major reports were recently published as part of the DOE Scientific Grand Challenges Workshop Series.  The workshop series is composed of nine collaborative technical meetings focusing on the grand challenges facing specific scientific domains and the role of extreme-scale computing in addressing those challenges.  To date, eight workshops have been held.  See http://extremecomputing.labworks.org/index.stm for more information.

PNNL staff worked with ASCR to produce Forefront Questions in Nuclear Science and the Role of Computing at the Extreme Scale and Challenges for Understanding the Quantum Universe and the Role of Computing at the Extreme Scale, both of which contain workshop panel reports and a high-level description of the recommendations common to all meetings.

The report, Challenges in Climate Change Science and the Role of Computing at the Extreme Scale, was published earlier in the year.

For more information, contact Moe Khaleel, moe.khaleel@pnl.gov
 
ASCR-Funded HOPSPACK Software Released by Sandia

Sandia National Laboratories announces the availability of HOPSPACK 2.0, an official release of the Hybrid Optimization Parallel Search PACKage.  HOPSPACK provides an open source C++ framework for solving derivative-free optimization problems.  The framework enables parallel operation using MPI and multithreading.  Multiple algorithms can be hybridized to run simultaneously, sharing a cache of computed objective and constraint function evaluations that eliminates duplicate work.  Functions are computed in parallel to be compatible with both synchronous and asynchronous algorithms.  HOPSPACK comes with a Generating Set Search algorithm that handles linear and nonlinear constraints.  The software is easily extended and is designed for developers to add new algorithms.  HOPSPACK is a successor to Sandia’s APPSPACK product.
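
To convey the flavor of the algorithm class, the sketch below implements a bare-bones generating set (compass) search with a cache of evaluated points.  It is pure Python, illustrative only, and does not use or mimic the HOPSPACK C++ framework, its parallel evaluator, or its constraint handling.

def compass_search(f, x0, step=1.0, tol=1e-6, max_evals=2000):
    """Minimize f over R^n by polling +/- each coordinate direction,
    halving the step when no poll point improves.  A cache avoids
    re-evaluating points the search has already visited."""
    cache = {}
    def feval(x):
        key = tuple(round(v, 12) for v in x)
        if key not in cache:
            cache[key] = f(list(x))
        return cache[key]

    x, fx, n, evals = list(x0), feval(x0), len(x0), 1
    while step > tol and evals < max_evals:
        improved = False
        for i in range(n):
            for sign in (1.0, -1.0):
                trial = list(x)
                trial[i] += sign * step
                ft = feval(trial)
                evals += 1
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5      # contract the step when the poll fails
    return x, fx

if __name__ == "__main__":
    rosen = lambda v: (1.0 - v[0])**2 + 100.0 * (v[1] - v[0]**2)**2
    print(compass_search(rosen, [-1.2, 1.0]))

In HOPSPACK the analogous poll points are farmed out to parallel workers, and the shared evaluation cache plays the same duplicate-elimination role across the hybridized algorithms.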

HOPSPACK source code and precompiled executables for Windows, Linux, and Macintosh are now available at http://csmr.ca.sandia.gov/hopspack.  The download site asks that users register an email address and provide a little information about how they might be using HOPSPACK.

 
CCA Enables Massive Subsurface Modeling on NERSC’s Cray XT4

The SciDAC Center for Technology for Advanced Scientific Component Software (TASCS) has applied the Common Component Architecture (CCA) software tools and infrastructure, in collaboration with the GWACCAMOLE (GroundWAter CCA MOdeling Library and Extensions) SciDAC SAP, to achieve an unprecedented 17 million particle subsurface simulation using the “Franklin” Cray XT4 system at NERSC.

GWACCAMOLE has built a flexible component-based HPC framework for large-scale simulations of subsurface reactive flows, using a hybrid approach to combine different physical models into a single coherent simulation.  Existing parallel simulation tools are decomposed into components for use within the CCA framework and are extended to provide component interfaces between the different models.  A chemistry component was recently added to the framework and used to perform large scale simulations of contaminant transport in porous media using a simulation containing over 17 million SPH (Smoothed-Particle Hydrodynamics) particles on Franklin.

This feat was accomplished via ORNL’s efforts to port the entire CCA tool stack for execution on the Cray XT platform, including the “Babel” language interoperability system (used for combining C++ and Fortran90 components into a single GWACCAMOLE simulation), the “Ccaffeine” CCA component framework (with modifications to support “static” component objects as required on the Cray XT system), and the “Bocca” CCA project management and development environment.  Using ORNL’s extensions to these base CCA tools, in collaboration with researchers at Pacific Northwest, Lawrence Livermore and Sandia National Laboratories, the latest versions of the GWACCAMOLE software components were simply copied from their development cluster at PNNL to be rebuilt and executed on the Cray XT, without any manual code modification.  In part, this portability was enabled by use of the Bocca development environment, which provides automated support for regeneration of component wrappers, using Babel, for compatibility with the local runtime execution environment; GWACCAMOLE had recently been adapted to utilize Bocca for its development work on the smaller computational cluster at PNNL.

 

OUTREACH & EDUCATION:

Consortium Tackles Challenge of Adapting Apps to Hybrid Multicore Systems

While hybrid multicore technologies will be a critical component in future high-end computing systems, most of today’s scientific applications will require a significant re-engineering effort to take advantage of the resources provided by these systems.  To address this challenge, three U.S. Department of Energy national laboratories, including the Berkeley Lab, and two leading universities have formed the Hybrid Multicore Consortium, or HMC, and held their first meeting at SC09.

The consortium leadership comprises Oak Ridge (ORNL), Lawrence Berkeley (LBNL) and Los Alamos (LANL) national laboratories, all of which are leaders in R&D in this arena, as well as providers of large-scale computational resources for the scientific community.  Other members are the Georgia Institute of Technology and the Swiss Federal Institute of Technology (ETH), two of the leading universities in the area of architecture and software research for hybrid multicore systems.

While multicore systems, which use chips with multiple processing cores, are becoming more common, future designs are expected to soon feature hundreds of thousands or even millions of threads of parallelism.  At the same time, large vendors are designing hybrid systems that combine multicore processors with other processors and/or accelerators to improve overall performance.  Such systems will provide unprecedented computing power, but most of today’s leading scientific applications will have to be re-engineered to run on these hybrid architectures.

The goal of HMC is to address the migration of existing applications to accelerator-based systems and thereby maximize the investments in these systems.  Research areas expected to benefit from these advances include climate change, alternative energy sources, astrophysics, materials design, and environmental remediation, among others.

 
LBNL Co-Hosts Accelerator-Based Computing and Manycore Workshop

SLAC National Accelerator Laboratory, NERSC, LBNL, and UC Berkeley co-sponsored a workshop on “Accelerator-Based Computing and Manycore” during the first week of December at SLAC.  The focus of the workshop, which drew 60 participants, was to introduce, explore and discuss the scope and challenges of harnessing the full potential of these novel architectures for high-performance computing, especially in physics and astronomy applications.  Speakers and participants came from France, Germany, China, Taiwan and Japan, as well as the U.S.

 
ORNL-Mentored Students Named 2009 Siemens Regional Finalists

Working with mentors from ORNL, two Oak Ridge High School students were named regional finalists in the 2009 Siemens Competition.  Neil Shah and Katie Shpanskaya submitted their project entitled “Supercomputing Analytical Discovery of Plasma Instabilities in Fusion Energy Reactors.”

A record 1,348 projects were received this year for the Siemens Competition, an increase of 12 percent over 2008.  The number of students submitting projects increased by 14 percent, and more students than ever, 2,151, registered to enter.  This year, 318 students were named semifinalists, and 96 students were honored as regional finalists.  The 96 regional finalists were invited to compete at one of six regional competitions held over three consecutive weekends in November.

Neil and Katie were mentored by Guru Kora (ORNL), Nagiza Samatova (ORNL/NCSU), Dr. CS Chang (NYU), and Dr. Anatoli Melechko (formerly at ORNL, now at NCSU materials science department).


 

 
