ASCR Monthly Computing News Report - September 2008



The monthly survey of computing news of interest to ASCR is compiled by Jon Bashor (JBashor@lbl.gov) with news provided by ASCR Program Managers and Argonne, Fermi, Lawrence Berkeley, Lawrence Livermore, Los Alamos, Oak Ridge, Pacific Northwest and Sandia National labs. Contact information and links to additional information, where available, are included with each article.


RESEARCH NEWS:

Argonne Researchers Win Three Best Paper Awards
Members of the Mathematics and Computer Science Division at Argonne National Laboratory have received three best paper awards this past summer.
 
P. Balaji, W. Feng, H. Lin, J. Archuleta, S. Matsuoka, A. Warren, J. Setubal, E. Lusk, R. Thakur, I. Foster, D. S. Katz, S. Jha, K. Shinpaugh, S. Coghlan, and D. Reed won an Outstanding Paper Award for their paper "Distributed I/O with ParaMEDIC: Experiences with a Worldwide Supercomputer," which was presented at the International Supercomputing Conference held June 17–20, 2008, in Dresden, Germany.
 
P. Balaji, A. Chan, W. Gropp, R. Thakur, and E. Lusk received an Outstanding Paper Award at the Euro PVM/MPI Users' Group Conference in Dublin, Sept. 7-10, 2008, for their paper "Non-Data-Communication Overheads in MPI: Analysis on Blue Gene/P."
 
Narayan Desai and Pavan Balaji won a Best Paper Award at the IEEE Cluster 2008 conference, held in Tsukuba, Japan, Sept. 29-Oct. 1, 2008. Their paper, titled "Are nonblocking networks really needed for high-end computing workloads?", studied workloads on clusters at Argonne and Lawrence Livermore national laboratories.
Contact: Gail Pieper, pieper@mcs.anl.gov
 
Researcher Breaks New Ground in Membrane Protein Research at the ALCF
Numerous biological processes are controlled by proteins in the cell membrane. Large-scale gating motions, occurring on a relatively slow time scale, are essential for the function of many important membrane proteins, such as transporters and channels. Voltage-activated ion channels are, in effect, electric switches turned on by a change in the cellular potential. Malfunction of these channels can lead to cardiac arrhythmia and neurological pathologies.
 
A research team from Argonne, The University of Chicago, the University of Illinois at Chicago, and the University of Wisconsin used high–performance computing to break new ground in understanding how these membrane proteins work. Exploiting state–of–the–art developments in molecular dynamics and protein modeling, the team constructed models of voltage–gated potassium channels and ran them on the Blue Gene/P at the Argonne Leadership Computing Facility and the Cray XT at Oak Ridge National Laboratory, using DOE INCITE resources.
 
An important result of these simulations concerns the properties of the electric field responsible for voltage activation. The calculations show that this electric field is considerably more intense within the channel protein than at equivalent positions across the membrane far from the protein. These results open up the possibility of better-designed therapeutic drugs, as well as the construction of artificial biomimetic nano-switches.
Contact: Benoit Roux, roux@uchicago.edu
 
Fifteen LBNL Researchers to Present Findings at AMR08
Eleven papers authored by 15 CRD researchers and others will be among the presentations at AMR08, the OASCR Applied Mathematics Principal Investigators Meeting, October 15-17 at Argonne National Laboratory. The researchers represent the Mathematics Group, the Center for Computational Sciences and Engineering, the Applied Numerical Algorithms Group, and the Scientific Computing Group. Here are their papers:
  • Solving Nonlinear Eigenvalue Problems in Electronic Structure Calculations, Chao Yang and Juan Meza.
  • Modeling of Fluctuations in Algorithm Refinement Methods, John Bell et al.
  • Low Mach Number Modeling of Type Ia Supernovae, Ann Almgren, John Bell, Andrew Nonaka, et al.
  • High–Resolution Adaptive Algorithms for Subsurface Flow, Ann Almgren, John Bell, George Pau, and Mike Lijewski.
  • High–Order, Finite Volume Discretization of Gyrokinetic Vlasov–Poisson Systems on Mapped Grids, Phil Colella, Dan Martin, et al.
  • Projection Formalisms for Dimensional Reduction and Multiscale Sampling, Alexandre Chorin.
  • Higher Order Compact Generalized Finite Difference Method, Jakub Kominiarczuk et al.
  • A Multiscale Simulation Technique for Optimization of Granular Mixing, Chris Rycroft.
  • Designing Algorithms for Complex Processes: Semiconductor Manufacturing and Industrial InkJet Printing, Jamie Sethian.
  • Time–Periodic Solutions of the Benjamin–Ono Equation, Jon Wilkening.
  • Exact and Inexact BDDC algorithms for Saddle Point Problems, Xuemin Tu.
Contact: Linda Vu, LVu@lbl.gov
 
LLNL Researchers Release New Version of hypre Scalable Linear Solvers Library
Version 2.4.0b of hypre, a library of scalable linear solvers, was released on September 5, 2008. Key new capabilities include (1) support for all variable types, including edge-, face-, and node-centered variables, in block-structured grid settings; (2) two new Krylov solvers, Flexible GMRES (FlexGMRES) and LGMRES; and (3) new solver options for the algebraic multigrid solver (BoomerAMG) and the Auxiliary-space Maxwell Solver (AMS). The solvers in this library have been shown to scale to 131,000 processors of BG/L and have reduced the total time to solution for many DOE applications by factors of 2 to 25. For more information, contact Rob Falgout (LLNL) or visit the Scalable Linear Solvers website at https://computation.llnl.gov/casc/linear_solvers
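 
For readers who want a concrete feel for the library, the following is a minimal sketch, written in the style of the hypre example codes, of assembling a small 1D Laplacian through the IJ interface and solving it with BoomerAMG. The matrix, its size, and the solver settings are illustrative assumptions; option names and defaults should be checked against the 2.4.0b reference manual.
 
/* Hedged sketch: solve a small 1D Laplacian with hypre's BoomerAMG through
 * the IJ interface, in the style of the hypre example codes.  Settings are
 * illustrative; consult the hypre 2.4.0b manual for authoritative options. */
#include <mpi.h>
#include "HYPRE.h"
#include "HYPRE_IJ_mv.h"
#include "HYPRE_parcsr_ls.h"

int main(int argc, char *argv[])
{
   int myid, nprocs, N = 1000;
   MPI_Init(&argc, &argv);
   MPI_Comm_rank(MPI_COMM_WORLD, &myid);
   MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

   /* Simple block-row distribution of an N x N tridiagonal matrix. */
   int ilower = myid * (N / nprocs);
   int iupper = (myid == nprocs - 1) ? N - 1 : ilower + N / nprocs - 1;

   HYPRE_IJMatrix A;
   HYPRE_ParCSRMatrix parcsr_A;
   HYPRE_IJMatrixCreate(MPI_COMM_WORLD, ilower, iupper, ilower, iupper, &A);
   HYPRE_IJMatrixSetObjectType(A, HYPRE_PARCSR);
   HYPRE_IJMatrixInitialize(A);
   for (int i = ilower; i <= iupper; i++) {
      int cols[3], nnz = 0;
      double vals[3];
      if (i > 0)     { cols[nnz] = i - 1; vals[nnz++] = -1.0; }
      cols[nnz] = i; vals[nnz++] = 2.0;
      if (i < N - 1) { cols[nnz] = i + 1; vals[nnz++] = -1.0; }
      HYPRE_IJMatrixSetValues(A, 1, &nnz, &i, cols, vals);
   }
   HYPRE_IJMatrixAssemble(A);
   HYPRE_IJMatrixGetObject(A, (void **) &parcsr_A);

   /* Right-hand side b = 1 and initial guess x = 0. */
   HYPRE_IJVector b, x;
   HYPRE_ParVector par_b, par_x;
   HYPRE_IJVectorCreate(MPI_COMM_WORLD, ilower, iupper, &b);
   HYPRE_IJVectorSetObjectType(b, HYPRE_PARCSR);
   HYPRE_IJVectorInitialize(b);
   HYPRE_IJVectorCreate(MPI_COMM_WORLD, ilower, iupper, &x);
   HYPRE_IJVectorSetObjectType(x, HYPRE_PARCSR);
   HYPRE_IJVectorInitialize(x);
   for (int i = ilower; i <= iupper; i++) {
      double one = 1.0, zero = 0.0;
      HYPRE_IJVectorSetValues(b, 1, &i, &one);
      HYPRE_IJVectorSetValues(x, 1, &i, &zero);
   }
   HYPRE_IJVectorAssemble(b);
   HYPRE_IJVectorAssemble(x);
   HYPRE_IJVectorGetObject(b, (void **) &par_b);
   HYPRE_IJVectorGetObject(x, (void **) &par_x);

   /* BoomerAMG used here as a standalone solver; it is just as commonly
    * attached as a preconditioner to the Krylov wrappers (PCG, GMRES, and,
    * new in 2.4.0b, FlexGMRES and LGMRES). */
   HYPRE_Solver amg;
   HYPRE_BoomerAMGCreate(&amg);
   HYPRE_BoomerAMGSetTol(amg, 1e-8);
   HYPRE_BoomerAMGSetMaxIter(amg, 50);
   HYPRE_BoomerAMGSetup(amg, parcsr_A, par_b, par_x);
   HYPRE_BoomerAMGSolve(amg, parcsr_A, par_b, par_x);

   HYPRE_BoomerAMGDestroy(amg);
   HYPRE_IJMatrixDestroy(A);
   HYPRE_IJVectorDestroy(b);
   HYPRE_IJVectorDestroy(x);
   MPI_Finalize();
   return 0;
}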
 
Sandia Researchers Release New Compatible Discretization Capability within Trilinos
A multi–year research effort in advanced discretization methods for PDEs, sponsored by DOE's ASCR Applied Mathematics Research (AMR) Program, has provided theoretical foundations for the development of a new generation of interoperable tools for compatible discretization, i.e., discrete models that inherit key mathematical properties of the governing equations. These tools form the core of the Intrepid package that is being released as part of the NNSA–ASCR Trilinos Project. Intrepid utilizes a novel design that offers a common API for access to different discretization methods and is particularly well–suited for rapid development of application codes from components. Interoperability with other Trilinos packages further enhances this feature of Intrepid.
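 
To illustrate the design idea only, here is a purely hypothetical C sketch of a "common API" to interchangeable discretization components. Intrepid itself is a C++ Trilinos package, and none of the names below are taken from its actual interface; the point is simply that application code programs against one interface while different discretization components plug in behind it.
 
/* Hypothetical illustration (not Intrepid's actual API): application code
 * calls one function table; a discretization component implements it. */
#include <stdio.h>

typedef struct {
    const char *name;
    int  (*num_dofs)(void);                          /* degrees of freedom per cell */
    void (*eval_basis)(double x, double y, double *values);
} Discretization;

/* One hypothetical component: bilinear nodal basis on the reference quad [-1,1]^2. */
static int  quad_q1_num_dofs(void) { return 4; }
static void quad_q1_eval(double x, double y, double *v)
{
    v[0] = 0.25 * (1 - x) * (1 - y);
    v[1] = 0.25 * (1 + x) * (1 - y);
    v[2] = 0.25 * (1 + x) * (1 + y);
    v[3] = 0.25 * (1 - x) * (1 + y);
}

int main(void)
{
    Discretization q1 = {"Q1 nodal on quad", quad_q1_num_dofs, quad_q1_eval};
    double v[4];
    q1.eval_basis(0.0, 0.0, v);                      /* evaluate at the cell center */
    printf("%s: %d dofs, basis value at center = %.2f each\n",
           q1.name, q1.num_dofs(), v[0]);
    return 0;
}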
Contact: Pavel Bochev, pbboche@sandia.gov
 
Sandia Develops Next Generation Partitioning for Parallel Computing
Load balancing data among processors is vital to the scalability of parallel codes. Software tools such as Zoltan provide partitioning algorithms that compute parallel data distributions. For matrix computations, the data partitioning is typically done in a one–dimensional (1D) fashion, i.e., the matrix is partitioned by rows or columns, but not both. As part of research supported by the CSCAPES (Combinatorial Scientific Computing and Petascale Simulations) SciDAC institute, a team at Sandia has developed new two–dimensional sparse matrix partitioning algorithms that reduce the communication requirement substantially compared to the 1D approach. The team studied a particularly important kernel in scientific computing, sparse matrix–vector multiplication, which is the crux of many iterative solvers, and developed a new algorithm based on nested dissection (recursive substructuring).
 
Experiments show that the method clearly outperforms 1D partitioning and is competitive in quality with other proposed 2D methods that had been deemed impractical because they are too expensive to compute. In contrast, the new method takes about as long to compute as traditional graph or hypergraph partitioning. On a test set of sparse matrices from diverse applications such as finite element computations, circuit simulation, and text processing (informatics), the researchers observed an average reduction in (predicted) communication volume of 15 percent for symmetric matrices, with reductions of up to 97 percent in extreme cases. The largest gains were for applications with highly irregular structure, such as electrical circuit models, informatics, and matrices from constrained optimization.
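 
As a concrete illustration of the metric involved, the short C sketch below (illustrative only; it is not Zoltan or Isorropia code) counts the communication volume of a sparse matrix-vector product under a simple 1D block-row partition of a small, made-up matrix. A 2D partition aims to shrink exactly this count.
 
/* Illustrative sketch: count the communication volume of y = A*x when A is
 * stored in CSR form and partitioned 1D by rows across P parts.  Each part
 * must import every x[j] it touches that another part owns; the total number
 * of such imports is the volume a 2D partition tries to reduce. */
#include <stdio.h>

#define N 6   /* hypothetical 6x6 sparse matrix */
#define P 2   /* two parts: rows 0-2 and rows 3-5 */

int main(void)
{
   /* CSR arrays for a small arrow-shaped matrix (made up for illustration). */
   int rowptr[N + 1] = {0, 3, 5, 7, 9, 11, 16};
   int colind[16]    = {0, 1, 5,  0, 1,  2, 5,  3, 5,  4, 5,  0, 2, 3, 4, 5};

   int owner_of_row[N], needed[P][N] = {{0}};
   for (int i = 0; i < N; i++)
      owner_of_row[i] = (i < N / P) ? 0 : 1;          /* 1D block-row partition */

   /* x[j] is owned by the part that owns row j (a common convention). */
   long volume = 0;
   for (int i = 0; i < N; i++) {
      int p = owner_of_row[i];
      for (int k = rowptr[i]; k < rowptr[i + 1]; k++) {
         int j = colind[k];
         if (owner_of_row[j] != p && !needed[p][j]) {
            needed[p][j] = 1;                          /* part p imports x[j] once */
            volume++;
         }
      }
   }
   printf("1D row partition: communication volume = %ld words\n", volume);
   return 0;
}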
 
"We believe our data partitioning will be useful in a variety of algorithms, not just matrix–vector multiplication," said team member Erik Boman. "Our new partitioning algorithm is currently being implemented in the Isorropia package that provides partitioning and load–balancing services to Trilinos."
 
The planned 2009 release of Isorropia will include these next-generation matrix partitioning methods, making them easily accessible to Trilinos users. This is joint work with Michael Wolf of the University of Illinois at Urbana-Champaign. See http://trilinos.sandia.gov for more information.
Contact: Erik Boman, egboman@sandia.gov
 
ORNL Middleware Accelerates I/O
Oak Ridge Leadership Computing Facility (OLCF) researcher Scott Klasky and Georgia Tech's Jay Lofstead and Karsten Schwan are the brains behind a new input/output (I/O) componentization library dubbed the Adaptable I/O System or, more popularly, ADIOS. The middleware's primary goal is to make the process of getting information in and out of a supercomputer easier and more effective. ADIOS's simple application programming interfaces (APIs) and external metadata (XML) file give researchers fast, portable performance, making the choice between efficient I/O and usable data a thing of the past.
In a recent fusion simulation on the OLCF's flagship Cray XT4, known as Jaguar, researchers using GTC immediately saw huge I/O benefits in a simulation that produced 60 terabytes of data. "With ADIOS, the I/O was vastly improved, consuming less than 3 percent of run time and allowing the researchers to write tens of terabytes of data smoothly without file system failure," said principal investigator Yong Xiao of the University of California–Irvine. ADIOS has also made huge strides with Chimera, an astrophysics code. During recent runs on Jaguar the necessary I/O time went from approximately 20 minutes to 1.4 seconds, an improvement of approximately 1,000 times. There are few foreseeable limits to ADIOS's potential. As it is expanded to additional platforms, simulating big science will become correspondingly simpler, allowing researchers to concentrate more on their results than the technical aspects of their simulations.
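 
The sketch below gives a flavor of the write path. It is modeled on the publicly documented ADIOS 1.x examples rather than the exact 2008-era interface, so the file names, group name, variable names, and precise signatures should be treated as assumptions and checked against the ADIOS manual.
 
/* Hedged sketch of an ADIOS-style write, modeled on the ADIOS 1.x example
 * codes.  The transport method (POSIX, MPI, etc.) and the variable
 * declarations live in an external XML file -- here a hypothetical group
 * "restart" declaring an integer "NX" and a double array "temperature" of
 * dimension NX -- so the transport can change without recompiling. */
#include <mpi.h>
#include <stdint.h>
#include "adios.h"

int main(int argc, char *argv[])
{
   int rank, NX = 1024;
   double temperature[1024];
   int64_t fd;
   uint64_t group_size, total_size;

   MPI_Init(&argc, &argv);
   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
   for (int i = 0; i < NX; i++) temperature[i] = rank + 0.001 * i;

   adios_init("config.xml", MPI_COMM_WORLD);            /* parse XML metadata   */
   adios_open(&fd, "restart", "restart.bp", "w", MPI_COMM_WORLD);
   group_size = sizeof(int) + NX * sizeof(double);      /* bytes this rank writes */
   adios_group_size(fd, group_size, &total_size);
   adios_write(fd, "NX", &NX);                          /* names match the XML   */
   adios_write(fd, "temperature", temperature);
   adios_close(fd);

   adios_finalize(rank);
   MPI_Finalize();
   return 0;
}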
Contact: Jayson Hines, hinesjb@ornl.gov

PEOPLE:

LLNL's Steven Ashby Joins PNNL as Deputy Director for Science and Technology
Steven Ashby recently joined Pacific Northwest National Laboratory as its new Deputy Director for Science and Technology after spending more than twenty years at Lawrence Livermore National Laboratory, nearly all of it in the Computation Directorate.
 
As DDST, Ashby will work with PNNL’s scientific and technical staff to integrate and advance its S&T capabilities on behalf of scientific discovery, energy independence, environmental stewardship, and national security. “These missions, which are not dissimilar to LLNL’s, are important to me and I welcome the opportunity to help shape PNNL’s contributions to them,” Ashby said. He added, “I look forward to working with colleagues throughout the scientific community to forge new partnerships aimed at addressing many of the challenges facing our nation.” Ashby, who has been an advocate for computational science throughout his career, expects to remain active in advanced scientific computing programs, including SciDAC.
 
Ashby began his distinguished career at Livermore as a computational mathematician (right out of graduate school) and later led the ParFlow groundwater modeling project. In 1996 he founded the Center for Applied Scientific Computing. He later served as head of the Computing Applications and Research Department and, most recently, as Deputy Principal Associate Director for S&T.
Contact: Greg Koller, greg.koller@pnl.gov
 
Constantinescu Named Wilkinson Fellow in Scientific Computing at Argonne
Emil Constantinescu has received the 2008 Wilkinson Fellowship in Scientific Computing, one of the world's most prestigious postdoctoral fellowships in computational mathematics. He joined Argonne's Mathematics and Computer Science Division in late August and is working in the Laboratory for Advanced Numerical Simulations. Constantinescu received his doctoral degree in numerical analysis and scientific computing from Virginia Tech in 2008. For his thesis, he created multirate discretization methods for problems involving time-dependent partial differential equations. His methods have been applied successfully to a large-scale, state-of-the-art atmospheric model in one of the first dynamically adaptive grid studies in air quality modeling. The Wilkinson Fellowship is intended to encourage young scientists actively engaged in state-of-the-art research in scientific computing.
Contact: Gail Pieper, pieper@mcs.anl.gov
 
Maciej Haranczyk Joins LBNL Scientific Computing Group as a Seaborg Fellow
Maciej Haranczyk joined Berkeley Lab as a 2008 Seaborg Fellow on September 3. Haranczyk was one of three candidates offered the Lab-wide distinguished postdoctoral fellowship in 2008, which is named after the late Dr. Glenn T. Seaborg. Haranczyk will be affiliated with the Scientific Computing Group in the Computational Research Division. He recently received a doctoral degree in chemistry from the University of Gdansk in Poland. During his graduate studies, he received numerous research fellowships to collaborate with researchers at Pacific Northwest National Laboratory, the University of Southern California, the University of Sheffield, and others. Haranczyk's research interests include quantum chemistry, chemoinformatics, and combinatorial chemistry. At Berkeley Lab, he will continue to pursue his research in computational chemistry and explore opportunities to collaborate with researchers across the Lab.
Contact: Linda Vu, LVu@lbl.gov
 
Andrew Taube Is New John von Neumann Fellow at Sandia
Andrew Taube recently joined Sandia as ASCR's 2009 John von Neumann Fellow in Computational Science. Taube is an expert in computational materials and quantum information/quantum computing. He obtained B.S. degrees in mathematics and chemistry from Duke University and recently received his Ph.D. in chemical physics from the University of Florida as part of the Quantum Theory Project under Prof. Rodney Bartlett. His dissertation is entitled "Generalizations of the Coupled-cluster Ansatz: Examining the Boundaries of Coupled-cluster Theory." He joins Sandia's Multiscale Dynamic Materials Modeling Department (1415) under the mentorship of ASCR-AMR researcher Rick Muller.
Contact: Scott Collis, sscoll@sandia.gov
 
OLCF Staffers Lead Technology Development Efforts
Two members of the OLCF Technology Integration Group are moving up to take formal leadership roles as the center pushes the limits of computing technology. Galen Shipman will take over as group leader, stepping in for Shane Canon, who recently left the group to return to Lawrence Berkeley National Laboratory. Rich Graham will step in as leader of the newly created Application Productivity Software Group, working to develop technologies that allow researchers to take full advantage of the center's leadership computers. According to OLCF Director Jim Hack, the two are now in a position to make the most of their unique talents. "Each one of these guys is in a role that draws on their professional strengths, the most ideal matches for what we need in the way of leadership for those activities," Hack said.
Shipman came to the OLCF in September 2007 from Los Alamos National Laboratory (LANL), where he was acting team leader in the Network Research Group within the laboratory's Advanced Computing Laboratory (ACL). As leader of the Technology Integration Group, he will lead efforts to bring the next–generation Lustre file system, known as Spider, to the center. Graham also came to the OLCF from LANL, where he was acting group leader of the ACL. Among his responsibilities, Graham is coordinating a community–wide effort to update the decade–old Message Passing Interface standard for highly parallel computing.
Contact: Jayson Hines, hinesjb@ornl.gov
 

FACILITIES/INFRASTRUCTURE:

DOE to Provide Supercomputing Time to Run NOAA's Climate Change Models
The U.S. Department of Energy's (DOE) Office of Science will make available more than 10 million hours of computing time for the U.S. Commerce Department's National Oceanic and Atmospheric Administration (NOAA) to explore advanced climate change models at three of DOE's national laboratories, as part of a three-year memorandum of understanding on collaborative climate research recently signed by the two agencies.
 
NOAA will work with climate change models as well as perform near real-time, high-impact (non-production) weather prediction research using computing time on DOE Office of Science resources, including two of the world's five most powerful computers: Argonne National Laboratory's 557 teraflop/s IBM Blue Gene/P and Oak Ridge National Laboratory's 263 teraflop/s Cray XT4. NOAA researchers will also receive time on supercomputers at DOE's National Energy Research Scientific Computing Center at Lawrence Berkeley National Laboratory.
 
ESnet Ready to Deliver LHC Data to U.S. Researchers
When the Large Hadron Collider (LHC) comes online this fall, protons traveling at nearly the speed of light will collide millions of times per second, and the experiment will generate more data than the international scientific community has ever tried to manage. Scientists suspect that these "subatomic smashups" will provide valuable insights into the origins of matter and dark energy in the Universe. As thousands of researchers across the globe anxiously await the results of this experiment, getting the massive amounts of data to them is no small task. Fortunately, network engineers at the U.S. Department of Energy's (DOE) Energy Sciences Network (ESnet) foresaw this data challenge years ago and developed ESnet4, a new large-scale science data transport network with enough bandwidth to carry multiple 10-gigabit-per-second streams of data.
 
CERN, the European Organization for Nuclear Research, which manages the LHC, will initially collect the experiment's data. The information will then migrate across the Atlantic Ocean via fiber optics on a network called USLHCnet, which is managed by researchers at the California Institute of Technology in Pasadena, Calif. Like a virtual Ellis Island, an ESnet hub on 8th Street in Manhattan will be the U.S. entry point for LHC data. From there, ESnet will deliver data from the LHC's ATLAS detector to Brookhaven National Laboratory in Upton, N.Y., where it will be processed and stored. Meanwhile, data from the LHC's CMS detector will go to the Fermi National Accelerator Laboratory in Batavia, Ill., for processing and storage.
 
Researchers at universities and DOE laboratories across the country will then be able to connect to these databases through ESnet4, the DOE's next–generation scientific network. ESnet is funded by the DOE and managed by Lawrence Berkeley National Laboratory.
Contact: Linda Vu, LVu@lbl.gov
 
LHC's First Beam Highlights Fermilab's Grid Capabilities
First beam for Fermilab on June 30, 1971, and for the Large Hadron Collider on September 10, 2008, had a certain similarity. In both cases, tiny blips, announcing success, appeared on a screen. See the "then and now" images in the iSGTW "Image of the Week" article at http://www.isgtw.org/?pid=1001380.
 
When it comes to distributed data analysis by far-flung collaborators, a lot has changed, thanks to grid computing infrastructure. In 1971, the only way to accomplish such analysis was to grab a box of data tapes and spend a long time on a plane. In 2008, the grid computing infrastructure in place allowed CMS calorimeter experts at Fermilab, 7,000 km away, to see how their detector components responded within minutes of beam traversal. Raw data files were transferred from CERN to Fermilab within an hour, and reconstructed data files arrived shortly after they were produced.
 
Of further note, the offline data operations team for first beam was at CERN, but thanks to the grid, by the afternoon Geneva time, responsibility had rotated to the team at the Fermilab Remote Operations Center.
Contact: David J. Ritchie, ritchie@fnal.gov
 
Jaguar Poised to Pounce on Petascale
The Oak Ridge Leadership Computing Facility (OLCF) at Oak Ridge National Laboratory (ORNL) has begun receiving the first cabinets of the upcoming Cray XT5 petascale upgrade to its Jaguar supercomputer, a system that will soon be among the most powerful in the world. The upgraded Jaguar will feature liquid-cooled cabinets and quad-core 2.3 gigahertz AMD Opteron processors and will be housed in ORNL's Computational Sciences Building. The new machine's cabinets will use a Cray-designed cooling system, based on R-134a refrigerant and chilled water, to cope with an estimated peak power demand of more than 6.5 MW. Introducing the petascale to open science has required substantial upgrades to the OLCF infrastructure.
 
While the current terascale system is a computational giant in its own right, the new Jaguar will be nearly four times as powerful, enabling a new era in simulation science in numerous scientific arenas such as climate modeling, astrophysics, and fusion energy. The machine will be the only open science petascale system in the world when it comes online in 2009.
Contact: Jayson Hines, hinesjb@ornl.gov
 
NERSC Releases Software Test for Its Next Supercomputer
The National Energy Research Scientific Computing Center (NERSC) is looking for a new supercomputer, and NERSC staff want to be sure that any new system can reliably handle a diverse scientific workload. To that end, they have developed the Sustained System Performance (SSP) benchmarks, a comprehensive test for any system they consider. The benchmarks were released in conjunction with the NERSC-6 request for proposals on September 4. This version of SSP marks the first time that both vendors and the performance research community can easily access all applications and test cases.
 
Rather than relying on peak performance estimates (the number of teraflop/s a system could potentially deliver), NERSC scientists and engineers are concerned with the number of teraflop/s the system actually achieves in tackling a scientific problem. NERSC staff refer to this concept as sustained performance and measure it using the SSP. The SSP suite consists of seven applications and associated inputs, which span a wide range of science disciplines, algorithms, concurrencies, and scaling methods. NERSC General Manager Bill Kramer notes that this benchmark provides a fair way to compare systems that are introduced with different time frames and technologies. The test also provides a powerful method to assess sustained price/performance for the systems under consideration. The new SSP suite can be downloaded from http://www.nersc.gov/projects/ssp.php.
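 
As a rough illustration of the sustained-performance idea only (not the official SSP formula, which is defined in the NERSC-6 RFP materials), the sketch below combines hypothetical per-application measured rates into a single composite figure using a geometric mean; all numbers are made up.
 
/* Illustration of a sustained-performance composite: a geometric mean of
 * measured per-application rates, scaled by system size.  Hypothetical
 * numbers; not the official NERSC SSP definition. */
#include <math.h>
#include <stdio.h>

int main(void)
{
   /* Hypothetical measured per-core rates (Gflop/s) for 7 applications. */
   double rate[7] = {0.9, 1.4, 0.6, 2.1, 1.1, 0.8, 1.7};
   int ncores = 19320;                         /* hypothetical system size */

   double log_sum = 0.0;
   for (int i = 0; i < 7; i++) log_sum += log(rate[i]);
   double geo_mean = exp(log_sum / 7.0);       /* composite Gflop/s per core */

   printf("Composite sustained rate: %.3f Gflop/s/core -> %.1f Tflop/s system\n",
          geo_mean, geo_mean * ncores / 1000.0);
   return 0;
}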
Contact: Linda Vu, LVu@lbl.gov
 
ORNL's Smoky Stepping Stone to Big Science
The Cray XT4 known as Jaguar may be the Oak Ridge Leadership Computing Facility's (OLCF's) flagship system, but the big cat gets plenty of help. The OLCF's development resource known as Smoky (named after the University of Tennessee mascot) was purchased to offload debugging and development work from Jaguar to a smaller yet capable system, and it is now the primary platform researchers use to port and scale their codes for Jaguar, the production system. Smoky is capable of producing big science on its own, but it pairs especially well with Jaguar because the two systems' programming environments closely mirror one another.
 
"Without Smoky, development would be seriously slowed down, as everyone would have to wait in line on Jaguar," said OLCF staff member Bobby Whitten. Smoky is available to all Innovative and Novel Computational Impact on Theory and Experiment (INCITE) projects, but access is limited to a specific number of people on each team, namely those involved in serious scientific application development. An 11.7 teraflop/s system, Smoky consists of 320 quad–core AMD processors and 2.5 terabytes of memory. Smoky also features a gigabit Ethernet network with InfiniBand interconnect and access to Spider, the OLCF's Lustre–based file system.
Contact: Jayson Hines, hinesjb@ornl.gov
 
ORNL Storage System Tops 3 Petabytes
Oak Ridge National Laboratory's High-Performance Storage System (HPSS) reached a significant milestone recently, surpassing three petabytes of stored data. That is double the amount stored at this time last year. According to Stan White of the Oak Ridge Leadership Computing Facility (OLCF), the system's holdings are on track to double again in the coming year. The decade-old HPSS is ORNL's principal archival storage system, holding simulation data for both of ORNL's major supercomputing centers: the OLCF and the National Institute for Computational Sciences (NICS). The storage is housed in four silos at two locations. White noted that the system is due to receive 16 tape drives that can read tapes holding as much as 1 terabyte of uncompressed data.
Contact: Jayson Hines, hinesjb@ornl.gov
 

OUTREACH:

LBNL Hosts Ninth ACTS Collection Workshop
The Ninth DOE ACTS Collection Workshop was held at Berkeley Lab's Oakland Scientific Facility on August 19-22. Despite the long hours (8 a.m. to 6 p.m. except on the last day), the participants worked diligently through the tutorials and hands-on sessions, with one graduate student even working after hours with a presenter to obtain profiling data from a large code development effort. This year's participants included 20 graduate students (two from the DOE Computational Science Graduate Fellowship program), 10 postdoctoral fellows, three university professors, and seven staff researchers, as well as drop-ins from NERSC, LBNL, and Sandia National Laboratories.
Contact: Linda Vu, LVu@lbl.gov
 
LLNL Hosts Workshop on Enabling Technologies for Nuclear Energy Applications
Lori Diachin (LLNL) and Dave Nowak (ANL) co-chaired a workshop held at Lawrence Livermore National Laboratory on Sept. 17-18, 2008, that focused on the enabling technology needs of nuclear energy applications. The workshop was commissioned by Alex Larzelere, the NEAMS (Nuclear Energy Advanced Modeling and Simulation) program manager in DOE's Nuclear Energy program. More than 40 participants from DOE laboratories and universities worked together to understand the requirements of nuclear reactor, fuels, separations, and waste forms simulations in five focus areas:
  • Model Setup Tools (e.g., complex geometry, mesh generation)
  • Runtime Algorithms and Libraries (e.g., linear/nonlinear solvers, AMR)
  • Results Analysis (e.g., visualization, data management)
  • Modeling and Simulation frameworks (e.g., component based approaches)
  • Computing Requirements (e.g., analysis of future architectures, simulation requirements (flops), programming models, code development tools).
Throughout the workshop, many technologies developed by SciDAC, ASC, and other DOE programs were discussed and identified as having strong potential to impact the next generation of nuclear energy simulation codes. More information can be found at https://computation.llnl.gov/car/workshops/neams.
 
HPC Users Meet to Share Petascale Science Initiatives
"Delivering Science on Petascale Computers" was the theme of the 4th annual Fall Creek Falls Conference, held Sept. 7-10 at Montgomery Bell State Park west of Nashville. The conference brings together experts from universities, national laboratories, and industry to address the challenges of modeling and simulation, as high performance computing has moved from terascale to petascale. More than 100 participants at this year's conference made presentations and participated in panels organized around climate change, including coupled high resolution modeling of the Earth System Project, electrical energy storage (supercapacitors), the biosciences, data analytics, quantum Monte Carlo calculations at the petascale, applications in biophysics, theoretical and computational nanoscience, and computational astrophysics.
 
The Fall Creek Falls Conference, hosted by the Computing and Computational Sciences Directorate (CCSD) at Oak Ridge National Laboratory, was first launched in 2004 as an opportunity for experts from its Oak Ridge Leadership Computing Facility (OLCF), university partners, OLCF researcher–users, other centers, industry and government to discuss how best to use the new computing resources provided by the DOE Office of Science.
Contact: Jayson Hines, hinesjb@ornl.gov
 
ALCF Staff to Host SC08 Birds–of–a–Feather Sessions
ALCF staff members will host three Birds–of–a–Feather (BOF) sessions at SC08. On November 18, Susan Coghlan, Associate Division Director, Argonne Leadership Computing Facility, will lead a BOF on "Blue Gene System Management Community."  The session will provide an opportunity for Blue Gene system administrators to share information and discuss problems. A panel of experts from Argonne National Laboratory, Juelich Research Centre, Lawrence Livermore National Laboratory, IBM, and Stony Brook University will discuss their configurations and present their top five issues. Members of the audience will have the opportunity to ask questions and share their own top issues and tools.
 
Also on November 18, Kalyan Kumaran, ALCF Manager, Performance Engineering and Data Analytics, will present a BOF on "SPEC MPI2007: A Benchmark to Measure MPI Application Performance."  The High Performance Group of the Standard Performance Evaluation Corporation has recently released a benchmark suite based on a variety of MPI applications from several different areas. Ported and tested on a variety of platforms, the suite comes with run rules and performance metrics to help ensure fair and objective benchmarking and comparison of high-performance computing systems and software. The final suite comprises 13 technical applications from different domains. The talk will describe the benchmark creation process in detail. A roadmap of future additions to the suite will also be presented.
 
On November 19, Katherine Riley, ALCF Team Lead, will facilitate a session on "Petascale Computing Experiences on Blue Gene/P."  Kalyan Kumaran, ALCF Manager, Performance Engineering and Data Analytics, and Paul Messina, ALCF Director of Science, will also speak. With well over 100 applications ported to the Blue Gene/P at the ALCF, many lessons have been learned. Load balancing at this scale, single-core performance, memory limitations, and parallel scalability are some of the challenges the application groups have faced. Groups are also using the Blue Gene/P as a stepping-stone to explore new programming models for the next generation of supercomputers. Anyone interested in the architecture, or with experience on Blue Gene/P, is invited to make a presentation and join a discussion on Blue Gene/P science, algorithms, and performance issues. For more details, see the ALCF events calendar.
 
JGI and BDMTC Update IMG, IMG/M, and Launch IMG Educational Website
The U.S. Department of Energy Joint Genome Institute (DOE JGI) has extended the capabilities of the Integrated Microbial Genomes (IMG) data management system, updated the content of the IMG/M metagenome data management and analysis system, and launched an educational companion site, IMG/EDU. IMG, IMG/M, and IMG/EDU are the result of a collaboration between the DOE JGI and Berkeley Lab's Biological Data Management and Technology Center (BDMTC).

 

 
