ASCR Monthly Computing News Report - October 2009




SPECIAL SECTION: DOE Labs to Share Expertise at SC09

Argonne Staff Deeply Engaged in SC09 Activities

Argonne National Laboratory staff will be presenting a wide variety of papers, workshops, Birds-of-a-Feather (BOF) sessions, tutorials, and posters at SC09 on November 14-20 in Portland, OR.  Papers will cover such topics as Terascale Data Organization for Discovering Multivariate Climatic Trends, a Configurable Algorithm for Parallel Image-Compositing Applications, and I/O Performance Challenges at Leadership Scale.  Workshops will focus on Many-Task Computing on Grids and Supercomputers (MTAGS), Grid Computing Environments (GCE) 2009, and Using Clouds for Parallel Computations in Systems Biology.  BOF sessions will promote stimulating discussions on Python for High Performance and Scientific Computing; MPICH: A High-Performance, Open-Source MPI Implementation; HPC Saving the Planet, One Ton of CO2 at a Time; International Exascale Software Program; and CIFTS: A Coordinated Infrastructure for Fault-Tolerant Systems.

Tutorials will educate attendees on Python for High Performance and Scientific Computing, Parallel I/O in Practice, Configuring and Deploying GridFTP for Managing Data Movement in Grid/HPC Environments, Designing High-End Computing Systems with InfiniBand and 10-Gigabit Ethernet, InfiniBand and 10-Gigabit Ethernet for Dummies, Application Supercomputing and the Many-Core Paradigm Shift, and Advanced MPI.  In addition to printed posters that will be presented, an array of electronic posters and research highlights will be displayed at the Argonne exhibit.

 
Argonne Researchers to Compete in SC09 Bandwidth Challenge

The SC09 Bandwidth Challenge is an annual competition for leading-edge network applications developed by teams of researchers from around the globe.  Raj Kettimuthu, a senior software developer at Argonne National Laboratory, is coordinating a team of researchers from Argonne, Lawrence Berkeley, and Lawrence Livermore national laboratories, the University of Utah, and DataDirect Networks to demonstrate an application of the Earth System Grid.  For this bandwidth challenge, 10 TB of climate data will be transferred from NERSC across ESnet’s Science Data Network, using OSCARS, to the SC09 show floor in two hours.  A computer at SC09 will then visualize the data with VisIt in real time and project the final simulation on a wall.

 
PNNL Contributions to SC09

PNNL researchers will participate broadly at SC09.  Several researchers will lead technical panel and poster sessions, and the Birds-of-a-Feather session titled “Extending Global Arrays to Future Architectures” will be led by PNNL’s Bruce Palmer.

Sriram Krishnamoorthy and the late Jarek Nieplocha, both of PNNL, are co-authors of the paper "Scalable Work Stealing" to be presented during the Dynamic Task Scheduling session on Thursday, Nov. 19.

Steve Elbert of PNNL and William Tschudi of LBNL will be among the members of a panel discussion on "Energy Efficient Data Centers for HPC, How Lean and Green Do We Need to Be?" to be held on Thursday, Nov. 19.

Also, several PNNL researchers will be giving demonstrations, including cyber security by John Johnson and the Center for Adaptive Supercomputing-Multithreaded Architectures by John Feo.  Other demos will cover the Electricity Infrastructure Operations Center by Henry Huang, visual analytics by Bill Pike, carbon sequestration by Tim Scheibe, and molecular codes/NWChem by Bert DeJong, along with a presentation by Kevin Regimbal on "How to Become an EMSL User".

Contact:  Moe Khaleel, moe.khaleel@pnl.gov
 
Berkeley Lab Staff to Present Papers, Participate in Tutorials, Panels, BoFs

When SC09 convenes Nov. 14–20 in Portland, LBNL researchers will participate in two tutorials, present four technical papers, join in one panel discussion, give two Masterworks talks and lead two Birds-of-a-Feather sessions.  For a complete look at Berkeley Lab activities during SC09, go to: http://www.lbl.gov/CS/sc09.html

Tutorials
Hank Childs of Lawrence Berkeley National Laboratory and Sean Ahern of Oak Ridge National Laboratory will present “VisIt — Visualization and Analysis for Very Large Data Sets,” a tutorial on VisIt, an open source visualization and analysis tool designed for processing large data.  The half-day session will be held on Sunday, Nov. 15.

Alice Koniges of Berkeley Lab/NERSC, along with Rusty Lusk of Argonne National Laboratory and three others, will present “Application Supercomputing and the Many-Core Paradigm Shift,” a tutorial giving an overview of supercomputing application development with an emphasis on the many-core paradigm shift and programming languages.  The full-day session will be held on Sunday, Nov. 15.

Technical Papers
David Pugmire and Sean Ahern of ORNL, Hank Childs and Gunther Weber of LBNL, and Christoph Garth of UC Davis will present their paper “Scalable Computation of Streamlines on Very Large Datasets” during the Large-Scale Applications session on Tuesday, Nov. 17.

Marghoob Mohiyuddin, James Demmel and Kathy Yelick of LBNL/UC Berkeley, and Mark Hoemmen of UC Berkeley will present a paper on “Minimizing Communication in Sparse Matrix Solvers” as part of the Sparse Matrix Computation session on Tuesday, Nov. 17.

Kamesh Madduri, Samuel Williams, Leonid Oliker, John Shalf, Erich Strohmaier, and Katherine Yelick of LBNL, and Stephane Ethier of PPPL will present their paper “Memory-Efficient Optimization of Gyrokinetic Particle-to-Grid Interpolation for Multicore Processors” during the Particle Methods session on Tuesday, Nov. 17.

Marghoob Mohiyuddin of LBNL/UC Berkeley, Mark Murphy and John Wawrzynek of UC Berkeley, and Leonid Oliker, John Shalf, and Samuel Williams of LBNL will present the paper “A Design Methodology for Domain-Optimized Power-Efficient Supercomputing” during the Future HPC Architectures session on Thursday, Nov. 19.

Panel Discussion
William Tschudi of LBNL and Steve Elbert of PNNL will be among the members of a panel discussion on “Energy Efficient Data Centers for HPC, How Lean and Green Do We Need to Be?” to be held on Thursday, Nov. 19.

Masterworks
Teresa Head-Gordon of LBNL/UC Berkeley will discuss “Big Science and Computing Opportunities: Molecular Theory, Models and Simulation” during the Masterworks Session on Multi-Scale Simulations in Bioscience to be held Wednesday, Nov. 18.

Michael Wehner of LBNL’s Computational Research Division will talk about “Green Flash: Exascale Computing for Ultra-High Resolution Climate Modeling” as part of the Masterworks Session on Toward Exascale Climate Modeling held Thursday, Nov. 19.

Birds of a Feather Sessions
William Tschudi of LBNL will lead a BoF for the Energy Efficient High Performance Computing Working Group on Thursday, Nov. 19. 

Jon Dugan of LBNL/ESnet will lead a BoF on Network Measurement on Wednesday, Nov. 18.

 
ESnet to Provide Key Support for Two Teams in SC09 Bandwidth Challenge

As scientists in a wide variety of disciplines increasingly rely on supercomputers and collaboration with colleagues around the world to advance their research, managing and sharing the mountain of data generated by their investigations in a timely manner is becoming increasingly difficult.  Inspired by this situation, the SC09 Bandwidth Challenge provides a competition in which participants are pushed to find creative techniques and technologies for transmitting as much data as possible in a fixed period of time.

Two teams in this year's competition will be transporting terabytes of data across the Department of Energy's Energy Sciences Network (ESnet) over a period of several hours.  To ensure that the data arrives within the challenge timeframe, the teams will use ESnet’s On-Demand Secure Circuit and Advance Reservation System (OSCARS) to reserve bandwidth on its Science Data Network (SDN), which is optimized to handle massive datasets.

A team led by Rajkumar Kettimuthu, a staff scientist at Argonne National Laboratory, will attempt to stream 10 terabytes of climate research data to the SC09 exhibition at the Oregon Convention Center in Portland in a period of two hours, drawing from three DOE computing facilities: the Argonne Leadership Computing Facility (ALCF), the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory, and Lawrence Livermore National Laboratory.  Once the data arrives at the University of Utah’s SC09 booth, it will be stored on disks, processed in real time with Climate Data Analysis Tools and the Visualization Streams for Ultimate Scalability visualization tool, and then publicly displayed along with graphs depicting the demo’s transfer rates.
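
To put the two-hour window in perspective, here is a quick back-of-the-envelope calculation (ours, for illustration; not part of the demo itself) of the sustained network rate such a transfer requires:

    # Sustained rate needed to move 10 TB in the two-hour challenge window.
    data_bits = 10e12 * 8           # 10 terabytes, expressed in bits
    window_s = 2 * 3600             # two hours, in seconds
    rate_gbps = data_bits / window_s / 1e9
    print(f"required sustained rate: {rate_gbps:.1f} Gbps")   # ~11.1 Gbps

At roughly 11 Gbps sustained, the transfer exceeds what a single fully loaded 10-gigabit link can carry, which is where the OSCARS-reserved bandwidth on the Science Data Network comes in.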

Another team, led by Harvey Newman of the California Institute of Technology (Caltech), will demonstrate storage-to-storage physics dataset transfers of up to 100 Gbps (gigabits per second) sustained in one direction, and well above 100 Gbps in total moving bidirectionally, using a total of 15 10-Gbps drops at the Caltech booth.  The demonstration will make use of Hadoop and dCache storage systems in Portland and at partner institutes in Michigan, Florida, San Diego, Brazil, and Korea, as well as at SLAC National Accelerator Laboratory, Fermilab, Brookhaven National Laboratory, and the European Organization for Nuclear Research (CERN).

 

RESEARCH NEWS:

Researchers Compare Options to Advance DOE Bioenergy Goals

The economic feasibility and sustainability of lignocellulosic ethanol production require the development of robust microorganisms that can efficiently degrade and convert plant biomass to ethanol.  The anaerobic thermophilic bacterium Clostridium thermocellum is a candidate microorganism, as it is capable of hydrolyzing cellulose and fermenting the hydrolysis products to ethanol and other metabolites.  C. thermocellum achieves efficient cellulose hydrolysis using multiprotein extracellular enzymatic complexes, termed cellulosomes.  Researchers at ORNL, North Carolina State University and the BioEnergy Science Center (BESC), supported by the Biological and Environmental Research program, have measured relative changes in levels of cellulosomal subunit proteins (on a per CipA scaffoldin basis) when C. thermocellum was grown on a variety of carbon sources, including switchgrass and cellobiose.  To date, this study provides the most comprehensive comparison of cellulosomal compositional changes in C. thermocellum in response to different carbon sources.  Such studies are vital to engineering a strain that is best suited to grow on specific substrates of interest, and they provide the building blocks for constructing designer cellulosomes with tailored enzyme composition for industrial ethanol production.  This work was published in PLoS ONE 4(4):e5271, 2009 (Epub Apr. 22, 2009; PMID: 19384422).
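
As a toy illustration of what a comparison "on a per CipA scaffoldin basis" means (our sketch; the abundance values below are hypothetical placeholders, not the study's data or analysis pipeline), each subunit's measured level is divided by the CipA scaffoldin level under the same growth condition, making subunit composition comparable across substrates:

    # Hypothetical subunit abundances for two growth substrates (illustrative only).
    abundances = {
        "switchgrass": {"CipA": 120.0, "CelS": 300.0, "CelK": 45.0},
        "cellobiose":  {"CipA": 100.0, "CelS": 180.0, "CelK": 60.0},
    }

    for substrate, counts in abundances.items():
        cipa = counts["CipA"]
        # Subunit level per unit of scaffoldin, comparable across substrates.
        ratios = {name: level / cipa for name, level in counts.items() if name != "CipA"}
        print(substrate, ratios)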

Contact:  Jayson Hines, hinesjb@ornl.gov
 
Study Finds Greater Uncertainty in Climate Predictions

Climate projections may be more uncertain than we had understood, according to research conducted at the Oak Ridge Leadership Computing Facility (OLCF).  As a result, reliance on carbon-heavy fossil fuels may bring on even greater warming and more heat waves in the coming century than predicted by the Intergovernmental Panel on Climate Change.

A research team led by Oak Ridge National Laboratory's (ORNL's) Auroop Ganguly used climate data from 2000-2007 to calculate the uncertainty in the leading Community Climate System Model version 3, developed by the National Center for Atmospheric Research (NCAR).  Looking specifically at three scenarios that varied in their assumptions of economic growth and fossil fuel reliance, the team then performed climate simulations for the decades 2000-2009, 2045-2055 and 2090-2099.  While climate scientists have to this point treated uncertainty chiefly as the difference between scenarios, this research suggests an even greater variance once the uncertainty within each scenario is taken into account.

The study was published in the Proceedings of the National Academy of Sciences.  Co-authors include Marcia Branstetter, John Drake, David Erickson, Esther Parish, Nagendra Singh, and Karsten Steinhaeuser of ORNL, and Lawrence Buja of NCAR.  Funding for the work was provided by ORNL’s new cross-cutting initiative, Understanding Climate Change Impacts, through the Laboratory Directed Research and Development program.

 
LBNL Staff Present Research Results at SIAM Conference on Applied Linear Algebra

Staff members from Berkeley Lab’s Computational Research Division contributed in a number of ways to the SIAM Conference on Applied Linear Algebra held October 26–29 in Monterey, Calif.  Among the LBNL contributions were those by:

  • Esmond Ng, in addition to chairing the organizing committee, organized a session on “Matrix Computations in Industrial Applications” and presented “Role of Numerical Linear Algebra in DOE.” He also participated in the panel discussion “Forward Looking Session: Role of Linear Algebra in Industrial Applications.”
  • Chao Yang co-chaired a minisymposium on “Nonlinear Eigenvalue Problems,” organized two sessions on “Nonlinear Eigenvalue Problems,” and presented “Solving Nonlinear Eigenvalue Problems in Electronic Structure Calculations.”
  • Chao Yang and Juan Meza co-authored “On the Convergence of the Self Consistent Field Iteration for a Class of Nonlinear Eigenvalue Problems” and “Minimization of the Kohn-Sham Energy with a Localized, Projected Search Direction.”
  • Jim Demmel co-authored “Minimizing Communication in Linear Algebra,” “Variable Projection Methods for Separable Nonlinear Least Squares Learning,” and “CPU-GPU Hybrid Eigensolvers for Symmetric Eigenproblems.”
  • Ming Gu co-organized two sessions on “Fast Approximate Algorithms for Structured Matrices” and one on “Fast Algorithms for Structured Matrices.” He presented “Structured Matrix Computations: Recent Advances and Future Work” and co-authored “On the Schur Complements of the Discretized Laplacian.”
  • Sherry Li presented “Use of Semi-Separable Approximate Factorization and Direction-Preserving for Constructing Effective Preconditioners,” co-authored by Ming Gu.
 
ComPASS Simulations Used to Improve Operational Parameters for Fermilab Tevatron

Using resources at the Argonne Leadership Computing Facility and the National Energy Research Scientific Computing Center, Panagiotis Spentzouris created fully 3D multi-beam-dynamics-process simulations of the Fermilab Tevatron in support of a change of chromaticity in Tevatron operations.

The essential features of the simulation include a fully 3D strong-strong beam-beam particle-in-cell Poisson solver; interactions among multiple bunches; both head-on and long-range beam-beam collisions; coupled linear optics and a helical trajectory consistent with beam orbit measurements; and chromaticity and resistive wall impedance.  Individual physical processes were validated against measured data where possible, and against analytic calculations elsewhere.  The simulation was used to study the effects of increasing beam intensity with single and multiple bunches, and the combined effect of long-range beam-beam interactions and transverse impedance at the Tevatron.  The studies were used to support the change of chromaticity at the Tevatron by a factor of two at the squeeze, by demonstrating that even the reduced beam-beam effect from long-range collisions may provide enough Landau damping to prevent the development of head-tail instability.  The new operation parameters lead to reduced losses.  Validation studies and results from the Fermilab Tevatron simulations, including multiple beam dynamics effects, were submitted to Phys. Rev. ST Accel. Beams.
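
For readers unfamiliar with one of the ingredients named above, the following minimal sketch (ours, not ComPASS code) shows the kind of FFT-based Poisson solve that sits at the heart of a particle-in-cell space-charge calculation on a periodic grid:

    import numpy as np

    def poisson_periodic_2d(rho, L=1.0):
        """Solve grad^2 phi = -rho on a periodic L x L grid via FFT (sketch)."""
        n = rho.shape[0]
        k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)     # discrete wavenumbers
        kx, ky = np.meshgrid(k, k, indexing="ij")
        k2 = kx**2 + ky**2
        rho_hat = np.fft.fft2(rho)
        phi_hat = np.zeros_like(rho_hat)
        mask = k2 > 0                                    # skip the zero mode
        phi_hat[mask] = rho_hat[mask] / k2[mask]         # phi_hat = rho_hat / |k|^2
        return np.real(np.fft.ifft2(phi_hat))

    # Example: a Gaussian charge blob, neutralized so the periodic problem is well posed.
    n = 64
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    X, Y = np.meshgrid(x, x, indexing="ij")
    rho = np.exp(-((X - 0.5)**2 + (Y - 0.5)**2) / 0.01)
    phi = poisson_periodic_2d(rho - rho.mean())

In a real strong-strong beam-beam code this solve is repeated every step, with charge deposited from macro-particles onto the grid beforehand and fields interpolated back to the particles afterward.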

For more information, see: http://preview.tinyurl.com/CompassSim
 
Sandians to Publish Paper on Novel Data Partitioning Scheme

The SIAM Journal on Scientific Computing has accepted for publication a paper by Sandia researchers titled "Spectral Representation and Reduced Order Modeling of the Dynamics of Stochastic Reaction Networks via Adaptive Data Partitioning."  The paper, by Khachik Sargsyan, Bert Debusschere, Habib Najm, and Olivier Le Maître, details the development, under the ASCR Applied Math program, of a novel data partitioning scheme that facilitates the representation of stochastic processes with spectral polynomial chaos expansions.
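
For context on the underlying machinery, here is a minimal sketch (ours, illustrating plain polynomial chaos only; the paper's contribution is the adaptive partitioning that extends it) projecting f(X) = exp(X), with X a standard normal variable, onto probabilists' Hermite polynomials:

    import numpy as np
    from numpy.polynomial.hermite_e import hermegauss, hermeval
    from math import factorial, sqrt, pi

    # Coefficients c_k = E[f(X) He_k(X)] / k!, computed by Gauss-Hermite quadrature.
    nodes, weights = hermegauss(40)          # quadrature for weight exp(-x^2/2)
    weights = weights / sqrt(2.0 * pi)       # normalize to the standard normal density

    order = 6
    coeffs = []
    for k in range(order + 1):
        basis = np.zeros(k + 1)
        basis[k] = 1.0
        he_k = hermeval(nodes, basis)        # He_k evaluated at the quadrature nodes
        coeffs.append(np.sum(weights * np.exp(nodes) * he_k) / factorial(k))

    # For this f the exact coefficients are exp(1/2) / k!, a useful sanity check.
    print([round(c, 4) for c in coeffs])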

For more information, contact Bert Debusschere, bjdebus@sandia.gov
 
PNNL Project Seeks New Ways to Analyze Petascale, Noisy, Incomplete Climate Data

PNNL researchers recently won an ASCR-funded project aimed at analyzing petascale, noisy data from global climate models and available experiments to predict global warming scenarios.  Researcher Guang Lin received funding to develop advanced stochastic nonlinear data-reduction methods.  This framework will be applied to climate science, where effective petascale data-reduction techniques are critical to analyzing the huge amount of numerical and experimental data generated by petaflop computers and high-throughput instruments.  This approach will greatly improve the capability to deal with variability and uncertainty inside petascale, noisy, incomplete data.  If successful, the research will have a revolutionary impact on how petascale, noisy, incomplete data from complex systems are analyzed, and will ultimately lead to better prediction and decision-making.
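
As a cartoon of data reduction in this spirit (ours; the funded work concerns far more advanced stochastic nonlinear methods), a truncated singular value decomposition pulls a few dominant modes out of a noisy data matrix:

    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic data: three smooth spatial modes with random amplitudes, plus noise.
    t = np.linspace(0.0, 1.0, 200)
    modes = np.stack([np.sin(2*np.pi*t), np.cos(4*np.pi*t), t - 0.5])
    data = rng.normal(size=(500, 3)) @ modes + 0.1 * rng.normal(size=(500, 200))

    mean = data.mean(axis=0)
    U, s, Vt = np.linalg.svd(data - mean, full_matrices=False)
    r = 3                                    # retain the dominant modes
    reduced = U[:, :r] * s[:r]               # low-dimensional representation
    denoised = reduced @ Vt[:r] + mean       # reconstruction from r modes
    print("variance captured:", (s[:r]**2).sum() / (s**2).sum())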

For more information, contact: Guang Lin, guang.lin@pnl.gov
 
LBNL's BISICLES Project to Improve Accuracy of Ice Sheet Models

One of the most-cited examples of global climate change is the retreat of ice sheets in Antarctica and Greenland.  But exactly how fast they are melting remains a mystery, one that may be solved with a new generation of computer simulations.

Current computer models provide only very crude representations of important physical processes like glacier surges, iceberg calving and grounding-line migration.  But this is about to change now that researchers from Lawrence Berkeley National Laboratory’s (Berkeley Lab) Computational Research Division (CRD) and Los Alamos National Laboratory are collaborating to develop parallel adaptive mesh refinement techniques for the Community Ice Sheet Model code known as GLIMMER-CISM.  These algorithms will allow researchers to model points of interest, like the retreating edges of ice sheets, at unprecedented resolution, as sketched below.  With more accurate models, researchers will be better able to predict how ice sheet melting contributes to other phenomena like the rise in global sea level.
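
The essence of adaptive mesh refinement is deciding where extra resolution is needed.  A minimal sketch of that tagging step (our cartoon, not GLIMMER-CISM or BISICLES code) flags cells where the solution varies sharply, such as at a retreating ice edge:

    import numpy as np

    def flag_for_refinement(field, dx, threshold):
        """Tag cells whose gradient magnitude exceeds a threshold (illustrative)."""
        gx, gy = np.gradient(field, dx)
        return np.hypot(gx, gy) > threshold

    # Example: a steep front in an otherwise smooth field gets tagged for refinement.
    x = np.linspace(-1.0, 1.0, 100)
    dx = x[1] - x[0]
    X, Y = np.meshgrid(x, x, indexing="ij")
    thickness = np.tanh(10.0 * X)             # sharp transition near X = 0
    tags = flag_for_refinement(thickness, dx, threshold=0.5)
    print("cells tagged for refinement:", int(tags.sum()))

In a real AMR code, tagged cells are clustered into rectangular patches and re-solved on a finer grid, with the process repeated recursively.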

The Department of Energy’s Office of Advanced Scientific Computing Research (ASCR) is supporting CRD’s role in the GLIMMER-CISM code development through the Ice Sheet Initiative for Climate at Extreme Scale (ISICLES).  The Berkeley-ISICLES (BISICLES) project includes Xiaoye Li (Scientific Computing Group), Daniel Martin (Applied Numerical Algorithms Group), and Samuel Williams (Future Technologies Group) of CRD, as well as Woo-Sun Yang of NERSC.

 
PNNL Project Will Generate New Approach for Modeling Ice Sheets

PNNL scientists recently won an ASCR-funded project to develop a novel approach for modeling ice sheets that will improve the predictive ability of climate models.  Existing ice sheet models do not fully capture all the important mechanisms of ice-sheet evolution, and as a result their predictions are highly uncertain and may substantially underestimate the rate of ice melt.  Mathematical modeling of ice sheets is complicated by the non-linearity of the underlying processes, their governing equations, and their boundary conditions.  Standard grid-based methods require complex front-tracking techniques, handle large material deformations poorly, and have limited scalability.  Significant model improvements across the range of relevant processes and scales are required to better quantify and understand the rate of ice cover change at high latitudes and the long-term impacts of reduced ice cover.

Researchers at PNNL will model the ice sheet with the full 3D momentum conservation equation, coupled with mass and energy conservation equations and subject to the appropriate boundary conditions, and will investigate the effects of terms neglected in existing models.  To solve the non-linear governing equations, researchers will develop new highly scalable algorithms based on smoothed particle hydrodynamics (SPH), a fully Lagrangian particle method.  The model development and validation will be done in collaboration with the BER project “Improving the Characterization of Clouds, Aerosols and the Cryosphere in Climate Models.”
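
To make "fully Lagrangian particle method" concrete, here is a minimal 1D SPH density summation (our illustration using standard SPH conventions, not the project's code), built on the common cubic spline kernel:

    import numpy as np

    def w_cubic_1d(r, h):
        """Standard cubic spline SPH kernel in 1D, normalization 2/(3h)."""
        q = np.abs(r) / h
        w = np.where(q < 1.0, 1.0 - 1.5*q**2 + 0.75*q**3,
            np.where(q < 2.0, 0.25*(2.0 - q)**3, 0.0))
        return (2.0 / (3.0 * h)) * w

    # Density at each particle: rho_i = sum_j m_j * W(x_i - x_j, h).
    x = np.linspace(0.0, 1.0, 50)            # particle positions
    m = np.full_like(x, 1.0 / 50)            # equal particle masses (total mass 1)
    h = 0.04                                 # smoothing length
    rho = np.array([np.sum(m * w_cubic_1d(xi - x, h)) for xi in x])
    print(rho[len(x)//2])                    # ~1, up to discretization, away from boundaries

Because the particles themselves carry the mass and move with the material, there is no fixed grid to deform or track, which is the property that makes SPH attractive for large deformations and moving ice fronts.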

For more information, contact: Alexandre Tartakovsky, alexandre.tartakovsky@pnl.gov
 
Berkeley Researchers Win NSF PetaApps Award for CMB Applications

Experts predict that the volume of Cosmic Microwave Background (CMB) data will increase 1,000-fold over the next 15 years and push the limits of scientific computing.  With an award from the National Science Foundation's (NSF) PetaApps program, Horst Simon and Julian Borrill will co-lead a project to develop a new benchmark tool for testing whole-system performance on emerging extreme-scale supercomputing systems.  This benchmark will also help ensure that future computing systems meet the demands of the CMB community.

 
LBNL Researchers Prepare U.S. Climate Community for 100-Gigabit Data Transfers

As researchers around the world tackle the issue of global climate change, they are both generating and sharing increasingly large amounts of data.  This increased collaboration helps climate scientists better understand what is happening and evaluate the effectiveness of possible mitigations.  But sharing these increasingly large datasets requires reliable high-bandwidth networks.  To help ensure that the climate research community has the resources necessary to access, transfer and analyze the data, the Department of Energy has funded several related projects at Lawrence Berkeley National Laboratory. The newest project, called Climate 100, will help the research community effectively use the planned 100-gigabit-per-second networks.  Climate 100, funded with $201,000 under the American Recovery and Reinvestment Act, will bring together middleware and network researchers to develop the needed tools and techniques for moving unprecedented amounts of data.

 
NERSC Uses Stimulus Funds to Overcome Software Challenges for Scientific Computing

A multi-core revolution is occurring in computer chip technology.  No longer able to sustain the historical pace of increasing processor speeds, chip manufacturers are instead producing multi-core architectures that pack increasing numbers of cores onto each chip.  In the arena of high performance scientific computing, this revolution is forcing programmers to rethink the basic models of algorithm development as well as parallel programming, from the choice of language to the parallel decomposition process.

To ensure that science effectively harnesses this new technology, the Department of Energy’s (DOE) National Energy Research Scientific Computing Center (NERSC) is receiving $3.125 million in stimulus funds over the next two years from the American Recovery and Reinvestment Act to develop the Computational Science and Engineering Petascale Initiative. As part of this program, NERSC will hire eight post-doctoral researchers to help design and modify modeling codes in key research areas such as energy technologies, fusion and biosciences, to run on emerging many-core systems.

 
Simulation Examines the Mysteries of Carbon-14

A team led by David Dean of ORNL is using ORNL’s petascale Jaguar supercomputer to examine the carbon-14 nucleus.  This isotope’s 5,700-year half-life is a boon to archeologists and historians but a mystery to nuclear physicists.  It allows researchers to date carbon-containing relics going back as far as 60,000 years, yet it does not seem to fit with the half-lives of carbon-14’s nearest nuclear neighbors, which are typically a few minutes or even a few seconds.
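
The dating itself follows from the exponential decay law N(t) = N0 * (1/2)**(t / t_half).  A quick worked example (ours) shows why roughly 60,000 years is the practical limit:

    # Fraction of carbon-14 remaining after 60,000 years (half-life 5,700 years).
    half_life = 5700.0
    fraction = 0.5 ** (60000.0 / half_life)
    print(f"fraction remaining: {fraction:.1e}")   # ~6.8e-04, i.e., less than 0.1 percent

With under a tenth of a percent of the original carbon-14 left, the signal becomes too faint to measure reliably much beyond that age.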

Dean and his teammates—Hai Ah Nam of ORNL, James Vary and Pieter Maris of Iowa State University, and Petr Navratil and Erich Ormand of Lawrence Livermore National Laboratory—are exploring the carbon-14 nucleus with an application known as Many Fermion Dynamics, nuclear (MFDn), created by Vary.  The team used nearly 150,000 of Jaguar’s more than 180,000 computing cores (the entire XT5 partition of the machine), and the application is ready to scale to even more cores as they become available.

Jaguar’s power allows the team to depart from other nuclear structure studies in three important respects: it works directly from the strong-force interactions of the quarks and gluons within each nucleon, it takes a “no-core” approach that incorporates all 14 nucleons without assuming an inert set of particles, and it incorporates three-body forces.

Contact:  Sue Chin, sue.chin@pnl.gov

PEOPLE:

Argonne Scientists Pieper, Wiringa Awarded Bonner Prize in Nuclear Physics

Steven Pieper and Robert Wiringa, senior scientists at Argonne National Laboratory, have won the 2010 Tom W. Bonner Prize in nuclear physics.  The award will be presented by the American Physical Society in Washington, D.C., in February 2010.  Pieper and Wiringa won the prize for developing and applying models of nuclear forces and methods to calculate the properties of light nuclei.  To conduct computations of ever larger nuclei, Pieper has been developing new algorithms and taking advantage of the foremost computer platforms available.  Recently, in collaboration with researchers in Argonne's Mathematics and Computer Science Division, Pieper has enhanced a quantum Monte Carlo program to model nuclear states up to carbon-12.  This work, funded by a DOE Scientific Discovery through Advanced Computing (SciDAC) grant, resulted in a novel subroutine library for using massive parallel computers.  Key to this effort has been access to the IBM Blue Gene/P supercomputer in the Argonne Leadership Computing Facility.  Pieper and his colleagues receive large amounts of computer time on the Blue Gene/P through a special grant from the DOE Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program.  To date, the researchers have been able to use more than 133,000 processors of the Blue Gene/P for their complex calculations.

 
Two Sandia-ASCR Researchers Named Distinguished ACM Members

Bruce Hendrickson and Mike Heroux of Sandia National Laboratories have been promoted to the category of Distinguished Member of the ACM.  The Distinguished Member Grade recognizes those ACM members with at least 15 years of professional experience who have achieved significant accomplishments or have made a significant impact on the computing field.  Hendrickson and Heroux were among 58 Distinguished Scientists named for 2009.

 
LBNL's Juan Meza Named One of Hispanic Business Magazine's "100 Influentials"

Juan Meza, head of the High Performance Computing Research Department in Berkeley Lab’s Computational Research Division, has been named one of Hispanic Business magazine’s 100 influential Hispanics.  Published in the October issue, the list includes Hispanics who play leading roles in politics, business, science, information technology and other areas.

 
OLCF Names UT’s Barker New User Assistance and Outreach Group Lead

Ashley Barker, an information technology (IT) manager in the University of Tennessee’s Office of Information Technology, has joined the OLCF as group leader of the User Assistance and Outreach (UAO) group.  The OLCF is home to the world’s fastest supercomputer for open research, a Cray XT5 machine known as Jaguar.  The UAO team at the OLCF is responsible for setting up accounts for new HPC users, providing high-level technical support, creating documentation about OLCF systems access and policy procedures, and producing publications about the cutting-edge science projects run on Jaguar.  Barker will be responsible for ensuring that the OLCF maintains effective HPC user assistance, timely news releases and highlights about breaking science discoveries, and up-to-date documentation on OLCF high-performance systems.

 

FACILITIES/INFRASTRUCTURE:

DOE’s Magellan Project to Explore Scientific Cloud Computing at ANL, LBNL

A new $32 million program funded by the American Recovery and Reinvestment Act through the U.S. Department of Energy (DOE) will examine cloud computing as a cost-effective and energy-efficient computing paradigm for scientists to accelerate discoveries in a variety of disciplines.  A major goal of the program is to explore whether cloud computing can help meet the overwhelming demand for mid-range scientific computing.  Since the project is exploratory, it’s been named Magellan in honor of the Portuguese explorer who led the first effort to sail around the globe and for whom the “clouds of Magellan” — two small galaxies in the southern sky — were named.

To test cloud computing for scientific capability, DOE centers at the Argonne Leadership Computing Facility (ALCF) in Illinois and the National Energy Research Scientific Computing Center (NERSC) in California will install similar mid-range computing hardware, but will offer different computing environments.  The combined set of systems will create a cloud testbed that scientists can use for their computations while also testing the effectiveness of cloud computing for their particular research problems.  The NERSC and ALCF facilities will be linked by a groundbreaking 100 gigabit-per-second network, developed by DOE’s ESnet (another DOE initiative funded by the Recovery Act).  Such high bandwidth will facilitate rapid transfer of data between geographically dispersed clouds and enable scientists to use available computing resources regardless of location.  Read the joint ANL-LBNL news release at the following link:

http://www.lbl.gov/CS/Archive/news101409.html
 
NERSC Takes Delivery of First Phase of Next-Generation Petaflop/s System

The first phase of NERSC’s next-generation supercomputer was delivered to Lawrence Berkeley National Laboratory’s Oakland Scientific Facility on October 12.  The system is a Cray XT5™ massively parallel processor supercomputer.  When completed, the new system will deliver a peak performance of more than one petaflop/s.  The machine is named after Rear Admiral Grace Murray Hopper, an American computer scientist and United States Navy officer.

 
ORNL Establishes Climate Change Institute

Oak Ridge National Laboratory has created the Oak Ridge Climate Change Science Institute to coordinate the laboratory's efforts in climate research.  Led by Director James Hack and Deputy Director David Bader, the institute will integrate modeling, observational, experimental, and computational climate science efforts from across ORNL.

In addition, the institute will apply the talents of 100-plus ORNL staffers working on climate change research to produce quantitative scientific knowledge to address the consequences of climate change and promote collaborations among scientists.

 
ORNL's Cloud Computing Effort Accepted into the Windows Azure Metro Program

Through the efforts of Rob Gillen in ORNL’s Computer Science Research group, ORNL has been nominated and accepted into the Windows Azure “Metro” program.  Azure is one of the three major cloud computing platforms today.  Metro is a variant of the standard Microsoft Technical Adoption Program (TAP) that Microsoft makes available to select partners.

The Metro program is very selective; Microsoft picks only approximately 20 customers worldwide and offers them an inside connection to the product and testing teams in exchange for feedback and a willingness to be included in a case study.

OUTREACH & EDUCATION:

LBNL Vis Researchers Contribute to IEEE Visualization 2009

IEEE Visualization 2009, a forum for visualization advances in science and engineering for academia, government, and industry, was held October 11–16 in Atlantic City, New Jersey, and two members of Berkeley Lab’s CRD/NERSC Visualization and Analytics Group contributed to three tutorials and a workshop.

Gunther Weber co-presented a tutorial on “Scalar Topology in Visual Data Analysis” and a workshop on “REVISE: Refactoring Visualization from Experience.” Hank Childs co-presented tutorials on “Visualization and Analysis Using VisIt” and “Visualization of Time-Varying Vector Fields.”

 
IEEE Visualization 2009 Tutorial by LBNL, ORNL Experts Gets Rave Review

The VisIt tutorial at the IEEE Visualization 2009 meeting, co-presented by Hank Childs of Berkeley Lab and Sean Ahern of ORNL, got a rave review in the online magazine VizWorld.  A live demonstration of remote visualization and analysis with VisIt over the conference WiFi to NERSC’s SGI Altix, DaVinci, worked flawlessly the first time, “much to everyone’s surprise,” according to the review.  The reviewer was also impressed by VisIt’s histogram-based parallel coordinates display, a technology invented by the LBNL Vis Group for Cameron Geddes’ accelerator project last year, and presented in the SC2008 technical program.

The review concludes: “I haven’t taken a good long in-depth look at VisIt in about two years, but after this I’ve decided I’m going to.  The functionality and power is simply too big to ignore.  The work they’ve put into support for massively parallel systems and HPC configurations is astounding, and combined with the impressive catalog of data analysis tools it really looks more ‘useful’ than their competitors like ParaView and EnSight.”

 
NERSC Hosts HEPiX Workshop for High Energy Physics Community

NERSC hosted the HEPiX Workshop October 26–30 at Berkeley Lab, drawing nearly 70 participants.  The HEPiX meetings bring together IT system support engineers from the High Energy Physics (HEP) laboratories and institutes, such as BNL, CERN, DESY, FNAL, IN2P3, INFN, JLAB, NIKHEF, RAL, SLAC, TRIUMF and others.  Jay Srinivasan and Iwona Sakrejda of NERSC were the local co-chairs, and NERSC Director Kathy Yelick gave the keynote speech. Other presentations by NERSC and ESnet staff included:

  • Brent Draney: The Magellan Cloud Computing Project at NERSC
  • Jason Hick: HPSS in the Extreme Scale Era
  • Tom Davis: Unified Performance and Environment Monitoring using Nagios, Ganglia and Cacti
  • Andrew Uselton: Deploying and Using the Lustre Monitoring Tool
  • Joe Burrescia: ESnet: Networking for Science
 
Sandian Bochev Teaches Short Courses in Germany

Sandian Pavel Bochev gave two short courses at the EU Regional School on Computational Science, sponsored by the Aachen Institute for Advanced Study in Computational Engineering Sciences in Germany.  The courses covered topics ranging from the theoretical foundations of compatible discretizations to software tools for advanced numerical PDEs in Trilinos.  A significant part of the material presented in the courses derives from Bochev’s ASCR-sponsored research on advanced discretizations.

 

 
