ASCR Monthly Computing News Report - August 2008

The monthly survey of computing news of interest to ASCR is compiled by Jon Bashor (JBashor@lbl.gov) with news provided by ASCR Program Managers and Argonne, Fermi, Lawrence Berkeley, Lawrence Livermore, Los Alamos, Oak Ridge, Pacific Northwest and Sandia National labs. Contact information and links to additional information, where available, are included with each article.

In this issue...


Blue Gene/P Simulations Shed Light on Key Process in Type Ia Supernovae
In their study of Type Ia supernovae, among the brightest and most powerful exploding stars in the universe, University of Chicago researchers have addressed a critical question about buoyancy-driven turbulent nuclear combustion, a key physical process in these explosions.
Using the FLASH code on the IBM Blue Gene/P supercomputer at the Argonne Leadership Computing Facility, researchers addressed the question, "Is buoyancy-driven turbulent nuclear combustion due primarily to large-scale or small-scale features of the flame surface?" They used more than 40 million processor-hours on the BG/P to run a grid of simulations for different physical conditions. The research team also developed parallel processing tools needed to analyze the large amounts of data produced by the FLASH simulations of buoyancy-driven turbulent nuclear combustion. Preliminary analysis of these results showed that the flame surface is complex at large scales and smooth at small scales.
The results have been published in the SciDAC 2008 conference proceedings. These findings will be used to treat buoyancy-driven turbulent nuclear combustion more accurately in the whole-star, three-dimensional simulations of Type Ia supernovae at the DOE NNSA ASC/Alliance Flash Center at the University of Chicago.
Contact: Donald Lamb, d-lamb@uchicago.edu
Astrophysicists Use OLCF's Jaguar to Simulate Dark Matter that Cradles a Galaxy
A team led by astrophysicist Piero Madau of the University of California at Santa Cruz has performed the largest computer simulation ever of dark matter evolving in a galaxy such as the Milky Way. Madau and his collaborators performed the simulation on the National Center for Computational Sciences' Jaguar supercomputer, dividing the galaxy's envelope of dark matter into a billion parcels and showing how they would evolve over 13 billion years. The collaborators reviewed the simulation and their findings in the August 7 issue of the journal Nature. Their article is entitled "Clumps and Streams in the Local Dark Matter Distribution."
The simulation performed by Madau and his team used about 1 million processor-hours, following a galaxy's worth of dark matter through nearly the entire history of the universe. It was a staggering job, tracking 9,000 trillion trillion trillion tons of invisible stuff spread across 176 trillion trillion trillion square miles as it evolved over 13 billion years. Dark matter is not evenly spread out, although researchers believe it was nearly homogeneously distributed right after the Big Bang. Over time, however, it became bunched and clumped as gravity pulled it together, first into clumps more or less the mass of Earth. These were pulled together into larger clumps, which were pulled together into still larger clumps, and so on until they combined to form halos of dark matter massive enough to host galaxies. Madau's team will be able to verify its simulation results using the National Aeronautics and Space Administration's Gamma-Ray Large Area Space Telescope (GLAST), launched on June 11.
Contact: Jayson Hines, hinesjb@ornl.gov
DOE Software Used to Identify Biochemical Pathways in Microbial Community
Today's powerful sequencing machines can rapidly read the genomes of entire communities of microbes, but the challenge is to extract meaningful information from the jumbled reams of data. In a paper posted online in Nature Biotechnology, researchers from the University of Washington, the U.S. Department of Energy Joint Genome Institute (DOE JGI), Lawrence Berkeley National Laboratory, and several other institutions describe a novel approach for extracting single genomes and discerning specific microbial capabilities from mixed community ("metagenomic") sequence data.
For the first time, the research team applied an enrichment technique to microbial community samples from the sediments of Lake Washington, which borders Seattle, and characterized biochemical pathways associated with nitrogen cycling and methane utilization, important for understanding methane generation and consumption by microbes. Methane is both a greenhouse gas and a potential energy source. Most of the microbes that oxidize single-carbon compounds are unculturable and therefore unknown, as are the vast majority of microbes on Earth. To find species of interest, the researchers sequenced microbial communities from Lake Washington sediment samples, because lake sediment is known to be a site of high methane consumption. However, these sediment samples contained over 5,000 species of microbes performing a complex, interconnected array of biochemical tasks.
"The DOE JGI's unique Integrated Microbial Genomics with Microbiome Samples (IMG/M) data management system was used for detailed annotation, and was instrumental for efficient comparative analysis and metabolic reconstruction of the samples," said Alla Lapidus, microbial geneticist at the DOE JGI and co-author on the paper. IMG/M was jointly developed by DOE JGI and Berkeley Lab's Biological Data Management and Technology Center (BDMTC). BDMTC Department Head Victor Markowitz and staff scientist Ernest Szeto performed the data processing for this study and were co-authors.
Contact: Jon Bashor, jbashor@lbl.gov
Journal Cover Highlights Jaguar Simulations by Team from ORNL
Fusion simulations performed on the Oak Ridge Leadership Computing Facility (OLCF) Cray XT4 Jaguar supercomputer are featured in the cover article of July's edition of the journal Physics of Plasmas, published by the American Institute of Physics. A team led by Oak Ridge National Laboratory physicist Fred Jaeger used its AORSA code to demonstrate that radio waves will be effective in heating the multinational ITER fusion reactor. The team's article is entitled "Simulation of high-power electromagnetic wave heating in the ITER burning plasma," and the magazine's cover features an image created by OLCF visualization specialist Sean Ahern.
The ITER reactor will use antennas to launch radio waves carrying 20 megawatts of power into the fusion plasma, an ionized gas containing deuterium and tritium. The waves will both heat the plasma, which must reach a temperature about ten times hotter than the center of the sun, and create a current that controls it. The team's simulations will help the reactor's designers configure the antennas to make the most of that power in both these areas. The team is part of a Scientific Discovery through Advanced Computing (SciDAC) project known as the SciDAC Center for Simulation of Wave-Plasma Interactions. Its AORSA code has been especially effective at making use of Jaguar's computing power, reaching 154 trillion calculations per second.
Contact: Jayson Hines, hinesjb@ornl.gov
Sandia Researchers Release a New Multiscale Capability within LAMMPS
Sandia researchers under ASCR's AMR Multiscale Mathematics program have implemented peridynamics within Sandia's massively parallel molecular dynamics simulator, LAMMPS. Peridynamics is a generalized continuum theory that employs a nonlocal model of force interaction to describe material properties. In this context, nonlocal means that continuum points separated by a finite distance may exert force upon each other. This is accomplished by replacing the local stress/strain relationship of classical elasticity with a nonlocal integral operator that sums forces over particles separated by a finite distance. This integral operator is not a function of the deformation gradient, allowing for a more general notion of deformation than classical elasticity. Further, the nonlocality of the model makes it suitable as a multiscale material model, as its behavior varies with the length scale at which it is applied. A particular meshless discretization of the peridynamic continuum model has the same general computational structure as molecular dynamics. This allowed its implementation within LAMMPS and enables users familiar with molecular dynamics to effectively model continuum materials. The peridynamics extensions are available for download as part of LAMMPS and represent the only publicly available peridynamic code. For more information, see: Michael L. Parks, Richard B. Lehoucq, Steven J. Plimpton, and Stewart A. Silling, "Implementing Peridynamics within a Molecular Dynamics Code," accepted for publication in Computer Physics Communications, 2008 (doi:10.1016/j.cpc.2008.06.011).
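The nonlocal bond-force sum described above can be illustrated with a toy one-dimensional, bond-based peridynamic model. This is a hedged sketch for illustration only, not the Sandia/LAMMPS implementation; the micromodulus constant, horizon, and discretization below are invented for the example:

```python
import numpy as np

def peridynamic_forces(x, u, horizon, c):
    """Internal force density for a 1-D bond-based peridynamic bar.

    Each point interacts with every other point within a finite
    distance ('horizon'); the bond force is proportional to the
    bond stretch, summed over all bonds -- a discrete analogue of
    the nonlocal integral operator.
    """
    n = len(x)
    f = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            xi = x[j] - x[i]                 # reference bond
            if abs(xi) > horizon:
                continue                     # nonlocal, but finite range
            eta = u[j] - u[i]                # relative displacement
            stretch = (abs(xi + eta) - abs(xi)) / abs(xi)
            f[i] += c * stretch * np.sign(xi + eta)
    return f

# Uniform stretch of a bar: every bond has the same stretch, so
# interior points (whose horizon is fully inside the bar) feel
# zero net force, while the ends are pulled inward/outward.
x = np.linspace(0.0, 1.0, 11)
u = 0.01 * x                                 # linear displacement field
f = peridynamic_forces(x, u, horizon=0.35, c=1.0)
```

The O(n^2) loop mirrors the structure of a molecular dynamics force kernel, which is why a meshless peridynamic discretization fits naturally inside an MD code.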
Contact: Michael Parks, mlparks@sandia.gov
Sandia System Software Researchers Receive SC'08 Best Paper Nomination
A paper authored by Sandians Kurt Ferreira and Ron Brightwell together with Patrick Bridges from the University of New Mexico has been nominated for the Best Paper and Best Student Paper awards at the upcoming SC08 conference in Austin, Texas. The paper was one of four such papers selected from a total of 59 accepted papers. The paper, entitled "Characterizing Application Sensitivity to OS Interference Using Kernel-Level Noise Injection," is a detailed study of how operating system activity can impact the performance of parallel applications on very large-scale machines. In this study, funded under DOE/ASCR's FAST-OS research program, the authors describe how they have extended the Catamount lightweight compute node operating system to allow for creating various kinds of artificial noise to provide more insight into the important characteristics of applications and operating systems that influence performance and scalability.
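The reason OS "noise" matters at scale can be seen in a toy bulk-synchronous model. This sketch is not the Catamount noise-injection framework, which works at the kernel level; the probabilities and timings below are invented for illustration:

```python
import random

def bsp_time(nprocs, nsteps, work, noise_prob, noise_len, seed=0):
    """Toy model of OS-noise amplification in a parallel application.

    Each step ends in a barrier, so one delayed process delays all:
    the step cost is the *maximum* per-process time. This is why
    rare, short OS interruptions hurt more as the machine grows --
    the chance that *some* rank is interrupted approaches 1.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(nsteps):
        step = 0.0
        for _ in range(nprocs):
            t = work
            if rng.random() < noise_prob:
                t += noise_len            # an OS interruption on this rank
            step = max(step, t)           # the barrier waits for the slowest
        total += step
    return total

# Same per-rank noise rate, very different aggregate impact:
small = bsp_time(16,   1000, work=1.0, noise_prob=0.01, noise_len=0.5)
large = bsp_time(1024, 1000, work=1.0, noise_prob=0.01, noise_len=0.5)
```

With 1,024 ranks, nearly every step is slowed by some rank's interruption, so total runtime inflates far more than the 1% per-rank noise rate would suggest.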
Contact: Ron Brightwell, rbbrigh@sandia.gov
Sandia AMR Researchers Present on Reduced Order Modeling
At the SIAM annual meeting in San Diego in July, Sandia researchers Khachik Sargsyan, Bert Debusschere and Habib Najm presented their work on predictability and reduced order modeling in stochastic reaction networks. The talk described the use of polynomial chaos expansions with Karhunen-Loeve decompositions of stochastic processes to represent parametric as well as intrinsic uncertainties in stochastic dynamical systems. Domain and data decomposition methods such as adaptive multi-resolution analysis and clustering were used to improve the robustness of the spectral representations. The methodology was demonstrated on bistable systems of biophysical interest and showed the ability to capture bimodal distributions and their dependence on model parameters.
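As a hedged illustration of the polynomial chaos idea (not the Sandia work itself, which addresses stochastic reaction networks with Karhunen-Loeve decompositions), a random variable that is a function of a standard normal germ can be expanded in probabilists' Hermite polynomials; the example function and quadrature order here are invented for the sketch:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

def pce_coeffs(f, order, nquad=40):
    """Polynomial chaos coefficients of f(xi), xi ~ N(0,1), in the
    probabilists' Hermite basis He_k:

        c_k = E[f(xi) * He_k(xi)] / k!

    (the He_k satisfy E[He_j * He_k] = k! * delta_jk). Expectations
    are computed by Gauss-Hermite quadrature.
    """
    x, w = He.hermegauss(nquad)          # nodes/weights for exp(-x^2/2)
    w = w / np.sqrt(2.0 * np.pi)         # renormalize to the N(0,1) density
    return np.array([
        np.sum(w * f(x) * He.hermeval(x, [0.0] * k + [1.0]))
        / math.factorial(k)
        for k in range(order + 1)
    ])

# Example: expand exp(xi). Analytically c_k = exp(1/2) / k!, so the
# zeroth coefficient (the mean of the random variable) is exp(0.5).
c = pce_coeffs(np.exp, order=6)
```

Once the coefficients are known, moments and parametric sensitivities of the uncertain quantity follow from cheap algebra on the spectral expansion rather than from repeated sampling.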
Contact: Bert Debusschere, bjdebus@sandia.gov
ASCR Report Released on The Mathematics of the Analysis of Petascale Data
On June 3-5, 2008, ASCR's AMR program held a workshop to define a research agenda for the mathematical techniques needed to meet the analysis challenges posed by petascale data sets. The workshop report presents nine specific findings, each motivated by integrated consideration of the application domain challenges brought out at the meeting. The final report was submitted to the DOE ASCR Applied Mathematics Program on August 4, and a summary of the findings was briefed to the DOE Advanced Scientific Computing Research Advisory Committee on August 5. The report is available for download from the ASCR Applied Math Web site.
Contact: Philip Kegelmeyer, wpk@sandia.gov
LBNL's "A Computer for the Clouds" Is Discussed in IEEE Spectrum
The August issue of IEEE Spectrum, the flagship publication of the IEEE, features a story titled "A Computer for the Clouds" about Michael Wehner, Lenny Oliker and John Shalf's climate computer project (nicknamed the Green Flash, although that term is not used in the magazine). The article discusses the Berkeley Lab team's project in the contexts of energy efficiency, special-purpose scientific computers, and climate science, noting that "it should be able to remedy today's inability to model clouds well enough to tell whether their net effect is to warm the world or cool it." The article includes quotations from Wehner and Horst Simon, as well as Jack Dongarra playing devil's advocate. The story is available at http://www.spectrum.ieee.org/aug08/6461
Contact: Jon Bashor, jbashor@lbl.gov
INCITE Groundwater Simulation Addresses Challenges of Carbon Sequestration
One proposal for mitigating the effect of coal power on the earth's climate involves separating carbon dioxide (CO2) from power plant emissions and pumping it deep underground, where it can remain indefinitely, dissolved in the groundwater or converted into solid carbonate minerals. A team of researchers led by Peter Lichtner of Los Alamos National Laboratory is using the National Center for Computational Sciences' Jaguar supercomputer to simulate this process, known as carbon sequestration, searching for ways to maximize the benefits and avoid potential drawbacks. Using Jaguar, the team has been able to conduct the largest groundwater simulations to date, pursuing its research with an application known as PFLOTRAN.
The team is studying a process known as "fingering" that increases the surface area between the CO2 and the surrounding groundwater, thereby minimizing potential pitfalls by accelerating the dissolution of the CO2. "The problem is that we're talking about injecting huge amounts of CO2 by volume," Lichtner explained. "If you were injecting it into a deep saline aquifer, for example, you would initially have to displace the brine that was present, and then the question is, 'Where does that go?' It's a race against time how rapidly this CO2 will dissipate."
Contact: Jayson Hines, hinesjb@ornl.gov
Sandia Releases I/O Trace Data
The research community has long desired traces at large scale for real applications, as synthetic benchmarks lack the fidelity and credibility of actual traces. As part of the Petascale Data Storage Institute (PDSI), Sandia researcher Lee Ward has released input/output (I/O) system-call trace data from two representative runs of Sandia's ALEGRA shock and multiphysics simulation code suite. The two runs, performed on the Sandia/NNSA RedStorm supercomputer, captured information about four checkpoint dumps, run logs, and terminal I/O. The two runs used 2,744 nodes and 5,832 virtual nodes, respectively. Trace data and a short paper describing the format and the environment from which the data were obtained may be found at www.pdsi-scidac.org/research
Contact: Lee Ward, lee@sandia.gov
ORNL Researchers Establish Universal Model for Carbon-Based Supercapacitors
A team led by Vincent Meunier and Bobby Sumpter of Oak Ridge National Laboratory (ORNL) has used ORNL supercomputers to improve the understanding of the processes governing the anomalous increase of normalized capacitance in nanometer-size nanoporous carbon. The team examined the details of the charge transfer and energetics of a variety of electrolytes in carbon nanopores, which were modeled as perfect cylindrical objects. The findings indicate that the experimentally observed exponential increase in capacitance for pores smaller than a nanometer is related to the adsorption of desolvated ions (as opposed to ions dressed with a solvent shell) inside pores that are too small to accommodate the dressed ions.
The findings were reported in two recent papers (J. Huang, B.G. Sumpter, and V. Meunier, Angew. Chem. 120, 3440 (2008); ibid Chem. Eur. J. 14, 6014 (2008)) and are expected to open a new technological route for ultracapacitors based on computer design and atomistic modeling. (Contact: Vincent Meunier, meunierv@ornl.gov)
Contact: Jayson Hines, hinesjb@ornl.gov
PNNL Research Shedding Light on Complex Reactions at Interfaces
Using computational resources from INCITE and Pacific Northwest National Laboratory's NWICE, researchers are shedding light on complex chemical reactions at interfaces. The new insights have applications in the development of novel materials as well as the design of better chemical processes and catalysts. The PNNL computational approach sought to resolve a mystery that had eluded experimentalists: where hydroxide ions are located as molecules move from the liquid to the gaseous state. This new understanding has far-reaching implications for chemistry that can occur at hydrophobic interfaces important to chemical transformations. Using umbrella sampling as the statistical mechanical sampling method in conjunction with density functional theory interaction potentials, the researchers performed some of the largest calculations of this kind to date.
Density functional theory provided the compromise between speed and accuracy needed to help clarify chemical phenomena in heterogeneous environments. The researchers' simulations are clarifying the free-energy surface of the reactions that a hydroxide anion can undergo as a function of interfacial depth. Initial results indicate that the hydroxide anion behaves as a traditional anion; near the liquid-vapor interface, it resembles a delocalized anion. This new information will be used to draw conclusions about the whereabouts of the hydroxide anion and the implications for our understanding of chemical reactions at hydrophobic interfaces and the pH of these interfaces.
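The umbrella-sampling idea behind these free-energy calculations can be sketched in one dimension. This is a toy Metropolis Monte Carlo example with an invented double-well potential, not the PNNL calculation (which used density functional theory interaction potentials); the spring constant, temperature, and window center are illustrative assumptions:

```python
import math
import random

def metropolis(steps, beta, bias_center=None, k_bias=0.0, seed=1):
    """Metropolis sampling of a 1-D double well U(x) = (x^2 - 1)^2,
    optionally with a harmonic umbrella bias 0.5*k*(x - x0)^2 that
    restrains the walker near a chosen window (e.g. the barrier top),
    forcing it to sample regions that are rarely visited unbiased.
    """
    rng = random.Random(seed)
    U = lambda x: (x * x - 1.0) ** 2
    bias = lambda x: (0.5 * k_bias * (x - bias_center) ** 2
                      if bias_center is not None else 0.0)
    x, samples = -1.0, []
    for _ in range(steps):
        xn = x + rng.uniform(-0.2, 0.2)
        dE = (U(xn) + bias(xn)) - (U(x) + bias(x))
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            x = xn
        samples.append(x)
    return samples

# Unbiased, the walker stays trapped in the left well; a window
# biased at the barrier (x = 0) samples it thoroughly instead.
plain = metropolis(20000, beta=10.0)
window = metropolis(20000, beta=10.0, bias_center=0.0, k_bias=50.0)
barrier_hits = sum(1 for x in window if abs(x) < 0.2)
```

In production use, many overlapping windows are run along the reaction coordinate (here, interfacial depth) and their biased histograms are reweighted and stitched together into a single free-energy profile.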
Contact: Chris Mundy, chris.mundy@pnl.gov
LANL Develops New Parameterization of Small Scales in Rotating, Stratified Flows
An important goal for realistic computations of multiscale systems such as the ocean, atmosphere or climate is accurate parameterization of complex physics in terms of simplified representations, such as linear equations with coefficients determined from theory, experiments and/or observations. Such parameterizations can then be incorporated into models. For example, in meteorological models, large-scale weather patterns evolve based on parameterization of small-scale turbulence. LANL scientist Susan Kurien obtained a new theoretical prediction for the distribution of kinetic and potential energy in the turbulent small scales of rapidly rotating and stratified flow, without assuming quasi-geostrophy. The latter is a commonly used approximation that filters fast wave motions from the parent equations. Such an approximation is often convenient but over-simplified, because real physical systems such as the ocean and atmosphere exhibit strong coupling between the fast waves and the coherent large scales (jets, fronts, vortices and layers) over long time scales. The main result is that kinetic energy is suppressed in the small horizontal scales, while potential energy is suppressed in the small vertical scales. These constraints arise from studying the statistics of a key conserved quantity, the potential enstrophy. High-resolution numerical simulations of rotating and stratified Boussinesq flows were performed in collaboration with Mark Taylor (Sandia) and Beth Wingate (LANL) to test and verify the theoretical predictions. The results provide proper parameterization of small scales that are too expensive to calculate explicitly in realistic simulations of ocean/atmosphere dynamics. This work highlights the importance of proper parameterization of fast dynamics for accurate simulation of long-time weather prediction, ocean circulation and climate change. The work will appear in Europhysics Letters.
Contact: Susan Kurien, skurien@lanl.gov


Google's Steve Cotter to Head ESnet
Steve Cotter, who has 10 years of experience in designing and deploying research and commercial networks at the national and international scale, has been named as the new head of ESnet, the Department of Energy's high-speed network supporting science around the world. Cotter, who most recently served as Google's network deployment manager for Europe, the Middle East and Africa, assumes his new job on Friday, August 29. ESnet, or the Energy Sciences Network, is managed by Lawrence Berkeley National Laboratory for the Department of Energy. ESnet provides direct connections to more than 40 DOE sites, as well as fast interconnections to more than 100 other networks. Funded principally by DOE's Office of Science, ESnet services allow scientists to make effective use of unique DOE research facilities and computing resources, independent of time and geographic location.
Before joining Google in 2007, Cotter worked for Internet2, a high performance network serving more than 300 institutions in the research and education community in the U.S. Since 2006, ESnet and Internet2 have worked as partners in building ESnet's next-generation infrastructure. Cotter succeeds Bill Johnston, who is retiring from Berkeley Lab after more than 35 years.
Contact: Jon Bashor, jbashor@lbl.gov
Council on Competitiveness' Suzy Tichenor Joins ORNL as Business Liaison
Suzy Tichenor, recently of the Washington-based Council on Competitiveness, has joined ORNL's Computing and Computational Sciences Directorate as director of its Industrial Partnership Program. At the council, she served as vice president and director of its High-Performance Computing (HPC) Initiative. Tichenor has more than 20 years' experience creating partnerships and programs within government, the private sector, and not-for-profit organizations. Her appointment underscores ORNL's commitment to working with industry; she will be the principal interface between the ORNL computing organization and industry.
"Suzy has a unique understanding of how industry is using high-performance computing to drive innovation and productivity," said Thomas Zacharia, ORNL associate laboratory director for computing and computational sciences. "She will bring tremendous energy and focus to our industrial outreach activities."
Contact: Jayson Hines, hinesjb@ornl.gov
Book on Multilevel Block Factorization Preconditioners by LLNL Author Released
Panayot Vassilevski's new book, entitled "Multilevel Block Factorization Preconditioners: Matrix-based Analysis and Algorithms for Solving Finite Element Equations," was released early in August. This monograph is the first to provide a comprehensive, self-contained and rigorous presentation of some of the most powerful preconditioning methods for solving finite element equations in a common block-matrix factorization framework. The broad array of topics covered ranges from classical incomplete block-factorization preconditioners to highly efficient techniques such as multigrid, algebraic multigrid and domain decomposition. Panayot Vassilevski is a senior researcher and part of the hypre research team at Lawrence Livermore National Laboratory. His technical contributions have been developed, in part, with support from the ASCR office and have been widely used to impact a number of DOE applications in both SciDAC and the NNSA. Information about the book can be found at the following link: http://www.springer.com/math/algebra/book/978-0-387-71563-6
PNNL's Xin Sun Recognized for Exceptional Engineering Achievement
Creativity and versatility are the hallmarks of Dr. Xin Sun's research at Pacific Northwest National Laboratory, and they earned her the 2008 Laboratory Director's Award for Exceptional Engineering Achievement. The PNNL award honors Sun's development of advances in modeling and processes in materials and materials joining, leading to improvements in the automobile industry and other arenas. Sun's technical achievements have influenced the U.S. automobile industry, particularly in the understanding of weld strengths and the welding process. One of her colleagues observes that her technical achievements are directly reflected in General Motors' new generation of vehicles. These advances in working with lightweight materials support the national energy efficiency mission of DOE.
Sun, who received her Ph.D. from the University of Michigan in Ann Arbor in 1995, also has been a primary contributor to the development of modeling and simulation capabilities for solid oxide fuel cells (SOFCs) that are recognized globally by scientists and engineers in academia and industry. The multi-physics (MP) modeling capabilities are packaged in a software tool called SOFC-MP, now a commercial product used by leading fuel cell developers.
Anders Petersson Gives Invited Topical Presentation at 2008 SIAM Annual Meeting
Anders Petersson of Lawrence Livermore National Laboratory gave one of only 20 invited presentations at the SIAM annual meeting held July 7-11, 2008. His talk focused on the large-scale simulation of earthquakes using new techniques that solve the elastic wave equations in the time domain. The mathematics and numerical methods that he has developed have been funded, in large part, through the ASCR base Applied Math program. His calculations were run on the large-scale computing resources at LLNL and used computational grids with up to 4 billion grid points. The annual meeting is the flagship conference for SIAM and attracted more than 1,100 attendees in 2008.
Contact: Erich Strohmaier, EStrohmaier@lbl.gov


Upgraded Jaguar's Transition to Operations Features Six Pioneering Applications
Six select software applications have been running pioneering "science-at-scale" simulations on high-performance computers at Oak Ridge National Laboratory. The simulations, carried out at the Oak Ridge Leadership Computing Facility (OLCF), employ most or all of the processing cores of the center's flagship system, a Cray XT4 supercomputer called Jaguar that was upgraded in May to perform 263 trillion calculations per second (263 teraflops). The upgrade increased Jaguar's ability to further contribute to the Department of Energy's Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program.
Running computationally demanding software applications after a major machine upgrade is part of a transition-to-operations activity, dubbed T2O, that allocates up to 4.5 million hours to each application that can concurrently use the majority of Jaguar's 31,000 processing cores. When a commissioned OLCF system has passed a formal acceptance test, it immediately enters the T2O phase, during which time its performance is monitored and assessed. One of the first researchers to put Jaguar to the test this spring was Jacqueline Chen of Sandia National Laboratories, whose simulations of ethylene, a hydrocarbon fuel, required 4.5 million hours running on 30,000 processors and generated more than 50 terabytes of data, more than five times as much data as is contained in the printed contents of the U.S. Library of Congress.
Contact: Jayson Hines, hinesjb@ornl.gov
New Supercomputer at PNNL Positioning for Prime Time
Researchers at EMSL, the Environmental Molecular Sciences Laboratory national user facility at Pacific Northwest National Laboratory, are awaiting the delivery and installation of the final piece of a new state-of-the-art supercomputer. Chinook, a $24 million HP-developed supercomputer that is replacing the current system in EMSL, is targeted for full operation this fall. Chinook will support chemistry, biology, and environmental computation and is expected to have a total peak performance of about 163 teraflops.
HP began delivery of Chinook to EMSL in March, and additional pieces of the system were delivered in July. The remaining components of Chinook are expected to be delivered to EMSL at the end of August. As of August 1, the portion of Chinook that was delivered and installed in March has fully absorbed the old system's workload, and the new system is running nearly all the science codes that the older system had previously run.
Contact: Tom McKenna, tom.mckenna@pnl.gov and Kevin Regimbal, kevin.regimbal@pnl.gov
NERSC Adds New Measure to Boost Security
In order to further enhance its defenses against cyber attacks while still maintaining an open scientific environment for its 3,000 users, NERSC is installing SSH daemons capable of monitoring the keystrokes of incoming traffic. The system has already been installed on Franklin, the 19,344-processor Cray XT4 supercomputer. The same layer of protection will be added to NERSC's other computing resources in August and September. The SSH daemon will provide monitoring information to Bro, LBNL's intrusion detection system, which automatically blocks suspicious traffic and notifies security personnel.
Contact: Jon Bashor, jbashor@lbl.gov


Third Town Hall Meeting on Cybersecurity Activities Scheduled
The CyberSecurity Grass Roots Community is hosting a Third Town Hall Meeting in Washington, D.C. this fall. The community is working to develop a science-driven, proactive, innovative R&D agenda for DOE to address the overall national concern about cyber threats. The fall meeting will be held October 22-24, building on the work begun in previous town hall meetings, with an emphasis on security research and development strategy. For more information, please contact Deb Frincke (PNNL), Ed Talbot (Sandia National Laboratories), Brian Worley (Oak Ridge National Laboratory), or Charlie Catlett (Argonne), or see the wiki at https://wiki.cac.washington.edu/display/doe/Home. Interested individuals are also invited to join the LinkedIn group "Transforming Cyber Security Through Science" at http://www.linkedin.com/e/gis/120418/0006E4433488
ALCF Hosts Open House for Argonne, DOE, University of Chicago Employees
Approximately 350 Argonne, U.S. Department of Energy, and University of Chicago employees attended an open house hosted by the Argonne Leadership Computing Facility (ALCF) on August 14. The ALCF is home to the world's fastest open science computer, Intrepid, IBM's next-generation Blue Gene/P system, with a peak speed of 557 teraflops. The open house provided employees with an overview of the ALCF and a tour of the supercomputing support facility.
Contact: Chel Lancaster, lancastr@alcf.anl.gov
LBNL Hosts French Delegation Researching Strategic HPC
On Wednesday, Aug. 27, Berkeley Lab Computing Sciences hosted a daylong visit by members of the Strategic Council for HPC, established by the French Ministry of Research to advise the French government on investments and research programs in supercomputing. University of Paris Prof. Olivier Pironneau, who is the president of the Strategic Council, and council member Prof. Michel Kern and Laura Grigori of the INRIA Research Center also visited several universities in the U.S. this summer to learn about cutting edge projects in software development for high performance computing. In addition to technical presentations on LBNL's research in scientific data management, algorithm development and computer architectures, the visitors also sought detailed information on issues ranging from staffing to budgeting to collaborations with the UC Berkeley campus.
Contact: Jon Bashor, jbashor@lbl.gov
ALCF Workshops Focus on Petascale Resources, Open Source Model
Twenty-six attendees learned about the petascale resources available to them at the July 29-31 Leap to Petascale Workshop held at the Argonne Leadership Computing Facility (ALCF). During the workshop, IBM and ALCF performance engineers helped participants scale and tune their applications on the Blue Gene/P computer. Noted one participant, "Being able to interact and work on our problems with the technical staff was the best and unique opportunity of this workshop." Sponsors included the ALCF, Blue Gene Consortium, and Argonne National Laboratory.
The Blue Gene Consortium Open Source Workshop held August 12-13 at Argonne gave consortium members an understanding of the BG/P open source community organization and business model. It covered ongoing and potential research activities, described IBM's involvement, and allowed participants to brainstorm on future activities. Sponsors included the BG Consortium, Argonne Mathematics and Computer Science Division, ALCF, and IBM.
Contact: Chel Lancaster, lancastr@alcf.anl.gov
PNNL Hosts Summer School on Mathematics and Computational Methods
Twenty graduate students from across the nation attended the third annual DOE Summer School in Multiscale Mathematics and High Performance Computing. The summer school aims to help spur continuing advances in multiscale mathematics and high-performance computing, areas that are critical to solving complex problems in the environment, energy, and national security.
The 2008 event was hosted by Pacific Northwest National Laboratory and held on the campus of Washington State University Tri-Cities August 4-6. The aim of this year's summer school was to provide introductions to the mathematical and computational methods commonly used to model physical systems at various scales (continuum, discrete, statistical, and network methods); surveys of developing mathematical and computational approaches for bridging the scales between current methods; tutorials on parallel computing and MPI; and research talks to motivate the development of new methods. Faculty for this year's event came from PNNL, Washington State University, Oregon State University, Princeton University, Interactive Supercomputing, and Galois, Inc.
Summer Student Symposium at Argonne National Laboratory
Seven students who spent the summer in the Mathematics and Computer Science Division at Argonne National Laboratory presented their work at a Summer Student Symposium in August.
The students spent approximately 10 weeks working closely with computer scientists in Argonne's Laboratory for Advanced Numerical Simulations on projects related to Argonne's research programs in numerical computing and computational mathematics. They developed software tools, numerical solvers, and computational techniques in areas ranging from optimization and partial differential equations to input/output methods for advanced computers.
Two of the students were undergraduates funded by the division or by the Department of Energy Science Undergraduate Laboratory Internships program:
  • Heather Cole-Mullen (University of Chicago) – Implementing Automatic Differentiation Tools in the NEOS Server for Optimization
  • Kyle Schmitt (University of Wisconsin) – Two Creative and Expeditious Variations of Traditional Gaussian Processes
and five were graduate students funded by the division or the Givens Associates program:
  • Nawab Ali (Ohio State University) – Rethinking I/O in High-Performance Computing Environments
  • Joseph Reed (University of California, San Diego) – Benchmarking Pattern-Search and Nelder-Mead Optimization Algorithms
  • Sean Farley (Louisiana State University) – Enabling PETSc Preconditioners in BOUT Code for Edge Plasma Modeling
  • Mustafa Kilinc (University of Wisconsin) – ASTROS: Active-Set Trust-Region Optimization Solvers
  • Yuchen Wu (Northwestern University) – Reduced Space Quasi Newton Methods for PDE Constrained Optimization
Sven Leyffer, organizer of the symposium, said that the presentations provided an excellent opportunity for the students to showcase the results of their scientific investigations, and praised the students for the high quality of their research and presentations.
History of Fermilab Computing and Networking Showcased in Web Timeline
A timeline covering the last 32 years of computing and networking as seen from the Fermilab vantage point was posted in a zoomable web format at this link. Using the controls at the bottom center of the image, viewers can zoom in to view five separate timelines (Computing/Technology, Networking, Data/Storage, Facilities, and the Experimental Scientific Program) spanning 1975 to 2007.
The Fermilab Computing Division exhibited the timeline in a 3D physical spiral form at last November's SC07 conference (see http://tinyurl.com/sc07spiral). The spiral, suitably updated, will reside in the Fermilab booth at SC08, to be held November 15-21 in Austin, Texas.
NERSC Hosts Visit by Swiss Center Manager in Exchange of Ideas, Expertise
Ladina Gilly, who wears a number of hats at CSCS, the Swiss National Supercomputing Centre, including HR Manager, Events Manager, and Head of Facilities Management, spent more than three weeks at NERSC in July and August. During her stay, Gilly met with a number of NERSC group leads, human resources staff, and others. As she is also responsible for the communications program at CSCS, she talked with NERSC PR staff as well. Among the topics she was most interested in were energy efficiency, recruiting new staff, and more effectively communicating the center's achievements to funding organizations. CSCS is looking to build a new facility within the next few years.
Contact: Jon Bashor, jbashor@lbl.gov
OLCF Hosts IBM Blue Gene/P Workshop
The Oak Ridge Leadership Computing Facility (OLCF) recently hosted an educational workshop on the IBM Blue Gene supercomputing platform. Besides presenting information on the Blue Gene architecture and instructing researchers on ways to use the platform most effectively, the workshop offered hands-on access to the OLCF's IBM Blue Gene/P. Dubbed "Eugene," the system features 8,192 compute cores and a peak performance of more than 27 teraflops. Researchers were given the opportunity to port their codes to the new architecture and run them.
Nearly 30 researchers attended from ORNL and core participating universities, including Duke University, the University of Virginia, and Vanderbilt University. The architect of the Blue Gene/P, Rajiv Bendale, also visited the workshop and shared his expertise. The Blue Gene/P represents the latest advance in the Blue Gene supercomputing platform, most notable for its scalability (the hardware is optimized to scale to thousands of cores) and energy efficiency. The workshop took place July 29-31.
Contact: Jayson Hines, hinesjb@ornl.gov
Students at ORNL Build "Supercomputer"
Engaging young people in math and science is critical to America's technological and economic future. This is exactly the goal of the Appalachian Regional Commission's Math, Science, and Technology Institute, held at Oak Ridge National Laboratory from July 12-25. The event brings students and teachers from all over Appalachia to Oak Ridge for an up-close look at scientific research. This year the students and teachers were divided into 12 teams, and each team was given a specific research problem to investigate.
Five students worked closely with Bobby Whitten and Mitchell Griffith of the National Center for Computational Sciences in a research project titled "Build a Supercomputer!" The project involved working as a team and learning basic networking, UNIX, programming, parallel programming, and MPI. The team was asked to build a supercomputer from five Mac Minis and determine in which year the resulting system would have been the fastest in the world. The work involved creating a local area network, installing software, writing a "Hello World" parallel program, running the program, and benchmarking the assembled system. The theoretical peak performance of the combined Minis was 13 gigaflops, but because one of the Minis failed repeatedly during benchmarking, the measured performance was only 5.8 gigaflops. At that level, the team concluded, the system would still have been the fastest machine in the world in 1990.
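The parallel "Hello World" step lends itself to a short illustration. The sketch below is not the students' MPI program (which is not reproduced here); it mimics the same pattern, each process announcing its rank, using Python's standard-library multiprocessing module so it runs without an MPI installation. The default size of 5 stands in for the five Mac Minis.

```python
# A minimal parallel "hello world" sketch: one worker process per
# "node," each reporting its rank, much as an MPI process reports
# its MPI_Comm_rank out of MPI_Comm_size.
from multiprocessing import Pool

def hello(rank, size):
    """Return this worker's greeting, identifying it by rank."""
    return f"Hello from node {rank} of {size}"

def run_cluster(size=5):
    """Launch one worker per 'node' and collect the greetings in rank order."""
    with Pool(processes=size) as pool:
        return pool.starmap(hello, [(rank, size) for rank in range(size)])

if __name__ == "__main__":
    for line in run_cluster():
        print(line)
```

With MPI proper, the equivalent program calls MPI_Comm_rank and MPI_Comm_size and prints the result, and is launched across the nodes with a command such as mpirun -np 5.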
Contact: Jayson Hines, hinesjb@ornl.gov