ASCR Monthly Computing News Report - July 2009

LBNL’s Cecilia Aragon Honored with Presidential Early Career Award

LBNL Computing Sciences’ Cecilia Aragon was among the 100 researchers named by President Barack Obama to receive the prestigious Presidential Early Career Award for Scientists and Engineers (PECASE), the highest honor bestowed by the United States government on early-career researchers. The award recognized Aragon’s groundbreaking research in data-intensive scientific workflow management and her pioneering development of innovative methods for the visualization, analysis, and organization of massive scientific data sets. Together with two other young scientists from LBNL, Aragon will receive her award in the fall at a White House ceremony.

In announcing the awards, President Obama said: “These extraordinarily gifted young scientists and engineers represent the best in our country. With their talent, creativity, and dedication, I am confident that they will lead their fields in new breakthroughs and discoveries and help us use science and technology to lift up our nation and our world.”

PNNL’s Alexandre Tartakovsky Wins Presidential Early Career Award

Alexandre Tartakovsky, a computational mathematician at Pacific Northwest National Laboratory, garnered a Presidential Early Career Award for Scientists and Engineers (PECASE). The award honors Tartakovsky’s research on subsurface flow that addresses past and future energy needs: cleaning up buried nuclear or toxic contaminants and storing carbon dioxide from fossil fuels underground. Tartakovsky is an acknowledged leader in the field of computational mathematics for subsurface flow and transport in heterogeneous media.

Tartakovsky was recognized for his work to understand how contaminants move through the subsurface, that subterranean environment made of rocks, air, liquids like water or oil, and bacteria. Ultimately, such work will help reduce the impacts that nuclear and fossil fuel energy use have on the environment. Tartakovsky develops mathematical models to help researchers clean up nuclear or toxic contaminants from past practices or help future waste managers store carbon in the subsurface.

CSGF Alum Oliver Fringer Receives Presidential Early Career Award

Oliver Fringer, a DOE Computational Science Graduate Fellowship fellow from 1997-2001, was named a recipient of the 2009 Presidential Early Career Award for Scientists and Engineers (PECASE). Currently an assistant professor of civil and environmental engineering at Stanford University, Fringer’s area of study is parallel coastal ocean modeling. He earned his Ph.D. in civil and environmental engineering from Stanford in 2003, his master’s in aeronautics and astronautics in 1996, also from Stanford, and his B.S.E. in mechanical and aerospace engineering from Princeton University in 1995.

At Stanford, Fringer’s research group focuses on the application of numerical models and parallel computing to the study of laboratory- and field-scale environmental flows. They work on the development and implementation of the parallel, unstructured-grid, Navier-Stokes solver SUNTANS, with application to internal wave dynamics, coastal ocean circulation, and sediment transport. They also employ an adaptive mesh refinement Navier-Stokes code and a parallel large-eddy simulation Navier-Stokes code to study turbulent flow physics related to sediment transport, internal waves and mixing, and coherent structures in rivers and estuaries.

Contact. Oliver Fringer, fringer@stanford.edu
LLNL-Led ROSE Software Wins an R&D 100 Award

The ROSE group, led by Dr. Daniel J. Quinlan at Lawrence Livermore National Laboratory (LLNL), has received a 2009 R&D 100 Award for its software, ROSE: Making Compiler Technology Accessible to All Programmers. ROSE is an open source compiler infrastructure for building source-to-source program transformation and analysis tools for large-scale Fortran 77/95/2003, C, C++, OpenMP, and UPC applications. Its intended users range from experienced compiler researchers to library and tool developers with minimal compiler experience. ROSE is particularly well suited for building custom tools for static analysis, program optimization, arbitrary program transformation, domain-specific optimizations, complex loop optimizations, performance analysis, and cyber-security.
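ROSE itself exposes a C++ API for manipulating an abstract syntax tree (AST). As a language-neutral sketch of the source-to-source idea only (this uses Python’s standard ast module, not ROSE’s interface), the toy transformation below rewrites x ** 2 into x * x:

```python
import ast

class SquareToMultiply(ast.NodeTransformer):
    """Toy source-to-source pass: rewrite x ** 2 as x * x."""
    def visit_BinOp(self, node):
        self.generic_visit(node)  # transform children first
        if (isinstance(node.op, ast.Pow)
                and isinstance(node.right, ast.Constant)
                and node.right.value == 2):
            return ast.BinOp(left=node.left, op=ast.Mult(),
                             right=ast.copy_location(node.left, node))
        return node

source = "y = x ** 2 + 3"
tree = ast.fix_missing_locations(SquareToMultiply().visit(ast.parse(source)))
print(ast.unparse(tree))  # y = x * x + 3
```

A real ROSE tool works the same way in spirit: parse the program to an AST, apply a transformation pass, and unparse the result back to compilable source.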

Argonne Researchers Receive R&D 100 Award for PETSc Software

Researchers from Argonne National Laboratory’s Mathematics and Computer Science Division received an R&D 100 award for PETSc, high-performance software for engineering and science. Judged by R&D magazine, the awards recognize the top scientific and technological innovations of the past year. Argonne scientists have won 105 R&D 100 awards since the awards were first presented in 1964.

PETSc is designed to allow engineers and scientists to perform large-scale numerical simulations of physical phenomena rapidly and efficiently. These simulations allow the effects of design decisions to be evaluated and compared, including cost benefits, safety concerns, and environmental impact. The ability to perform simulations allows corporations and governmental agencies to replace costly and dangerous experiments and prototypes. Simulations have led to many new products as well as improvements in existing products.
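PETSc is a C library (with Fortran and Python bindings) built around scalable Krylov subspace solvers and preconditioners. As a rough, self-contained sketch of the kind of iterative solver it provides (this is plain NumPy, not PETSc’s actual API), here is a minimal conjugate gradient method applied to a small 1D Poisson problem:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A by conjugate gradients."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# 1D Poisson matrix: tridiagonal [-1, 2, -1]
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = conjugate_gradient(A, b)
print(np.allclose(A @ x, b))  # True
```

What PETSc adds beyond this sketch is precisely what makes it award-worthy: parallel sparse matrix formats, a large suite of preconditioners, and scalability to billions of unknowns.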

The principal developers of PETSc are Satish Balay, Argonne senior software developer; Kris Buschelmann, former Argonne software developer; Lisandro Daniel Dalcin, post doctoral researcher with Consejo Nacional de Investigaciones Cientificas y Tecnicas; Victor Eijkhout, University of Texas at Austin research scientist; William Gropp, University of Illinois at Urbana- Champaign professor; Dmitry Karpeev, Argonne assistant computational mathematician; Dinesh Kaushik, Argonne computational scientist; Matthew Knepley, Argonne assistant scientist; Lois Curfman McInnes, Argonne computational scientist; Barry Smith, Argonne senior computational mathematician; and Hong Zhang, Illinois Institute of Technology research associate professor.

Contact. Cheryl Drugan, cdrugan@mcs.anl.gov
Sandia Researchers Win R&D 100 Award for Catamount Lightweight Kernel

The Catamount multi-core operating system developed by Suzanne Kelly, John Van Dyke, and Courtenay Vaughan at Sandia National Laboratories has been recognized by R&D Magazine as one of the 100 most outstanding advances in applied technologies for 2008. The Catamount N-Way (CNW) lightweight kernel is an operating system that leverages hardware capabilities of multicore processors to deliver significant improvements in data access performance for today’s parallel computing applications. The multi-core version of the Catamount lightweight kernel, funded by DOE/ASCR, contains a novel address mapping strategy that allows processes running on a multi-core processor to easily and efficiently read and write each other’s memory. This scheme, called SMARTMAP, enables significant performance enhancements in intra-node data sharing for the majority of today’s parallel computing applications. The CNW software is licensed to Cray, Inc., at a non-disclosed price. The project is supported by ASCR and builds on work supported by the NNSA-ASC program.
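SMARTMAP achieves this sharing at the kernel level by mapping each core’s process into the others’ address spaces. Python’s multiprocessing.shared_memory module gives a rough user-level feel for the idea (an analogy only, not SMARTMAP’s actual mechanism): one process writes directly into a memory region another process reads, with no message passing or copying in between.

```python
from multiprocessing import Process, shared_memory

def worker(name):
    # Attach to the block another process created and write into it
    # directly; no send/receive or intermediate copy is involved.
    shm = shared_memory.SharedMemory(name=name)
    shm.buf[:5] = b"hello"
    shm.close()

def main():
    shm = shared_memory.SharedMemory(create=True, size=16)
    p = Process(target=worker, args=(shm.name,))
    p.start()
    p.join()
    data = bytes(shm.buf[:5])  # read what the other process wrote
    shm.close()
    shm.unlink()
    return data

if __name__ == "__main__":
    print(main())  # b'hello'
```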



Oak Ridge Supercomputers Provide First Simulation of Abrupt Climate Change

Scientists at the University of Wisconsin and the National Center for Atmospheric Research (NCAR) are using supercomputers at Oak Ridge National Laboratory (ORNL) to simulate abrupt climate change and shed new light on an enigmatic period of natural global warming in Earth’s relatively recent history. The work is featured in the July 17 issue of the journal Science and provides valuable new data about the causes and effects of global climate change. In Earth’s 4.5-billion-year history, its climate has oscillated between hot and cold. Today our world is relatively cool, resting between ice ages. Variations in planetary orbit, solar output, and volcanic eruptions all change Earth’s temperature. Since the Industrial Revolution, however, humans have probably warmed the world faster than nature has. The greenhouse gases we generate by burning fossil fuels and forests will raise the average global temperature 2 to 12 degrees Fahrenheit (1 to 6 degrees Celsius) this century, the Intergovernmental Panel on Climate Change (IPCC) estimates.

The team ran simulations on the Cray X1E supercomputer “Phoenix” and the Cray XT system “Jaguar” at the Oak Ridge Leadership Computing Facility. The scientists used nearly a million processor hours in 2008 to run one-third of their simulation, from 21,000 years ago - the most recent glacial maximum - to 14,000 years ago - the planet’s most recent major period of natural global warming. With 4 million INCITE processor hours allocated on Jaguar for 2009, 2010, and 2011, they will complete the simulation, capturing climate from 14,000 years ago to the present and projecting it 200 years into the future. This research is funded by the Office of Biological and Environmental Research within DOE’s Office of Science and by the National Science Foundation through its paleoclimate program and support of NCAR.


Berkeley Lab Staff Co-Author Hottest Article in Journal

ScienceDirect has compiled the Top 25 Hottest Articles (most downloaded) for the January to March issues of the journal Parallel Computing, and number one on the list is “Optimization of sparse matrix–vector multiplication on emerging multicore platforms,” co-authored by five researchers at Lawrence Berkeley National Laboratory (LBNL): Samuel Williams (Computational Research Division [CRD]), Leonid Oliker (CRD/NERSC), John Shalf (NERSC), Kathy Yelick (NERSC), and Jim Demmel (CRD), together with Richard Vuduc of Georgia Tech.

In their abstract, the authors wrote, “We are witnessing a dramatic change in computer architecture due to the multicore paradigm shift, as every electronic device from cell phones to supercomputers confronts parallelism of unprecedented scale. To fully unleash the potential of these systems, the HPC community must develop multicore specific-optimization methodologies for important scientific computations. In this work, we examine sparse matrix–vector multiply (SpMV) — one of the most heavily used kernels in scientific computing — across a broad spectrum of multicore designs. Our experimental platform includes the homogeneous AMD quad-core, AMD dual-core, and Intel quad-core designs, the heterogeneous STI Cell, as well as one of the first scientific studies of the highly multithreaded Sun Victoria Falls (a Niagara2 SMP). We present several optimization strategies especially effective for the multicore environment, and demonstrate significant performance improvements compared to existing state-of-the-art serial and parallel SpMV implementations. Additionally, we present key insights into the architectural trade-offs of leading multicore design strategies, in the context of demanding memory-bound numerical algorithms.”
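The SpMV kernel the paper studies is easy to state but notoriously memory-bound. A baseline compressed sparse row (CSR) implementation looks like the sketch below; the paper’s multicore optimizations (register and cache blocking, thread affinity, prefetching) are layered on top of this loop and are not shown:

```python
import numpy as np

def spmv_csr(values, col_idx, row_ptr, x):
    """y = A @ x for A stored in compressed sparse row (CSR) format."""
    n = len(row_ptr) - 1
    y = np.zeros(n)
    for i in range(n):
        # Nonzeros of row i live in values[row_ptr[i]:row_ptr[i+1]].
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# A = [[4, 0, 1],
#      [0, 3, 0],
#      [2, 0, 5]]
values  = np.array([4.0, 1.0, 3.0, 2.0, 5.0])
col_idx = np.array([0, 2, 1, 0, 2])
row_ptr = np.array([0, 2, 3, 5])
x = np.array([1.0, 1.0, 1.0])
print(spmv_csr(values, col_idx, row_ptr, x))  # [5. 3. 7.]
```

The indirect access x[col_idx[k]] is what makes SpMV memory-bandwidth-limited, and why the multicore architectures compared in the paper differ so sharply on it.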

Oliker also co-authored the number five article, “Revolutionary technologies for acceleration of emerging petascale applications.”

Contact. Jon Bashor, jbashor@lbl.gov
LANL’s New Monotone Finite Volume Method for Advection Diffusion Problems

LANL researchers D. Svyatskiy and K. Lipnikov, in collaboration with Yu. Vassilevski (Institute of Numerical Mathematics, Russia), have developed a new finite volume method for advection-diffusion problems. The ideal finite volume method should be locally conservative, satisfy the discrete maximum principle, be at least second-order accurate on smooth solutions, result in a sparse system with the minimal number of non-zero unknowns, and be robust for unstructured polygonal meshes and tensorial media properties. The new method has all of the above properties except the discrete maximum principle. Instead, the method preserves positivity of continuum solutions in both anisotropic diffusion-dominated and advection-dominated regimes, which is a major advance over the capabilities of existing linear schemes. Solution positivity, crucial for contaminant transport in porous media, is achieved by using a nonlinear two-point flux approximation method.
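In a two-point flux approximation, the flux through the face between two cells depends only on the two adjacent cell values. Schematically, and in our notation rather than the paper’s, the nonlinear variant writes the face flux between cells i and j as

```latex
F_{ij} = M_{ij}(C)\, C_i - M_{ji}(C)\, C_j,
\qquad M_{ij}(C) \ge 0, \quad M_{ji}(C) \ge 0,
```

where C denotes the cell-centered solution. Because the coefficients are nonnegative and depend on the solution itself, the resulting nonlinear scheme can be arranged so that nonnegative data produce nonnegative discrete solutions, which is the positivity property described above.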

The new method was presented at the 2009 SIAM Conference on Mathematical and Computational Issues in the Geosciences held in Leipzig, Germany. The mini-symposium on monotone methods for advection-diffusion problems organized by D. Svyatskiy and Yu. Vassilevski attracted great interest among academic and industrial participants. A manuscript describing details of the method was submitted to the Journal of Computational Physics. This work was done as part of the ASCR Applied Mathematics Research Project, “Mimetic Finite Difference Methods for Partial Differential Equations.”

Konstantin Lipnikov, lipnikov@lanl.gov
Daniil Svyatskiy, dasvyat@lanl.gov
Mikhail Shashkov, shashkov@lanl.gov
TAU Delivers Scalable Performance for the Magnetohydrodynamics Code MHD Turb

Magnetohydrodynamics (MHD) is used to study large-scale plasma turbulence in the solar wind, accretion disks, and the interstellar medium. MHD researchers at the University of Wisconsin and the University of Chicago require the high-end performance of leadership-class systems to achieve their science objectives. They have developed the MHD Turb application and are targeting it for large-scale execution on over 100,000 processing cores. Unfortunately, significant inefficiencies due to interactions among the application code, libraries, and communication had been observed even at smaller processor counts, threatening MHD Turb’s performance goals at higher parallelism. The TAU performance system was able to expose the most serious execution bottlenecks in 8,192-core runs of MHD Turb, allowing developers to see where key MHD Turb routines were degraded and to focus changes in library usage to improve their performance. With TAU, the performance of these routines was improved by over 200 percent, and total MHD Turb execution time was reduced by 46 percent. More importantly, the MHD Turb developers are now optimistic that the application will achieve its scaling goals.

Contact. Allen D. Malony, malony@cs.uoregon.edu
PNNL Research on HPC CCA Event Service Published in Computational Journal

An article by Pacific Northwest National Laboratory (PNNL) scientists Ian Gorton, Daniel Chavarria and the late Jarek Nieplocha on the “Design and implementation of a high-performance CCA event service” was published in June 2009 in the journal Concurrency and Computation: Practice and Experience. In this paper, the researchers describe their implementation, experimental evaluation, and initial experience with a high-performance Common Component Architecture (CCA) event service that exploits efficient communications mechanisms commonly used on HPC platforms. The research is part of the Center for Component Technology for Terascale Simulation Software (CCTTSS) project. The CCTTSS, funded by DOE as an Integrated Software Infrastructure Center (ISIC) under the SciDAC program, is dedicated to the development of a component-based software development model suitable for the needs of high-performance scientific simulation, particularly the CCA. The center includes members from Argonne, Livermore, Los Alamos, Oak Ridge, Pacific Northwest, and Sandia national laboratories, Indiana University and the University of Utah.
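At its core, an event service is a publish/subscribe hub that decouples event producers from consumers. The minimal in-process sketch below shows the pattern only; the PNNL implementation maps it onto the efficient communication layers of HPC platforms, which this toy does not attempt.

```python
from collections import defaultdict

class EventService:
    """Toy publish/subscribe hub: components register listeners per topic."""
    def __init__(self):
        self._listeners = defaultdict(list)

    def subscribe(self, topic, callback):
        self._listeners[topic].append(callback)

    def publish(self, topic, event):
        # Deliver the event to every subscriber of this topic.
        for callback in self._listeners[topic]:
            callback(event)

received = []
svc = EventService()
svc.subscribe("checkpoint.done", received.append)
svc.publish("checkpoint.done", {"step": 100})
print(received)  # [{'step': 100}]
```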

Contact. Ian Gorton, ian.gorton@pnl.gov
NERSC Helps Scientists Develop Fast Way to Determine Protein Structure

Scientists at Lawrence Berkeley National Laboratory have developed a fast and efficient way to determine the structure of proteins, shortening a process that often takes years into a matter of days. The high-throughput protein pipeline could allow scientists to expedite the development of biofuels, decipher how extremophiles thrive in conditions that kill most organisms, and better understand how proteins carry out life’s vital functions. Their work is published in the July 20 online edition of the journal Nature Methods.

The team developed the protein pipeline at the Advanced Light Source (ALS), a national user facility located at Berkeley Lab that generates intense light for scientific research. At a beamline called SIBYLS, they used a technique called small angle x-ray scattering (SAXS), which can image a protein in its natural state, such as in a solution, and at a spatial resolution of about 10 angstroms, which is small enough to determine a protein’s three-dimensional shape. To maximize speed, the team installed a robot that automatically pipettes protein samples into position so they can be analyzed by x-ray scattering. To analyze the resulting data, they used the supercomputing resources of the U.S. Department of Energy’s National Energy Research Scientific Computing Center (NERSC), which is based at Berkeley Lab. The supercomputer’s clusters can churn through data for 20 proteins per week, or more than 1,000 macromolecules per year.

More information can be found at: http://newscenter.lbl.gov/press-releases/2009/07/20/fast-protein-structures/

Fusion Gets Faster at OLCF

Few codes require faster I/O or scale better than today’s fusion particle codes. GTC and XGC-1, for instance, are running on more than 120,000 cores on the Oak Ridge Leadership Computing Facility’s (OLCF’s) Jaguar Cray XT5 supercomputer. “These are the largest runs with the largest datasets,” said Scott Klasky of the OLCF and the SciDAC Scientific Data Management Center, “and they are at the extreme bleeding edge of scalability and I/O.” Thanks to Klasky and a diverse team of collaborators, GTC recently became twice as fast. This twofold speedup, said Klasky, was achieved not just for an ideal benchmark case but for an actual production simulation. This impressive performance is the result of cross-discipline collaborations that have led to significant software and middleware improvements.

These advances are the result of software enhancements by Cray Inc. and a combined team effort of physicists (Y. Xiao and Z. Lin of the University of California–Irvine and S. Ethier of Princeton Plasma Physics Laboratory), vendors (N. Wichmann of Cray and M. Booth of Sun Microsystems), and computational scientists (S. Hodson, S. Klasky, Q. Liu, and N. Podhorszki of Oak Ridge National Laboratory [ORNL]; H. Abbasi, J. Lofstead, K. Schwan, M. Wolf, and F. Zheng of Georgia Tech; and C. Docan and M. Parashar of Rutgers). The various technical improvements include a new Cray compiler, optimizations to the code itself, and further I/O enhancements to ADIOS, an I/O middleware package created by Klasky and collaborators at Georgia Tech and Rutgers.

Contact. Jayson Hines, hinesjb@ornl.gov
ORNL Researchers Use GPUs to Accelerate Performance of Simulation Code

In a recent article in the journal Parallel Computing, ORNL researchers Jeremy S. Meredith, Gonzalo Alvarez, Thomas A. Maier, Thomas C. Schulthess, and Jeffrey S. Vetter show how they have accelerated the Quantum Monte Carlo simulation code DCA++ using graphics processing units (GPUs) as general-purpose computational devices (also known as GP-GPUs). While initially designed for real-time rendering, their high performance and relatively low cost make GPUs a desirable target for scientific computation. Recent efforts in the community have been addressing the programming challenges, with new languages such as CUDA and OpenCL being widely adopted. However, rendering, the original task of GPUs, has traditionally kept accuracy as a secondary goal, and sacrifices have sometimes been made as a result. In fact, much deployed GPU hardware is only capable of single precision arithmetic, and even this accuracy is not always equivalent to that of a commodity CPU.

In this paper, the team investigated the accuracy and performance characteristics of GPUs on DCA++, including results from a preproduction double precision-capable GPU. They then accelerated the full DCA++ application while concurrently investigating its tolerance to the different levels of arithmetic precision available on GPUs. The results show that while DCA++ has some sensitivity to arithmetic precision, the single-precision GPU results were comparable to single-precision CPU results. Acceleration of the code on a fully GPU-enabled cluster showed that any remaining inaccuracy in GPU precision was negligible; sufficient accuracy was retained for scientifically meaningful results while still showing significant speedups. The full parallel runs on the GPU cluster were five times faster than on commodity microprocessors alone. Read the paper at this link: http://dx.doi.org/10.1016/j.parco.2008.12.004
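The precision question the team measured is easy to reproduce on any machine: single precision carries roughly 7 decimal digits versus double precision’s nearly 16, so small contributions can vanish entirely in float32 arithmetic. A tiny NumPy illustration (not the DCA++ code):

```python
import numpy as np

big64 = np.float64(1e8)
big32 = np.float32(1e8)

# Double precision still resolves adding 1 to 1e8...
print(big64 + 1.0 == big64)              # False
# ...but in single precision the spacing between adjacent floats
# near 1e8 is 8, so the +1 is rounded away entirely.
print(big32 + np.float32(1.0) == big32)  # True
```

Whether losses like this matter depends on the algorithm, which is exactly the tolerance question the DCA++ study answered empirically.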



PNNL Subsurface Expert Tim Scheibe Garners Distinguished Lecturer Award

Tim Scheibe, a senior scientist at Pacific Northwest National Laboratory, has been selected as the 2010 Henry Darcy Distinguished Lecturer in Ground Water Science. He was invited by the National Ground Water Research and Educational Foundation to spend next year lecturing at colleges and universities to educate and create interest in groundwater science and technology. The lecture series has reached more than 70,000 students, faculty members, and professionals since 1987. Scheibe is currently working on simulating flow, transport, and biogeochemical processes in the Hanford Site subsurface related to contaminant transport in groundwater systems. He is involved in ASCR-funded subsurface projects to develop a hybrid multiscale subsurface reactive transport modeling framework that integrates models with diverse representations of physics, chemistry, and biology at different scales.

Contact. Sue Chin, sue.chin@pnl.gov
Jeff Broughton Named as New Head of NERSC Systems Department

Jeffrey M. Broughton, who has 30 years of HPC and management experience, has accepted the position of Systems Department Head at the Department of Energy’s National Energy Research Scientific Computing Center. Broughton, who most recently served as senior director of engineering at QLogic Corp., joins NERSC on Monday, August 3.

Broughton’s career includes nine years at Lawrence Livermore National Laboratory, where he served as both a project leader and a group leader in computing. He also spent ten years at Amdahl Corporation, where he worked in both computer architecture development and marketing. During a two-year stint at Sun Microsystems, he was awarded five system architecture patents and played a key role in developing a massively parallel system architecture for Sun. Broughton was recruited by the startup firm PathScale Inc. in 2001 and helped build an organization of 50 employees to develop cluster computer systems. In 2005, he won the HPCwire award for “Most Significant New HPC Software Product for 2005” for delivering a commercially viable compiler based on open source technology. In 2006, PathScale was acquired by QLogic, and Broughton continued to lead the hardware and software organization for InfiniBand-related products.

Contact. Jon Bashor, jbashor@lbl.gov
LANL’s Cory Hauck Presents Work on New Kinetic Closures

Cory Hauck, a member of the Computational Physics and Methods Group at Los Alamos, has given several invited talks on a new optimization-based closure for linear kinetic equations. These include presentations at Oak Ridge National Laboratory and the University of South Carolina Mathematics Department in February and at the Institute for Pure and Applied Mathematics (IPAM) in March. This work was also presented by a coauthor (Ryan McClarren, Texas A&M) at the SIAM Annual Meeting in July. Hauck also gave an invited talk on entropy-based closures in gas dynamics at the IPAM workshop on the Boltzmann Equation in April. This work was done as part of the ASCR Applied Mathematics Research Project, “Mimetic Finite Difference Methods for Partial Differential Equations.”

Contact. Cory D. Hauck, cdhauck@lanl.gov or
Mikhail Shashkov, shashkov@lanl.gov
LLNL’s Lori Diachin Co-Chairs the SIAM 2009 Annual Meeting

SIAM’s annual meeting provides a broad view of the state of the art in applied mathematics, computational science, and their applications through invited presentations, prize lectures, minisymposia, and contributed papers and posters. This year the topic areas spanned the themes of computational science and engineering, mathematical modeling of the life sciences, optimization, supercomputing, nonlinear waves, and multiscale math, among others. Over 900 mathematicians from around the world attended the meeting this year, which was held in Denver on July 6-10, 2009. Invited speakers included DOE-funded researchers Lois McInnes (ANL), Juan Meza (LBNL), Cindy Phillips (SNL), and Russ Caflisch (UCLA).

For more information, see the following link: http://www.siam.org/meetings/an09/

NERSC’s Kathy Yelick Gives Talk on Multicore Software Challenges

Kathy Yelick, director of NERSC and an expert in programming languages, was one of two invited speakers in the well-attended session on “Multicore: More Moore or Multi Trouble” at the International Supercomputing Conference held June 23-26 in Hamburg, Germany. While Yelick addressed software issues, Prof. Yale Patt of the University of Texas at Austin gave his perspective on hardware issues. The session was chaired by Erich Strohmaier, head of Berkeley Lab’s Future Technologies Group.

Green HPC: LBNL’s Horst Simon Helps Sift through the Hype

Green HPC: a look beyond the hype is a new, eight-part podcast series from insideHPC that examines green initiatives from all sides of the HPC ecosystem. In episode 1, “Sifting through the hype,” LBNL Associate Laboratory Director Horst Simon joins Wu-chun Feng of the Green500, Wilf Pinfold of Intel, and Dan Reed of Microsoft Research to discuss how the Green HPC conversation has evolved and why energy consumption is an issue everyone should be concerned about.



NERSC’s Franklin Supercomputer Upgraded to Double Its Scientific Capability

The National Energy Research Scientific Computing (NERSC) Center has officially accepted a series of upgrades to its Cray XT4 supercomputer, providing the facility’s 3,000 users with twice as many processor cores and an expanded file system for scientific research. NERSC is located at Lawrence Berkeley National Laboratory.

In a strategic effort to maintain scientific productivity, the upgrades were implemented in phases. The quad-core processor and memory upgrade was done by partitioning Franklin and performing the upgrade and testing on one part while the rest of the system remained available to users. A later upgrade increased the file system capacity to 460 terabytes and tripled the speed at which data is moved in and out of the system. As a result of these upgrade efforts, the amount of available computing time roughly doubled for scientists studying everything from global climate change to atomic nuclei. The final Franklin system has a theoretical peak performance of 355 teraflop/s, three and a half times that of the original system. The increase in peak performance comes from doubling the number of cores, doubling the number of floating-point operations per clock cycle, and a modest drop in the clock rate. With the upgrade, Franklin was ranked number 11 on the latest edition of the TOP500 list of the world’s top supercomputers.
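The quoted 3.5x figure is consistent with the three factors listed. A back-of-the-envelope check, using dual-core 2.6 GHz to quad-core 2.3 GHz as assumed processor figures for illustration:

```python
# Peak flops = sockets x cores/socket x flops/cycle x clock rate.
core_factor  = 4 / 2        # dual-core -> quad-core
flops_factor = 4 / 2        # floating-point ops per clock cycle doubled
clock_factor = 2.3 / 2.6    # modest clock-rate drop (assumed values)
print(round(core_factor * flops_factor * clock_factor, 2))  # 3.54
```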

Spider File System Spinning Connections to All ORNL Platforms

Spider, the world’s biggest Lustre-based, centerwide file system, has been fully tested to support the Oak Ridge Leadership Computing Facility’s petascale Cray Jaguar supercomputer and is now offering early access to scientists. With 10.7 petabytes of disk space and the ability to move data at more than 240 gigabytes a second, Spider becomes the single file system for all major simulation platforms at ORNL, providing both peak performance and global accessibility.

Ultimately, Spider will connect to all of ORNL’s existing and future supercomputing platforms as well as off-site platforms across the country via GridFTP (a protocol that transports large data files), making data files accessible from any site in the system. To date, Spider has demonstrated stability on the XT5 and XT4 partitions of Jaguar, on Smoky (the center’s development cluster), and on Lens (the center’s visualization and data analysis cluster), with more than 26,000 compute nodes (clients) mounting the file system and performing I/O.

OLCF’s Jaguar Remains Fastest Computer for Open Science

Jaguar, the OLCF’s Cray XT5, remains the world’s fastest supercomputer for unclassified research, according to the 33rd edition of the TOP500 list released June 23 at the International Supercomputing Conference in Hamburg, Germany. The TOP500 list named four machines at the ORNL computing complex among the world’s 25 fastest; all told, five Oak Ridge machines made the list. Twice a year, the TOP500 list (www.top500.org) ranks high-performance computing systems on their speed in running High-Performance Linpack, a software code that solves a dense matrix of linear algebra equations. Only two machines have reached calculating speeds exceeding a petaflops, or a quadrillion floating point operations per second. With a peak speed of 1.382 petaflops, the Jaguar XT5 ranked No. 2 on the TOP500 list. The XT5 is part of a larger Cray system, also called Jaguar, that includes a 263-teraflops (trillion floating point operations per second) XT4 component that ranked number 12.

Another Oak Ridge machine making the TOP500 is Kraken, an XT5 component of a Cray system belonging to the National Institute for Computational Sciences and the University of Tennessee. Kraken XT5 ranked No. 6 to become the world’s fastest academic machine. Additional Oak Ridge machines to make the list are the NICS-UT XT4 component, called Athena and ranked number 21, and ORNL’s Eugene, an IBM Blue Gene/P system that ranked number 247.

Contact. Jayson Hines, hinesjb@ornl.gov
Increase in I/O Bandwidth to Enhance Future Understanding of Climate Change

Researchers at Pacific Northwest National Laboratory, working in a Science Application Partnership funded under DOE’s SciDAC program in collaboration with researchers at NERSC, Argonne, and Cray, recently achieved an effective aggregate I/O bandwidth of 5 gigabytes per second for writing output from a global atmospheric model to shared files on DOE’s Franklin supercomputer, located at the NERSC facility. This is an important milestone in the development of a high-performance Global Cloud Resolving Model (GCRM) code being written at Colorado State University under a project led by Professor David Randall. This bandwidth represents the minimum value required to write data fast enough that I/O does not constitute a significant overhead to running the GCRM model. It also represents a significant fraction of the available bandwidth on Franklin and is a good indication that much higher values can be achieved on ORNL’s Jaguar computer, which has more available I/O bandwidth and is targeted as a major platform for running the GCRM. Higher performance will allow researchers to run at higher resolution, and therefore achieve higher accuracy, and will also enable simulations representing longer periods of time, both of which are crucial to understanding future climate change. These high I/O rates were achieved while still writing to shared files using a data format common in the climate modeling community, which will enable many other researchers to make use of the data.
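The “minimum value required” framing is just a ratio of data volume to compute interval: I/O overhead is negligible only if write time is a small fraction of the simulation time between outputs. A sketch of the arithmetic (the dump size and interval below are illustrative assumptions, not the GCRM’s actual output volume):

```python
def io_overhead(output_gb, bandwidth_gb_s, compute_interval_s):
    """Fraction of wall time spent writing one output dump."""
    write_time = output_gb / bandwidth_gb_s
    return write_time / (write_time + compute_interval_s)

# e.g. a hypothetical 100 GB dump every 10 minutes at 5 GB/s:
print(round(io_overhead(100, 5, 600), 3))  # 0.032
```

Doubling the bandwidth halves the write time, which is why the headroom on Jaguar matters for higher-resolution runs that produce larger dumps.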

Contact. Karen Schuchardt, karen.schuchardt@pnl.gov or
Bruce Pallmer, bruce.pallmer@pnl.gov


Argonne’s World-Class Computing Capability Showcased at the CSGF Annual Conference

DOE Computational Science Graduate Fellows are part of an innovative group learning to solve problems outside traditional boundaries. An annual conference makes it possible for these leaders to get together, share ideas, support one another, and discover the research opportunities at DOE laboratories.

This year’s Computational Science Graduate Fellowship (CSGF) Conference took place on July 14-16 in Washington, DC (http://www2.krellinst.org/csgf/conf/2009/). Ray Bair, Chief Computational Scientist at Argonne National Laboratory, showcased Argonne’s computational science research efforts in biological and environmental science, astrophysics, advanced software, energy storage, nanoscience, nuclear energy, materials science and transportation at the DOE Laboratory Poster Session. Others from Argonne who participated in the poster session included Lois Curfman McInnes, Computational Scientist, Mathematics and Computer Science Division (MCS); Paul Messina, Director of Science, Argonne Leadership Computing Facility (ALCF); Ray Loy, Senior Software Developer, MCS and Scientific Applications Engineer, ALCF; Mihai Anitescu, Computational Mathematician, MCS; and Stefan Wild, Postdoctoral Appointee, MCS and former CSGF Fellow. Jeff Hammond, Argonne Scholar, DPF at the ALCF and former CSGF Fellow, presented “Developing polarizable force fields from ab initio calculations and the role of quantum chemical benchmarking in the age of petascale” at the Fellows Poster Session.

Argonne staff also alerted attendees to employment opportunities for postdocs at the lab. Argonne has been ranked as the 13th Best Place for Postdocs to Work by The Scientist magazine.

Contact. Ray Bair, rbair@anl.gov
FLASH Center Researchers Learning More about Chombo AMR

Members of the ASC Flash Center team (University of Chicago’s Center for Astrophysical Thermonuclear Flashes) met with members of ANAG (Applied Numerical Algorithms Group) at Berkeley Lab on July 20–22. The FLASH Center team is trying to better understand how compact stars like white dwarfs explode into supernovae. As they work to replicate the explosion in simulations, they discussed how to use their code with Chombo, ANAG’s framework for adaptive mesh refinement. The FLASH team’s collaboration with Berkeley Lab dates back to 2004, when they ran 2D simulations on NERSC supercomputers, and continued in 2007, when they ran 3D simulations of star explosions.

ESnet and Internet2 Hold Summer Joint Techs Meeting

The Summer 2009 ESnet/Internet2 Joint Techs meeting and ESnet Site Coordinating Committee (ESCC) meeting was held at Indiana University in Indianapolis July 19-23. This international conference of networking engineers focused on security, campus networking, and issues in connecting regional optical networks to campuses, and included several tutorials and an IPv6 Challenge Demo.

Several ESnet staff gave presentations at the ESCC meeting:

  • Steve Cotter: ESnet Update
  • Kevin Oberman: DNSsec Deployment Guidance Document
  • Joe Metzger: perfSONAR Deployment Guidelines
  • Evangelos Chaniotakis: OSCARS & SDN How-to
  • Joe Burrescia: Revised ESnet Services Document
NERSC Hosts Workshop on HPSS at the Extreme Scale

On July 14–15, NERSC’s Mass Storage Group (MSG) hosted a DOE workshop on HPSS at the Extreme Scale, with the executive and technical committees of the HPSS collaboration participating. The goal was to forecast how HPSS needs to evolve to meet the needs of DOE researchers in 2018, which is expected to be the dawn of the extreme-scale computing era.

MSG Group Lead Jason Hick, Harry Hulen from IBM, and Dick Watson from Lawrence Livermore National Lab prepared a 51-page whitepaper to get the discussion started. The whitepaper contained input from DOE labs currently using HPSS, industry storage hardware trends, and an independent hierarchical storage management (HSM) software market survey. A report on the workshop’s findings will be forthcoming.

LBNL’s Tony Drummond Gives Keynote at CMMSE’09

Tony Drummond of Berkeley Lab’s Scientific Computing Group gave a keynote talk at the 9th International Conference on Computational and Mathematical Methods in Science and Engineering (CMMSE’09), held from June 30 to July 3, 2009 in Gijón, Asturias, Spain. The title of his talk was “An approach to sustainable exascale computing.” Drummond also spoke in one of the sessions on “Evaluation of two parallel high performance numerical schemes applied to the solution of two-group neutron diffusion equation,” co-authored with O. Flores, V. Vidal and G. Verdu. In his talk, Drummond presented results from his collaborations with the Departments of Nuclear Engineering and High Performance Computing at the Polytechnic University of Valencia.

Berkeley Lab Hosts 25 Summer Students from UC Berkeley Outreach Program

Juan Meza and Cecilia Aragon of LBNL’s Computational Research Division gave introductory presentations on career opportunities at DOE labs to 25 students spending the summer at UC Berkeley. The students spent eight weeks at Cal under the Summer Undergraduate Program in Engineering Research at Berkeley (SUPERB), part of the UC Berkeley College of Engineering Center for Underrepresented Engineering Students. After hearing from the computational scientists, the students toured the Advanced Light Source.

“Juan and Cecilia’s talks were sensational and the two scientists, Ina and Tom, who gave the ALS tour were very articulate and engaging,” wrote Sheila Humphreys, director of diversity for the Department of Electrical Engineering and Computer Sciences. “Just a very quick word to convey our thanks, and to say it was, from our end, a HUGE success and we are very grateful.”

PNNL Researcher Discusses Need for Energy-Efficient HPC Systems in Webinar

Andres Marquez, technical manager of the Energy Smart Data Center at Pacific Northwest National Laboratory, was one of the expert panelists featured in a webinar on energy efficiency in a session titled “Pumping Up Power Efficiency.” The June 18 webinar was sponsored by Scientific Computing magazine and focused on the increasing need for energy-efficient high performance computing systems. Today’s researchers and engineers demand computing platforms that make better use of energy without sacrificing the processing speed they require to be competitive. New data center technologies will reduce power consumption per resource used and will improve power provisioning and thermal management through novel monitoring and control capabilities. “Pumping Up Power Efficiency” is now available on demand from either http://www.advantagebusinessmedia.com/ims/SC/energy/default.htm or http://www.scientificcomputing.com/webcast-Pumping-Up-Power-Efficiency-060209.aspx. You will be asked to register.

Contact. Andres Marquez, andres.marquez@pnl.gov