ASCR Monthly Computing News Report - February 2010

In this issue...


INCITE Program Awards Supercomputing Time at Argonne, Oak Ridge to 69 Projects
Sixty-nine research projects have been allocated a total of approximately 1.6 billion supercomputing processor hours through the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. The INCITE program provides powerful resources to enable scientists and engineers to conduct cutting-edge research in just weeks or months rather than the years or decades needed previously. This facilitates scientific breakthroughs in areas such as climate change, alternative energy, life sciences and materials science.
The 69 projects, selected on the basis of peer review and computational readiness evaluations of their potential to advance scientific discovery, were awarded time at DOE’s Leadership Computing Facilities at Argonne National Laboratory in Illinois and Oak Ridge National Laboratory in Tennessee. Of these, 45 received allocations on the Oak Ridge system and 35 were allocated time at Argonne (a small number of projects received time on both systems).
NERSC Provides a Computational Science Approach for Analyzing Culture
Inspired by scientists who have long used computers to transform simulations and experimental data into multi-dimensional models that can then be dissected and analyzed, cultural analytics applies similar techniques to cultural data. With an allocation on the Department of Energy's National Energy Research Scientific Computing Center's (NERSC) supercomputers and help from the facility's analytics team, researchers from the University of California, San Diego (UCSD) recently illustrated changing trends in media and design across the 20th and 21st centuries via Time magazine covers and Google logos.
The explosive growth of cultural content on the web, including social media, together with digitization efforts by museums, libraries and companies, makes possible a fundamentally new paradigm for the study of both contemporary and historical cultures, according to Lev Manovich, Director of the Software Studies Initiative at UCSD.
Manovich's research, called “Visualizing Patterns in Databases of Cultural Images and Video,” is one of three projects currently participating in the Humanities High Performance Computing Program, an initiative that gives humanities researchers access to some of the world’s most powerful supercomputers, typically reserved for cutting-edge scientific research. The program was established in 2008 as a unique collaboration between DOE and the National Endowment for the Humanities. Read more at the following link:
LLNL Releases Version 2.0 of the WPP Wave Propagation Software
Anders Petersson, Bjorn Sjogreen and the rest of the WPP team at Lawrence Livermore National Laboratory have released version 2.0 of the wave propagation simulation software. WPP implements substantial capabilities for 3D seismic modeling, with a free surface condition on the top boundary, non-reflecting far-field boundary conditions on the other boundaries, point force and point moment tensor source terms with many predefined time dependencies and a fully 3D heterogeneous material model specification. Significant advances in version 2.0 include free surface boundary conditions on curved topographies and Cartesian local mesh refinement near the free surface, where more resolution is often needed to resolve short wavelengths in the solution, for example in sedimentary basins. The foundational mathematics used to develop the high-order discretization schemes in WPP is supported by ASCR's applied math research program. The software has been used to simulate seismic events in Nevada, the Great San Francisco Earthquake of 1906, hydro-elastic coupling problems and electro-magnetic wave propagation problems. More information, including an 84-page reference guide and many examples, is available at the following web site.
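The free-surface treatment described above can be sketched in miniature. The toy below solves the 1D scalar wave equation with a simple second-order finite-difference scheme, a mirror-point free-surface (zero-traction) condition at one end and a first-order absorbing condition at the other. It only illustrates the boundary-condition idea, not WPP's high-order 3D discretization; all names and parameter values are invented for the example.

```python
import numpy as np

def simulate_1d_wave(nt=600, nx=200, c=1.0, dx=1.0, courant=0.5):
    """Second-order finite differences for u_tt = c^2 u_xx.

    x = 0 is a free surface (du/dx = 0, zero traction);
    the far end uses a simple first-order absorbing condition.
    Illustrative toy only -- not WPP's actual scheme.
    """
    dt = courant * dx / c          # CFL-stable time step (courant < 1)
    x = np.arange(nx) * dx
    u = np.exp(-0.01 * (x - nx * dx / 2) ** 2)  # Gaussian pulse at center
    u_prev = u.copy()              # start at rest
    r2 = (c * dt / dx) ** 2
    for _ in range(nt):
        u_next = np.empty_like(u)
        # interior update: standard 3-point Laplacian
        u_next[1:-1] = 2*u[1:-1] - u_prev[1:-1] + r2*(u[2:] - 2*u[1:-1] + u[:-2])
        # free surface at x=0: mirror ghost point enforces du/dx = 0
        u_next[0] = 2*u[0] - u_prev[0] + r2*(2*u[1] - 2*u[0])
        # first-order (one-way wave) absorbing condition at the far end
        u_next[-1] = u[-2] + (c*dt - dx)/(c*dt + dx) * (u_next[-2] - u[-1])
        u_prev, u = u, u_next
    return u

wave = simulate_1d_wave()
print(float(np.abs(wave).max()))
```

The free surface reflects the pulse with unchanged sign, while the absorbing end lets most of the energy leave the domain, mimicking a non-reflecting far-field boundary.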
OLCF Releases Highlights from First Petascale Research Conducted on Jaguar
“Science at the Petascale 2009: Pioneering Applications Point the Way,” a document detailing the first ultrascale research conducted on the Cray XT5 known as Jaguar at the Oak Ridge Leadership Computing Facility (OLCF), is now available online.
In the first half of 2009, the OLCF invited 28 of the world’s leading computational research teams to participate in a six-month program of early petascale research on the newly upgraded Jaguar. Using more than 360 million combined processor hours, these research teams tackled some of the most pressing issues in climate science, chemistry, materials science, nuclear energy, physics, bioenergy, astrophysics, geosciences, fusion and combustion. The OLCF’s primary goals for this phase of petascale early science were to not only deliver pioneering science results, but also to engage a broad community of users capable of hardening the nascent system for the nearly 40 projects allocated time on Jaguar in 2009 by the INCITE program. Several research teams using Jaguar during this early petascale phase ran the largest calculations ever performed in their field and three codes achieved sustained production performance of over one petaflop.
“Science at the Petascale 2009” highlights 20 of the 28 research teams that participated in the early petascale program. To view the document, select the following link.
ORNL Group Uses New Tool to Improve Application Performance
Collin McCurdy and Jeffrey Vetter — members of Oak Ridge National Laboratory’s (ORNL’s) Future Technologies group — have recently developed Memphis, a tool that analyzes memory access patterns in scientific applications on non-uniform memory access (NUMA) architectures.
The authors have been using Memphis to find and fix performance problems in several major DOE applications. These improvements have, so far, led to performance increases on the Cray XT5 at Oak Ridge of 23 percent for runs at scale of XGC1, and of 24 percent and 13 percent for single-node runs of CAM and HYCOM, respectively. The results will be published in April at the 2010 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS <http://ispass.org/ispass2010/>) as “Memphis: Finding and Fixing NUMA-related Performance Problems on Multi-core Platforms.”
Historically, high-end scientific applications have been largely immune to NUMA problems because of the uniform latency of memory accesses offered by earlier symmetric multiprocessing (SMP) platforms. However, current trends in microprocessor design, including on-chip memory controllers and multi-core processing, are pushing NUMA issues into small-scale systems. Several platforms, such as the AMD Istanbul and Intel Westmere, are currently NUMA between sockets, and upcoming processors will be NUMA within a socket. Memphis uses hardware performance monitoring to pinpoint memory accesses to data arrays that cause NUMA-related performance problems. The team is continuing to analyze and optimize other applications for NUMA performance problems. The paper is available at the following link.
Historic Sudbury Neutrino Observatory Data, Carried by ESnet, Lives on at NERSC
Located 6,800 feet underground in Canada's Vale Inco Creighton mine, the Sudbury Neutrino Observatory (SNO) was designed to detect neutrinos produced by fusion reactions in the Sun. Although the observatory officially "switched off" in August 2006, a copy of all the data generated for and by the experiment will live on at NERSC.
“The Department of Energy invested a lot of resources into SNO and we believe that preserving these datasets at NERSC will afford the best protection of the agency's investment,” said Alan Poon, a member of the SNO collaboration at Berkeley Lab.
According to Poon, the SNO experiment has made tremendous contributions to our understanding of neutrinos, invisible elementary particles that permeate the cosmos. Before SNO began its search, every experiment to date had detected only a fraction of the solar neutrinos predicted by detailed theories of energy production in the Sun. Results from the SNO experiment eventually revealed that the total number of neutrinos produced in the Sun is just as predicted by solar models, but that the neutrinos oscillate in transit, changing in type or "flavor" from electron neutrinos (the flavor produced in the Sun) to muon or tau neutrinos. In 2001, Science magazine identified SNO’s solution to the solar neutrino mystery as one of its 10 science breakthroughs of the year. Read more at the following link.
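The flavor change described above is commonly summarized by the standard two-flavor vacuum oscillation formula, P = 1 − sin²(2θ)·sin²(1.27 Δm² L/E), with Δm² in eV², L in km and E in GeV. The sketch below evaluates it with illustrative parameter values chosen for this example (not SNO's measurements; the solar case additionally involves matter effects, which this toy ignores).

```python
import math

def survival_probability(L_km, E_GeV, sin2_2theta, dm2_eV2):
    """Two-flavor vacuum oscillation: probability that a neutrino
    retains its original flavor after traveling L_km at energy E_GeV.
    The constant 1.27 absorbs the unit conversions (eV^2 * km / GeV)."""
    phase = 1.27 * dm2_eV2 * L_km / E_GeV
    return 1.0 - sin2_2theta * math.sin(phase) ** 2

# Illustrative (hypothetical) parameters, not fitted SNO values:
p = survival_probability(L_km=180.0, E_GeV=0.004, sin2_2theta=0.85, dm2_eV2=7.5e-5)
print(p)
```

A survival probability below 1 is the signature the text describes: some of the neutrinos produced as one flavor arrive as another.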


LBNL Staff Active at SIAM Conference on Parallel Processing
More than two dozen staff members from Berkeley Lab’s Computational Research Division and NERSC contributed to the SIAM Conference on Parallel Processing for Scientific Computing (PP10), held February 24–26 in Seattle. LBNL staff served as co-authors on more than 15 papers, participated in three mini-symposia, gave one plenary talk and presented in four other sessions. Several staff members also served as organizers. To read the full list of LBNL contributors, select the following link.
Argonne’s Pavan Balaji Co-Edits Special Issue of Computer Magazine
Pavan Balaji, an assistant computer scientist in Argonne National Laboratory’s Mathematics and Computer Science Division, and his colleague Wu-chun Feng of Virginia Tech are guest editors of the December/January special issue of the IEEE magazine Computer. Titled “Tools and Environments for Multicore and Many-Core Architectures,” the issue includes four articles that discuss emerging technology – including innovative load-balancing techniques, control engineering, source-to-source compilers and programming with managed memory hierarchies – that is enabling users to take advantage of emerging multicore and many-core architectures.
Computer, the flagship publication of the IEEE Computer Society, presents peer-reviewed information about research, trends, best practices and changes in computer science.
Contact: Gail Pieper, pieper@mcs.anl.gov
Sandia’s Karen Devine Chairs Committee for Women in Mathematics
Karen Devine of Sandia National Laboratories is chairing the Association for Women in Mathematics’ SIAM Workshop committee this year. Committee members are Carol Woodward (LLNL), Cammey Cole Manning (Meredith U.) and Andrea Bertozzi (UCLA). Together, they are organizing the AWM’s workshop at the 2010 SIAM Annual Meeting to be held in July. The AWM-SIAM committee selects outstanding female graduate students and post-docs to present their work at the SIAM Annual Meeting. All participants attend a luncheon at which they are partnered with a mentor from the mathematics community. The workshop also includes a career development minisymposium and panel session, with female leaders in the mathematics community describing their career experiences. The theme for this year’s career development minisymposium is “Success through Transitions.” The AWM-SIAM workshop is funded by DOE’s Office of Advanced Scientific Computing Research and the Computational Analysis Program at the Office of Naval Research.
Contact: Scott Collis, sscoll@sandia.gov
Berkeley Lab’s Juan Meza Selected as IEEE Computer Society Distinguished Visitor
Juan Meza of Berkeley Lab’s Computational Research Division has been selected by the IEEE Computer Society to participate in the Distinguished Visitors Program. Initiated in 1971, the Distinguished Visitors Program provides “first quality speakers serving IEEE Computer Society professional and student chapters.” Among the criteria for selection is the condition that participants be recognized authorities in their respective fields. Meza, one of 20 participants in the North American program, will serve a three-year term beginning in January 2010. More information on this program can be found at the following link:


NERSC Accepts First Phase of New Petaflop/s System
After several months of rigorous scientific testing, NERSC has accepted a 5,312-core Cray XT5 machine, called Hopper (Phase 1). Following NERSC’s tradition of naming systems for scientists, Hopper is named for Rear Admiral Grace Hopper, a pioneering computer scientist.
Innovatively built with external login nodes and an external filesystem, Hopper Phase 1 will help NERSC staff optimize the external node architecture before the second phase of the Hopper system arrives. Phase 2 will be a petascale system comprising 150,000 processor cores and built on next-generation Cray technology. Before accepting the Phase 1 Hopper system, NERSC staff encouraged all 300 science projects computing at NERSC to use the system during the pre-production period to see whether it could withstand the gamut of scientific workloads typically run at NERSC. Read more at the following link.
ESnet Installs First Equipment toward Providing 100 Gbps Testbed
The Energy Sciences Network (ESnet) has completed the first phase of installation of its new Dense Wavelength Division Multiplexing (DWDM) equipment from Infinera for its Recovery Act-funded Advanced Network Initiative (ANI) testbed. This first phase will initially be located at LBNL and will use the Infinera hardware as a layer 1–3 tabletop testbed capable of running circuits at 10 gigabits per second (Gbps), and eventually at 40 Gbps and 100 Gbps.
The ANI testbed is being built as a community resource for innovation in network research. DWDM, or dense wavelength-division multiplexing, refers to optical networking systems that can send large volumes of data over multiple wavelengths of light on a single fiber pair. ESnet chose Infinera’s DWDM equipment because it provides layer 1 VPN capability, enabling ESnet to simultaneously run multiple networks and to isolate traffic from the different research activities that will be conducted on the tabletop testbed. ESnet is seeking additional industry collaborations to extend the infrastructure and capabilities of the ANI tabletop testbed.
Contact: Wendy Tsabba, wtsabba@lbl.gov


Final Grand Challenges Workshop Focuses on Cross-Cutting Exascale Technologies
In early February, David Brown (LLNL) and Paul Messina (ANL) co-chaired the final workshop in the series of Scientific Grand Challenges Workshops, this one focused on “Cross-cutting Technologies for Computing at the Exascale.” The objective of this D.C.-area workshop was to address “co-design” of the system architecture, mathematical models and algorithms, system software and tools, programming models and scientific application codes needed to enable scientific discovery at the exascale.
The workshop drew 148 participants, representing expertise in all of these cross-cutting areas. Following plenary presentations by Andrew White (LANL), Sudip Dosanjh (SNL), Phil Colella (LBNL), Rick Stevens (ANL), Jack Dongarra (U. Tenn.), Vivek Sarkar (Rice) and Brad Chamberlain (Cray), three rounds of six simultaneous breakout sessions mixed members from each of these communities in different ways in what was arguably a first experiment in “co-design.” A Letter Report identifying findings and priority research directions for future architecture and software development, mathematical models and algorithms, systems software and tools, programming models and environments and the co-design process itself will appear shortly. David Brown will also present the preliminary findings of the workshop at the next ASCAC meeting in late March. For more information on the ASCR Scientific Grand Challenges Workshop Series, select the following link.
Workshop Focuses on Issues in Statistical and Topological Analysis of Petascale Data
Topological and statistical modeling methods are typically used separately to analyze a variety of scientific data. Topological methods can provide robust, combinatorial algorithms for detecting, segmenting and tracking local features such as critical points. Statistical modeling methods can provide sophisticated techniques to understand global properties of data.
While their strengths are complementary, both methods have their drawbacks, which are exacerbated by the size and complexity of petascale data. On January 18, 2010, a group of computer scientists and mathematicians from Sandia National Laboratories, Texas A&M University, the University of Utah and Lawrence Livermore National Laboratory convened in Livermore, California, to discuss methods for leveraging the orthogonal and complementary analyses provided by statistical and topological methods to address both the size and complexity of petascale data. This was the first in a series of quarterly workshops focusing on the design and implementation of new algorithms that will allow scientists to identify and characterize intermittent, non-local events that occur only in small space-time regions of a simulation, a hallmark of several DOE application domains including climate modeling, fusion, turbulent combustion and astrophysics simulations. The next workshop will be hosted in Utah and is scheduled for April 19, 2010.
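As a deliberately simplified illustration of the combinatorial feature detection mentioned above, the sketch below classifies interior points of a gridded 2D scalar field as local minima or maxima by comparing each sample with its eight neighbors. Real topological analysis builds richer structures (such as Morse complexes); the function name and synthetic test field here are invented for the example.

```python
import numpy as np

def find_critical_points(field):
    """Classify interior grid points of a 2D scalar field as strict
    local minima or maxima by comparison with their 8 neighbors.
    A simplified combinatorial test, not a full Morse-theory analysis."""
    minima, maxima = [], []
    rows, cols = field.shape
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            nbrs = [field[i + di, j + dj]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di, dj) != (0, 0)]
            v = field[i, j]
            if all(v < n for n in nbrs):
                minima.append((i, j))
            elif all(v > n for n in nbrs):
                maxima.append((i, j))
    return minima, maxima

# Synthetic paraboloid with a single known minimum at the grid center
y, x = np.mgrid[-3:3:25j, -3:3:25j]
field = x**2 + y**2
minima, maxima = find_critical_points(field)
print(minima, maxima)
```

At petascale, such per-point tests are the easy part; the challenge the workshop addresses is combining them robustly with statistical models across enormous, noisy datasets.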
Contact: Philippe Pébay, pppebay@sandia.gov
Computational Powerhouses Collaborate on Six-Core Workshop
Two ORNL-based computing facilities joined NERSC to sponsor a Cray XT5 workshop February 1-3 at the University of California, Berkeley. Staff from the Department of Energy’s OLCF and the University of Tennessee’s National Institute for Computational Sciences (NICS) attended the Joint NERSC/OLCF/NICS Cray XT5 Workshop to train the high-performance computing and scientific communities in the use of the world’s largest and most powerful leadership systems. This marks the first time the three computing facilities have collaborated to host a workshop.
The processing cores for Jaguar, the fastest supercomputer in the world, and Kraken, the fastest academic machine, were upgraded in October from quad-core processors to new six-core AMD Istanbul Opteron processors. This and other new hardware features present novel potential for applications. Staff and representatives from each facility and Cray were on site to give talks and provide assistance during hands-on sessions with the latest hardware, helping users effectively utilize the new XT5 architectures. Topics included programming effectively for the XT5 and proper use of the new six-core CPU architecture.
OLCF to Host Spring Hex-Core Workshop, Users’ Meeting
On May 10-12, 2010, the OLCF will host a workshop aimed at educating returning, new and potential users on the recently upgraded Cray XT5 Jaguar system. The first day of the workshop will focus solely on the users who have been granted a combined total of 950 million CPU hours on Jaguar through the INCITE program. OLCF users will be introduced not only to the upgraded Jaguar system, but also to the various support groups available to them at OLCF. The OLCF users’ council will also meet on this day to elect a new chair. This elected council member will act as the voice of the users, presenting user views, opinions and ideas to OLCF management.
The following two days of the workshop will focus on familiarizing new, returning and potential users with the upgraded Jaguar system. The workshop will feature lectures from OLCF and Cray staff covering key topics like XT5 architecture, using debuggers on the XT5 and developing applications capable of scaling to 100,000 or more cores. Hands-on sessions throughout the workshop will allow participants to access Jaguar using their own codes and work one-on-one with staff members to resolve any issues. For more information on the upcoming users’ meeting and workshop, please see the following link.
ALCF Tour Sparks Girls’ Interest in Science and Engineering Careers
Sixteen middle-school girls from the Chicago area toured the Argonne Leadership Computing Facility (ALCF) and learned firsthand about career opportunities in science and engineering during the Ninth Annual “Introduce a Girl to Engineering Day,” held Feb. 18 at Argonne National Laboratory. The event provided a fun and educational way to introduce girls in sixth through eighth grade to engineering careers. Labwide, 52 students spent the day with a mentor, toured the laboratory, participated in hands-on activities and attended interactive presentations about engineering careers. They also had lunch with some of Argonne's leading experts. 
While the girls toured the ALCF, “They showed a keen interest in Intrepid, our Blue Gene/P system, and asked many insightful, challenging questions,” said Sreeranjani Ramprakash, ALCF Technical Support Specialist. Sreeranjani and Bowen Goletz, ALCF HPC Systems Administrator, provided a wealth of information about the supercomputer and how it operated, as well as discussing their own career paths and a typical day at work. 
Sponsorship for the event is provided by all of Argonne’s research areas in conjunction with Argonne's Division of Educational Programs and the Women in Science and Technology program. For more information about the program, visit the Introduce a Girl to Engineering Day website.








Last modified: 3/18/2013 10:12:35 AM