ASCR Monthly Computing News Report - April 2008



The monthly survey of computing news of interest to ASCR is compiled by Jon Bashor (JBashor@lbl.gov) with news provided by ASCR Program Managers and Argonne, Fermi, Lawrence Berkeley, Lawrence Livermore, Los Alamos, Oak Ridge, Pacific Northwest and Sandia National labs. Contact information and links to additional information, where available, are included with each article.

In this issue...

RESEARCH NEWS
- LBNL Researchers Propose New Architecture for 200 Petaflops Climate Computer
- Improved Code Enhances Analysis of Data Generated by GCRM
- Big Machines at ORNL Model Small Wonders
- SVM-HUSTLE Outperforms Current Software for Protein Homolog Identification
- Latest Energy Smart Data Center Project Research Produces "FRED"

PEOPLE
- Paul Messina Named ALCF Interim Director of Science
- Berkeley Lab's David Bailey Named "Superstar" of NASA Computing
- ORNL's Weigand Receives DOE's First Schlesinger Award
- PNNL's Moe Khaleel Appointed to CMC Editorial Board
- PNNL Scientist Ian Gorton Led Development of Special Issue of Computer
- NERSC Director Kathy Yelick Profiled in UK Scientific Computing Magazine

FACILITIES/INFRASTRUCTURE
- ALCF Dedicated on April 21
- Chinook to Enable Computationally Intensive Research Applications at EMSL
- R.I.P. Cheetah: ORNL Supercomputer Has Left the Building

OUTREACH
- NERSC Director Kathy Yelick Delivers Three Keynotes in Eight Days
- LLNL's Bronis de Supinski Spreads the Word on Performance, Debugging Tools
- U.S. QCD Collaboration Holds Annual All Hands Meeting at JLab
- ALCF Will Host May INCITE Performance Workshop
- LBNL's Horst Simon Gives Distinguished Lecture Series Talk at TACC
- ORNL's Doug Kothe Guest Speaker at HPC User Forum
- Shanghai Professors Visit Computational Research Division at LBNL
- LBNL's Juan Meza Tells Undergrads "How Math Will Help Save the World"
- NCCS Offers HPC Conference for Students and Faculty

RESEARCH NEWS:

LBNL Researchers Propose New Architecture for 200 Petaflops Climate Computer
Three researchers from Lawrence Berkeley National Laboratory have proposed an innovative way to improve global climate change predictions: a supercomputer built from low-power embedded microprocessors, an approach that would overcome limitations posed by today's conventional supercomputers. In a paper published in the May issue of the International Journal of High Performance Computing Applications, Michael Wehner, Lenny Oliker and John Shalf lay out the benefits of a new class of supercomputers for modeling climate conditions and understanding climate change. Drawing on the embedded microprocessor technology found in cell phones, iPods and other consumer electronics, they propose designing a cost-effective machine for running these models and improving climate predictions.
 
Building a supercomputer from conventional microprocessors powerful enough to model cloud systems at a 1-km scale would cost about $1 billion using the standard approach. Such a system would also require 200 megawatts to operate, enough electricity to power a small city of 100,000 residents. In their paper, "Towards Ultra-High Resolution Models of Climate and Weather," the researchers present a radical alternative that would cost less to build and require less electricity to operate. They conclude that a supercomputer using about 20 million embedded microprocessors would deliver the same results and cost $75 million to construct. This "climate computer" would consume less than 4 megawatts of power and achieve a peak performance of 200 petaflops.
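The figures quoted above are easy to check. The Python sketch below is a back-of-envelope comparison using only numbers from the article, plus one assumption of ours: that peak performance divides evenly across the embedded cores.

    # Back-of-envelope check using only the figures quoted above.
    conventional = {"cost_usd": 1.0e9, "power_mw": 200.0}
    embedded = {"cost_usd": 75.0e6, "power_mw": 4.0,
                "peak_pflops": 200.0, "cores": 20.0e6}

    print(embedded["cost_usd"] / conventional["cost_usd"])   # 0.075: ~7.5% of the cost
    print(embedded["power_mw"] / conventional["power_mw"])   # 0.02: ~2% of the power

    # Implied per-core peak, assuming peak divides evenly across cores:
    # 200 PF / 20M cores = 10 GFLOP/s per embedded core.
    print(embedded["peak_pflops"] * 1e6 / embedded["cores"])  # 10.0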
Contact: Ucilia Wang, uwang@lbl.gov
 
Improved Code Enhances Analysis of Data Generated by GCRM
Pacific Northwest National Laboratory (PNNL) researchers, in collaboration with the developers of the open-source NetCDF Operators (NCO), recently added capabilities for processing geodesic grid data and optimized performance to support efficient manipulation of data sets consisting of many files of tens to hundreds of gigabytes each. The improved and optimized tools enhance the ability to analyze data generated by the Global Cloud Resolving Model (GCRM). The GCRM resolves the entire globe at a fine enough scale (2-4 km) to accurately capture cloud behavior, widely agreed to be a major source of uncertainty in existing climate models. Results from the GCRM will be used as the basis for increasing the accuracy of climate models that can be run more efficiently at coarser resolutions to simulate longer periods of time. To reach this scale, the model uses a geodesic grid, which has the desirable characteristic that each cell on the globe is roughly equal in size and has the same number of neighbors.
 
The Community Access to Global Cloud Resolving Model Data and Analyses project - a SciDAC Scientific Application Partnership - has enhanced the popular NCO data manipulation software to support data that is generated on the geodesic grid. Working with the NCO team, PNNL researchers identified a major performance bottleneck within the core NCO code that affected processing of large data sets. This code has been reworked, improving the performance by a factor of up to 500 times for certain operations. These capabilities are now being added to other tools within the NCO tool suite. The tools will be used to provide server-side data reduction.
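For readers unfamiliar with this kind of data, here is a minimal sketch of reading cell-centered geodesic-grid output with the netCDF4 Python library. The file name, variable names, and layout are hypothetical; the actual GCRM output conventions and the NCO operators themselves are documented with the respective projects.

    # Hypothetical sketch: reading cell-centered geodesic-grid NetCDF data.
    # File and variable names ("gcrm_out.nc", "temperature", "cell_area")
    # are illustrative only.
    from netCDF4 import Dataset

    with Dataset("gcrm_out.nc") as ds:
        temp = ds.variables["temperature"][:]   # shape: (time, cells)
        area = ds.variables["cell_area"][:]     # shape: (cells,)

    # Geodesic cells are roughly equal in area, so an area-weighted global
    # mean is only a small correction to the unweighted mean.
    global_mean = (temp * area).sum(axis=-1) / area.sum()
    print(global_mean)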
Contact: Karen Schuchardt, karen.schuchardt@pnl.gov
 
Big Machines at ORNL Model Small Wonders
Gerhard Klimeck, one of the nation's leading nanoelectronics researchers, sees great potential in using ORNL's Cray XT4 Jaguar. Klimeck's NEMO (nanoelectronic modeling) code has scaled extremely well on Jaguar. NEMO 3D was benchmarked on a range of HPC platforms, five of which ranked in the 2007 TOP500 list. An 8-million-atom simulation scaled to over 8,000 cores ran better on Jaguar than on other comparable systems, so well in fact that Klimeck used the data to bolster his case for a PetaApps award, an NSF program to identify research best suited to petascale computing architectures. His NEMO 1D code, when scaled to 23,000 cores, did in one hour what would have taken a serial machine 100 days.
 
Eventually, Klimeck will combine the capabilities of NEMO 3D and NEMO 1D in a new code dubbed OMEN. This code, said Klimeck, "will scale to 500,000 cores and more and tackle the very relevant problem of semiconductor device scaling. This theoretical and numerical problem makes a grand challenge for petascale computing." Besides Jaguar's stability, he cites the "professional people" as another benefit of using the NCCS's facilities.
 
SVM-HUSTLE Outperforms Current Software for Protein Homolog Identification
Researchers at Pacific Northwest National Laboratory have developed an innovative software tool to detect remote homologs (proteins with similar function but dissimilar sequence) that significantly outperforms most current methods for remote homolog identification. "As the amount of biological sequence data continues to grow exponentially, we face the increasing challenge of assigning function to this enormous molecular 'parts list,'" said Anuj Shah, PNNL. Shah, along with PNNL's Christopher Oehmen and Bobbie-Jo Webb-Robertson, introduced SVM-HUSTLE (Support Vector Machine-based tool to detect Homology Using Semi-supervised iTerative LEarning), which identifies significantly more remote homologs than current state-of-the-art sequence- or cluster-based methods. Their research paper was recently published in Bioinformatics.
 
When compared against existing methods for identifying protein homologs (BLAST, PSI-BLAST, COMPASS, PROF_SIM, RANKPROP and their variants) on two different benchmark datasets, SVM-HUSTLE significantly outperforms each of these methods. It also yields results comparable to HHSearch, a method that uses profile-profile comparison, but at a substantially reduced computational cost. The software executable to run SVM-HUSTLE is available for download.
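The details of SVM-HUSTLE are in the Bioinformatics paper. As a rough illustration of the general strategy its name describes, semi-supervised iterative learning with a support vector machine, here is a minimal self-training loop on synthetic data. This is our sketch, not the published algorithm.

    # Minimal self-training sketch (NOT the published SVM-HUSTLE algorithm):
    # train an SVM on labeled data, promote high-confidence predictions on
    # unlabeled data into the training set, and repeat.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X_lab = rng.normal(size=(40, 5))
    y_lab = (X_lab[:, 0] > 0).astype(int)      # toy labels
    X_unlab = rng.normal(size=(200, 5))

    clf = SVC(probability=True)
    for _ in range(5):                         # iterative refinement
        clf.fit(X_lab, y_lab)
        if len(X_unlab) == 0:
            break
        proba = clf.predict_proba(X_unlab)
        confident = proba.max(axis=1) > 0.95   # high-confidence calls only
        if not confident.any():
            break
        X_lab = np.vstack([X_lab, X_unlab[confident]])
        y_lab = np.concatenate([y_lab, proba[confident].argmax(axis=1)])
        X_unlab = X_unlab[~confident]
    print("final training set size:", len(y_lab))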
 
Latest Energy Smart Data Center Project Research Produces "FRED"
Researchers at Pacific Northwest National Laboratory have developed a real-time software tool, FRED (Fundamental Research in Energy Efficient Data Centers), to capture, monitor, analyze and store data from the Energy Smart Data Center (ESDC) Test Bed. The aim of the ESDC project is to demonstrate advanced engineering, energy-efficient electronics, and ideas related to high-performance computing (http://esdc.pnl.gov). FRED measures a variety of parameters such as chilled water flow rates, temperatures, and electrical power usage. FRED's underlying technology is based on PNNL's experience in developing power plant, distribution, and facility monitoring and diagnostic systems for applications ranging from nuclear power generation to building management to public housing.
 
FRED consists of the ESDC Test Bed monitoring system, a data collector, a central database, and a web-based graphical user interface client. Part of the monitoring system uses PNNL's patented Decision Support for Operations and Maintenance (DSOM) software architecture, an advanced, flexible diagnostic monitoring application for energy supply and demand systems. DSOM, currently deployed at several U.S. military installations and a large public housing project in New York City, measures the relevant parameters of a building's air-handling, chilled water, and electrical systems.
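As a rough illustration of the collector-plus-central-database pattern described above, the sketch below polls a few hypothetical sensor channels and appends readings to a SQLite store. The sensor names, schema, and read_sensor() stub are ours; FRED's actual instrumentation and DSOM integration are of course far more sophisticated.

    # Illustrative collector loop (hypothetical sensors and schema; not
    # FRED's actual implementation).
    import random
    import sqlite3
    import time

    def read_sensor(name):
        """Stand-in for real instrumentation (flow meters, power meters)."""
        return random.uniform(0.0, 100.0)

    SENSORS = ["chilled_water_flow", "inlet_temp", "rack_power_kw"]

    conn = sqlite3.connect("esdc_metrics.db")
    conn.execute("CREATE TABLE IF NOT EXISTS readings"
                 " (ts REAL, sensor TEXT, value REAL)")
    for _ in range(3):                        # three polling cycles for demo
        now = time.time()
        rows = [(now, s, read_sensor(s)) for s in SENSORS]
        conn.executemany("INSERT INTO readings VALUES (?, ?, ?)", rows)
        conn.commit()
        time.sleep(1)
    conn.close()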
Contact: Andres Marquez, andres.marquez@pnl.gov

PEOPLE:

Paul Messina Named ALCF Interim Director of Science
Dr. Paul Messina has been named interim director of science at the Argonne Leadership Computing Facility (ALCF). He will guide the ALCF science teams using the IBM Blue Gene/P system and help them achieve the best science output obtainable. Messina most recently served as distinguished senior computer scientist at Argonne National Laboratory and as adviser to the director general at CERN (European Organization for Nuclear Research).
 
During his illustrious career, Dr. Messina served as director of the Center for Advanced Computing Research at the California Institute of Technology (Caltech). He led the Computational and Computer Science component of Caltech's research project funded by the Academic Strategic Alliances Program of the Accelerated Strategic Computing Initiative. He also served as director of the Caltech Concurrent Supercomputing Facilities, assistant vice president for scientific computing, and faculty associate for scientific computing at Caltech. In addition, Messina was principal investigator for TeraGrid, an open scientific discovery infrastructure combining leadership-class resources at 11 partner sites to create an integrated, persistent computational resource. At Argonne, he held a number of positions, ranging from computer scientist to director of the Mathematics and Computer Science Division. Messina has served on many review and advisory committees in high-performance computing and grid computing. He was a member of the Board of Directors of the Global Grid Forum and chaired its advisory committee for several years.
Contact: Cheryl Drugan, cdrugan@mcs.anl.gov
 
Berkeley Lab's David Bailey Named "Superstar" of NASA Computing
At the 25th anniversary celebration of NASA Advanced Supercomputing (NAS) Division at Moffett Field, Calif., David Bailey of LBNL's Computational Research Division was named one of 25 "NAS Superstars." The list includes figures such as Ron Bailey (former NAS Chief), Bill Ballhaus (former Ames Director), Walt Brooks (former NAS Chief), Scott Hubbard (former Ames Director), Dave Cooper (former NAS Chief), Hans Mark (former Ames Director), Harry McDonald (former Ames Director) and others. Bailey was cited as "Senior Scientist 1984-1998: Expert in high-performance scientific computing who led development of the NAS Parallel Benchmarks that are still widely used to evaluate sustained performance of highly parallel supercomputers." He left NASA in 1998 to join Berkeley Lab, where he now serves as chief technologist for the Computational Research Division.
Contact: Jon Bashor, jbashor@lbl.gov
 
ORNL's Weigand Receives DOE's First Schlesinger Award
Gil Weigand of ORNL's Computing and Computational Sciences Directorate received the inaugural James R. Schlesinger Award from Secretary of Energy Samuel Bodman. The Secretary lauded Weigand for his "passion for excellence along with his ability to foster and implement the practices and values that are necessary for the protection of our nation." Weigand is credited with conceiving and implementing the Department of Energy's (DOE's) Accelerated Strategic Computing Initiative (ASCI), which pooled government programs and national laboratories to build the world's most powerful supercomputers, and which has evolved into today's Advanced Simulation and Computing (ASC) Program. HPC and simulation at the ASCI level now pervade all areas of science and engineering.
 
Schlesinger, who was present for the award ceremony, was the first Secretary of Energy. The Schlesinger Award is the highest award in the newly established Secretarial Honor Awards Program and the highest nonmonetary award bestowed by the agency.
Contact: Jayson Hines, hinesjb@ornl.gov
 
PNNL's Moe Khaleel Appointed to CMC Editorial Board
Moe Khaleel, Laboratory Fellow and director of Pacific Northwest National Laboratory's Computational Sciences and Mathematics Division, has been appointed to the editorial board of Computers, Materials & Continua (CMC). CMC publishes original research papers of lasting value in computational materials science and engineering at various length scales (quantum, nano, micro, meso, macro) and time scales (picoseconds to hours). Structural and functional materials, composite materials, and both inorganic and organic materials are of interest. Papers dealing with computational modeling of the mechanics, physics, chemistry, and biology (and their interactions) of all modern materials are welcome, and papers that advance the paradigm of materials by design, from the bottom up or the top down, are especially solicited.
Contact: Sue Chin, sue.chin@pnl.gov
 
PNNL Scientist Ian Gorton Led Development of Special Issue of Computer
Ian Gorton, Pacific Northwest National Laboratory's associate division director of Applied Computer Science, led the development of a special issue of Computer magazine on data-intensive computing, published in April. Computer is the flagship magazine of the IEEE Computer Society. Gorton, who is the chief architect for PNNL's Data Intensive Computing Initiative, proposed the issue and worked with colleagues Paul Greenfield, CSIRO; Alex Szalay, Johns Hopkins University; and Roy Williams, Caltech, to solicit papers and referee the submissions. Gorton's editorial introduces the topic of data-intensive computing and maps out research challenges for the community to address so that ever larger data collections can be handled in a scalable fashion. View the April issue at http://www.computer.org/portal/site/computer/index.jsp (registration required).
 
NERSC Director Kathy Yelick Profiled in UK Scientific Computing Magazine
The February/March 2008 issue of the European magazine Scientific Computing World features a profile of NERSC Division Director Kathy Yelick titled "Living the Vision."
 
"For some computer scientists the idea of running a high-performance computer centre is worse than going over to the dark side," the article begins. "... So when Professor Kathy Yelick, a highly respected computer scientist from UC Berkeley, agreed to take over as director of the National Energy Research Scientific Computing Center (NERSC) at the Lawrence Berkeley National Laboratory, heads turned.... To move from the academic study of a field into actually building and running a service based on that research has got to be seen as a gutsy move. The phrase 'living the vision' springs to mind." The complete article can be read at..
 

FACILITIES/INFRASTRUCTURE:

ALCF Dedicated on April 21
Argonne National Laboratory celebrated the dedication of the Argonne Leadership Computing Facility (ALCF) during an April 21st ceremony at the lab.
 
"I am delighted to see this realization of our vision to bring the power of the Department's high performance computing to open scientific research," said DOE Under Secretary for Science Dr. Raymond L. Orbach. "This facility will not only strengthen our scientific capability but also advance the competitiveness of the region and our nation." The early results span the gamut from determining the origins of the universe and dark energy, to better understanding the molecular mechanism of Parkinson's disease progression to help focus the search for treatment.
 
Dr. Patricia Dehmer, DOE Office of Science Deputy Director for Science Programs, and Dr. Michael Strayer, DOE Associate Director of Science for Advanced Scientific Computing Research, attended the ALCF dedication along with Dr. Orbach and Congresswoman Judy Biggert.
 
Chinook to Enable Computationally Intensive Research Applications at EMSL
The first phase of "Chinook," a 163 teraflop/s HP Linux Cluster with 4,620 quad-core AMD Opteron processors, has been installed at the Environmental Molecular Sciences Laboratory (EMSL) at the Pacific Northwest National Laboratory. Chinook, housed at the EMSL Molecular Science Computing Facility, will enable computationally intensive research applications that require a large number of processors, such as the Molecular Science Software Suite (MS3), and future applications being developed in the areas of atmospheric aerosol chemistry, biological interactions and dynamics, science of interfacial phenomena, and geochemistry/biogeochemistry and subsurface science.
 
Phase 1 of Chinook has been installed, and all current applications are undergoing parallelization and scalability testing. Although it represents only 25 percent of the full system, Phase 1 matches the computing power of the MSCF's current MPP2 supercomputer. Phase 2 of Chinook is slated to be installed and brought online by the end of FY08.
Contact:
Tom McKenna, tom.mckenna@pnl.gov
Kevin Regimbal, kevin.regimbal@pnl.gov
 
R.I.P. Cheetah: ORNL Supercomputer Has Left the Building
A former supercomputing heavyweight has been retired at Oak Ridge National Laboratory's (ORNL) National Center for Computational Sciences. The IBM Power4 system dubbed Cheetah performed dutifully throughout its six years of service. Ranked as the eighth fastest computer in the world in 2002, Cheetah was involved in numerous computational science breakthroughs. It is perhaps best known, however, for providing 40 percent of the cycles for the U.S. contribution to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, which in 2007 shared the Nobel Peace Prize with former Vice President Al Gore.
 
Cheetah comprised 27 cabinets, each containing one node. Each node's 32 Power4 processors ran at 1.3 GHz, giving Cheetah a peak performance of almost 4.5 teraflops. The system had 1.1 terabytes of memory and 40 terabytes of disk space.
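The quoted peak follows from the configuration, under our added assumption that each Power4 processor retires 4 floating-point operations per cycle (two fused multiply-add units):

    # 27 nodes x 32 processors x 1.3 GHz x 4 flops/cycle (assumed FMA rate)
    nodes, procs, ghz, flops_per_cycle = 27, 32, 1.3, 4
    peak_tflops = nodes * procs * ghz * flops_per_cycle / 1000.0
    print(round(peak_tflops, 2))   # 4.49 -- "almost 4.5 teraflops"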
Contact: Jayson Hines, hinesjb@ornl.gov
 

OUTREACH:

NERSC Director Kathy Yelick Delivers Three Keynotes in Eight Days
Kathy Yelick, who became director of the National Energy Research Scientific Computing Center in January, continues to be an in-demand keynote speaker. Between April 16 and 23, Yelick gave keynote talks in Florida, Oregon and California.
 
At the 22nd International Parallel & Distributed Processing Symposium in Miami, Yelick gave a talk on "Programming Models for Petascale to Exascale" on April 16. On Monday, April 21, she gave the welcoming keynote talk entitled "Multicore Meets Exascale: The Catalyst for a Software Revolution" at the 2008 Salishan Conference on High Speed Computing in Salishan, Ore. And on April 23, she presented "Programming Models for Manycore Systems" at the Programming System Conference in Santa Clara.
Contact: Jon Bashor, jbashor@lbl.gov
 
LLNL's Bronis de Supinski Spreads the Word on Performance, Debugging Tools
During March and early April 2008, LLNL researcher Bronis de Supinski gave several invited presentations. At Virginia Tech, he presented a variety of activities from his work at LLNL. In particular, he focused on STAT, the Stack Trace Analysis Tool, a lightweight debugging solution developed in part under funding from SciDAC's Performance Engineering Research Institute (PERI). This tool automatically identifies task equivalence classes, sets of tasks exhibiting the same behavior, in MPI jobs. Given these equivalence classes, the user can apply a traditional debugger to a representative of each class, effectively reducing the scale at which programming errors must be explored. Recent work has extended the tool to the full Blue Gene/L system, demonstrating scalability to over 100,000 tasks, a first for any debugging tool. A similar presentation, centered on Open|SpeedShop, was given at High Performance Computer Software Week (HPCSW) during a workshop on tool infrastructures.
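The core idea behind STAT's equivalence classes fits in a few lines: group tasks by stack-trace signature, then debug one representative per class. The traces below are made up for illustration; STAT itself gathers and merges real stack traces at scale.

    # Toy illustration of task equivalence classes (hypothetical traces).
    from collections import defaultdict

    traces = {
        0: ("main", "solve", "MPI_Allreduce"),
        1: ("main", "solve", "MPI_Allreduce"),
        2: ("main", "io_write", "MPI_File_write"),
        3: ("main", "solve", "MPI_Allreduce"),
    }

    classes = defaultdict(list)
    for rank, trace in traces.items():
        classes[trace].append(rank)     # identical traces share a class

    # Attach a traditional debugger to one representative per class.
    for trace, ranks in classes.items():
        print(trace, "-> ranks", ranks, "| representative:", ranks[0])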
 
De Supinski also gave a presentation at HPCSW's autotuning workshop, in which he detailed techniques for modeling large parameter spaces, with a particular focus on large-scale applications. These techniques were also the focus of a tutorial presented at ASPLOS XIII (Architectural Support for Programming Languages and Operating Systems). In addition to covering the general modeling methodology developed partially under PERI funding, the autotuning presentation detailed recent work using the methodology to guide dynamic concurrency throttling (DCT). DCT reduces the number of threads executing parallel regions in order both to improve performance and to reduce power consumption. Results on a dual-core, quad-socket platform demonstrate that these techniques can improve performance by 11.8 percent while reducing energy consumption by 17.0 percent.
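A toy sketch of the DCT decision itself: given a model of each parallel region's predicted run time at different thread counts, pick the count that minimizes time rather than defaulting to all cores. The timing numbers here are invented for illustration, not LLNL's measurements.

    # Toy DCT decision (hypothetical timing model):
    predicted_s = {                       # region -> {threads: seconds}
        "stencil":   {2: 4.1, 4: 2.3, 8: 2.2},
        "reduction": {2: 1.0, 4: 1.1, 8: 1.5},  # memory-bound: more threads hurt
    }
    for region, model in predicted_s.items():
        best = min(model, key=model.get)
        print(region, "-> run with", best, "threads")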
 
U.S. QCD Collaboration Holds Annual All Hands Meeting at JLab
On April 4 and 5, 59 members of USQCD, the national collaboration of lattice quantum chromodynamics theorists, met for their annual All Hands Meeting at Jefferson Lab (Thomas Jefferson National Accelerator Facility). Attendees heard reports from the site managers of the parallel computing facilities dedicated to lattice QCD calculations at BNL, Fermilab, and JLab, as well as from the chairman of the USQCD Executive Committee and the contractor project manager of the Office of Science's LQCD Computing Project (SC LQCD).
 
The All Hands Meeting also included talks describing proposals for computing time at the SC LQCD facilities and time awarded to USQCD at the ORNL and ANL Leadership Computing Facilities via the INCITE Program. The areas of scientific research covered by these proposals included the determination of fundamental parameters of the Standard Model (SM) of subatomic physics, calculations needed for precise tests of the SM, the study of the masses, internal structure and interactions of strongly interacting particles, and the study of theories for physics beyond the SM. After the presentations, attendees discussed scientific priorities, and the USQCD Scientific Program Committee then allocated USQCD computing resources for the year beginning July 1, 2008. For more information about USQCD and the All Hands Meeting, select the following links...
 
ALCF Will Host May INCITE Performance Workshop
The ALCF is hosting an INCITE Performance Workshop for DOE INCITE users on May 7-8 at Argonne National Laboratory. This advanced workshop will give users an overview of the performance and debugging tools available on the Blue Gene/P to enhance application performance and scalability. As always, there will be plenty of time for assisted hands-on training.
Contact: Chel Lancaster, lancastr@alcf.anl.gov
 
LBNL's Horst Simon Gives Distinguished Lecture Series Talk at TACC
Horst Simon, Associate Lab Director for Computing Sciences at Berkeley Lab, recently gave a talk as part of the Distinguished Lecture Series in Petascale Simulation at the University of Texas at Austin. Simon discussed efforts to promote the growth of high performance computing without contributing to global warming in the talk, "The Greening of High Performance Computing - Will Power Consumption Become the Limiting Factor for Future Growth?" He also outlined the Lab's research projects that address the issue of reducing power consumption.
Contact: Jon Bashor, jbashor@lbl.gov
 
ORNL's Doug Kothe Guest Speaker at HPC User Forum
The NCCS's Doug Kothe was a keynote speaker at this year's HPC User Forum in Norfolk, Virginia. Kothe's talk, entitled "National Lab HPC Directions at ORNL," explored the future of HPC at ORNL, one of America's top supercomputing centers. Besides being a keynote speaker, Kothe was also named to the Steering Committee. The HPC User Forum is a regular gathering of industry, government, and academia to discuss technology and software trends, market dynamics, and possible collaborations to advance the state-of-the-art in HPC. This year's theme was computational fluid dynamics. The conference was held April 14-16 and included representatives from Boeing, NASA, and General Motors.
Contact: Jayson Hines, hinesjb@ornl.gov
 
Shanghai Professors Visit Computational Research Division at LBNL
A delegation from Shanghai Jiao Tong University visited Berkeley Lab this month to learn about computational research. Shanghai Jiao Tong is one of the premier universities in China, known for its science and engineering programs. The visiting group consisted of professors in electrical engineering and included Professor Wenjun Zhang, who also is vice president of the university. The group heard presentations on scientific data management by Arie Shoshani, image processing for cryo-electron microscopy by Chao Yang, visualization and analytics by Wes Bethel, machine learning and pattern recognition by Daniela Ushizima, and numerical methods for imaging by James Sethian.
 
LBNL's Juan Meza Tells Undergrads "How Math Will Help Save the World"
On April 19, Juan Meza, head of LBNL's High Performance Computing Research Department, was the invited speaker at the Northern California Undergraduate Mathematics Conference at Sonoma State University, where he addressed the topic "The Role of Mathematics in Amplifying Science Research: How Mathematics Will Help Save the World." The slides from his presentation are online at...
 
NCCS Offers HPC Conference for Students and Faculty
The NCCS continues to provide quality outreach and education to the wider HPC community through programs like the High Performance Computing and Applications Conference. The meeting, hosted by the NCCS, invited undergraduates, graduate students, and postdoctoral researchers from universities across the southeastern United States to submit posters, abstracts, and papers. The submissions were reviewed by a team of volunteers, and the feedback was passed on to the students. The purpose of the conference, said Bobby Whitten of the NCCS User Assistance and Outreach Group, was to provide information that educators can incorporate into their curricula and to give students a foundation in the basics of parallel programming.
 
Approximately 12 students and 12 faculty members attended from universities such as Western Kentucky, Clemson, and the Georgia Institute of Technology. Presentations included a method for modeling galactic collisions and the use of HPC to monitor and evaluate natural disasters.
Contact: Jayson Hines, hinesjb@ornl.gov

 

 
