ASCR Monthly Computing News Report - April 2009




RESEARCH NEWS:

University of Wisconsin Using Graphical Processing Units (GPUs) to Estimate the Desolvation Energies of Proteins

Graphics processing units (GPUs) are processors designed primarily for gaming; while ideal for intensive graphical calculations, they have been relatively intractable for most scientific calculations. Recently, however, the Mitchell group at the University of Wisconsin adapted a method for calculating the solvent accessible surface area (SASA) of a protein and, in conjunction with the CUDA programming model, demonstrated the application of GPU processors to real biological calculations. For these types of calculations, the GPUs performed two orders of magnitude better than typical scientific computers. This work opens a promising new avenue for biological calculations that are repetitive but not data intensive. The paper was the Editor’s pick in the April issue of the Journal of Computational Biology, Vol. 16, No. 4, 2009. This work is sponsored by DOE’s Office of Biological and Environmental Research and the Office of Advanced Scientific Computing Research (ASCR).
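To give a flavor of the approach, the CUDA sketch below assigns one GPU thread per atom and estimates that atom’s exposed area by testing sample points on its solvent-expanded sphere against every other atom. This is a generic sphere-sampling SASA estimate offered purely as an illustration, not necessarily the Mitchell group’s published algorithm; all names and constants are assumptions.

```cuda
// One thread per atom: count sample points on the atom's
// solvent-expanded sphere that are not buried inside any other
// atom's expanded sphere, then scale by the sphere area.
// Launch, e.g.: sasaKernel<<<(nAtoms + 255) / 256, 256>>>(...);
__global__ void sasaKernel(const float4 *atoms,   // x, y, z, radius
                           const float3 *samples, // unit-sphere points
                           int nAtoms, int nSamples, float probe,
                           float *sasa)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nAtoms) return;

    float4 a = atoms[i];
    float rExp = a.w + probe;                 // expanded radius
    int exposed = 0;

    for (int s = 0; s < nSamples; ++s) {
        float px = a.x + rExp * samples[s].x;
        float py = a.y + rExp * samples[s].y;
        float pz = a.z + rExp * samples[s].z;
        bool buried = false;
        for (int j = 0; j < nAtoms && !buried; ++j) {
            if (j == i) continue;
            float4 b = atoms[j];
            float rb = b.w + probe;
            float dx = px - b.x, dy = py - b.y, dz = pz - b.z;
            buried = (dx * dx + dy * dy + dz * dz < rb * rb);
        }
        if (!buried) ++exposed;
    }
    // Exposed fraction times the area of the expanded sphere.
    sasa[i] = 4.0f * 3.14159265f * rExp * rExp * exposed / nSamples;
}
```

Because each atom’s estimate is independent and touches only a small, read-only data set, thousands of such threads can run concurrently, which is exactly the repetitive, non-data-intensive pattern the article describes.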

 
NYU Team Simulates Core/Edge Interaction in Fusion Plasma on Jaguar

Recent fusion simulations on Oak Ridge National Laboratory’s (ORNL’s) Jaguar supercomputer have verified what has long been speculated: the temperature and turbulence at the edge of a fusion plasma affect the temperature and turbulence of the plasma core. In the latest simulations, researchers led by New York University’s C.S. Chang used the XGC1 code on Jaguar, the fastest system in the world for open science, to show that turbulence in the well-confined edge can penetrate the core and boost its temperature, a phenomenon long postulated from experiment but now confirmed via simulation.

These self-consistent (multiple-phenomena), multiscale simulations represent the first time the XGC1 particle-in-cell code, which tracks the trajectories of individual simulation particles, has been used to simulate the edge and core simultaneously. The data have yielded a treasure trove of information not only on the edge and core individually, but also on the complex relationship that exists in their interaction. The team consumed more than 1 million CPU hours and gathered more than 1 terabyte of data in simulations that used 20,000 of Jaguar’s more than 30,000 cores. In the future, said Chang, the goal is to simulate the entire ITER device using all of Jaguar’s cores.
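For readers unfamiliar with the method, the sketch below shows the basic particle-in-cell update in one dimension: field values stored on a grid are interpolated to each particle’s position, and the particle is then advanced in time. This is a generic illustration only, not XGC1’s gyrokinetic algorithm; all names are assumptions.

```c
typedef struct { float x, v; } Particle;

/* One explicit PIC time step on a periodic 1-D grid: interpolate the
   grid field E[] to each particle (cloud-in-cell weighting), then
   advance velocity and position. Generic illustration; not XGC1. */
void push_particles(Particle *p, int np, const float *E, int ng,
                    float dx, float dt, float qm /* charge/mass */)
{
    float L = ng * dx;                      /* periodic domain length */
    for (int i = 0; i < np; ++i) {
        float xg = p[i].x / dx;             /* position in cell units */
        int   j  = (int)xg;
        float w  = xg - (float)j;           /* weight toward cell j+1 */
        float Ei = (1.0f - w) * E[j % ng] + w * E[(j + 1) % ng];
        p[i].v += qm * Ei * dt;             /* accelerate             */
        p[i].x += p[i].v * dt;              /* move                   */
        if (p[i].x >= L)   p[i].x -= L;     /* wrap periodic boundary */
        if (p[i].x < 0.0f) p[i].x += L;
    }
}
```

A production code like XGC1 couples this push to a charge deposition and field solve each step and distributes vast numbers of such particles across tens of thousands of cores.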

Contact. Jayson Hines, hinesjb@ornl.gov
 
Experts: Short-Term Snapshots of Temperature Data Can Be Misleading

In the hotly debated arena of global climate change, using short-term trends that show little temperature change, or even slight cooling, to refute global warming is misleading, write two climate experts in a paper to be published by the American Geophysical Union, especially since the long-term pattern clearly shows that human activities are causing the Earth’s climate to heat up. In their paper “Is the climate warming or cooling?” David R. Easterling of the National Oceanic and Atmospheric Administration’s National Climatic Data Center and Michael Wehner of the Computational Research Division at Lawrence Berkeley National Laboratory note that a number of publications, websites, and blogs cite decade-long climate trends, such as that from 1998–2008, in which the Earth’s average temperature actually dropped slightly, as evidence that the global climate is actually cooling.

However, Easterling and Wehner write, the reality of the climate system is that, due to natural climate variability, it is entirely possible, even likely, to have a period as long as a decade or two of “cooling” superimposed on the longer-term warming trend. The problem with citing such short-term cooling trends is that they can mislead decision-makers into thinking that climate change does not warrant immediate action. The article was accepted March 30, 2009 for publication in an upcoming edition of Geophysical Research Letters and has been cited in the New York Times and Washington Post.

Contact. Michael Wehner, MFWehner@lbl.gov
 
Sandia Computer Scientist Ron Minnich Publishes Paper on Plan 9

As part of a FastOS project, Sandia computer scientist Ronald Minnich has had a paper accepted to the International Supercomputing Conference (ISC09) to be held June 23–26 in Hamburg, Germany. The paper will also be published in a special edition of “Computer Science – Research and Development,” Springer-Verlag, Germany. The paper describes some of the highlights of a three-year collaboration among Sandia, IBM, and Bell Labs in bringing Plan 9 to the Blue Gene supercomputer, replacing both Linux on the I/O nodes and the IBM Light Weight Kernel (the “CNK”) on the compute nodes. The authors describe a compatibility environment that lets them run CNK binaries on Plan 9; show that they can equal CNK performance on a simple application; and show that, with the simple addition of 1 MB pages to Plan 9, they can equal CNK performance on the LLNL strid3 benchmark. They also describe some interesting ways they exploit the unique combination of the Plan 9 networking model and the Blue Gene networks. The paper is titled “Experiences porting the Plan 9 research operating system to the IBM Blue Gene supercomputers.” ISC web site: http://www.supercomp.de/isc09

Contact. Ron Minnich, rminnich@sandia.gov
 
Argonne Researchers to Receive ISC Award for Paper

The paper “Toward Message Passing for a Million Processes: Characterizing MPI on a Massive Scale Blue Gene/P,” by P. Balaji, A. Chan, R. Thakur, W. Gropp, and E. Lusk of Argonne National Laboratory, has been chosen by the International Supercomputing Conference (ISC) Award committee as one of the winners of the 2009 ISC Award. ISC is Europe’s leading conference and exhibition on high-performance computing, networking, and storage. The paper will be presented at the ISC conference in Hamburg, Germany, June 23–26, 2009. This is the second year in succession that Argonne’s Mathematics and Computer Science Division researchers have won an outstanding paper award at the ISC.

Contact. P. Balaji, balaji@mcs.anl.gov
 
PNNL Research Focuses on Making Supercomputers Fault Tolerant

In supercomputers built with thousands of components, the failure of a single component, such as a processor or a hard disk, can require a scientist to re-run the entire calculation, an expensive proposition in terms of time and money. Pacific Northwest National Laboratory is leading a collaboration with Ohio State University and Oak Ridge National Laboratory in an ASCR project to make supercomputers more tolerant of faults. 

Higher-level programming models are becoming critical to help address the productivity challenge in high performance computing. A common feature of these models is a global address space that allows non-collective access to global data. While MPI has often been the focus of fault tolerance research, handling faults in the context of such models is becoming increasingly important. To this end, researchers are investigating techniques to handle faults in global address space programming models.

The advent of processor virtualization has enabled efficient and transparent system-level approaches to checkpoint-restart for fault tolerance. Researchers have developed a scalable, transparent, system-level checkpoint-restart solution for applications based on global address space (GAS) programming models on InfiniBand clusters. The system exploits support for the InfiniBand network in the Xen virtual machine environment. A version of the Aggregate Remote Memory Copy Interface (ARMCI) one-sided communication library has been developed that is capable of suspending and resuming applications, and efficient, scalable mechanisms have been developed to distribute checkpoint requests and to back up virtual machine memory images and file systems. The results will be presented at the 2009 ACM International Conference on Computing Frontiers.
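To make the coordination concrete, here is a highly simplified sketch of a suspend/checkpoint/resume cycle of the kind described above; every function name is a hypothetical placeholder, not the actual ARMCI or Xen interface.

```c
/* Hypothetical placeholder interfaces -- NOT the real ARMCI/Xen APIs. */
void armci_suspend(void);         /* quiesce one-sided communication  */
void armci_resume(void);          /* re-register memory, resume comm  */
void barrier_all(void);           /* synchronize all processes        */
void request_vm_checkpoint(void); /* ask the VM layer for a snapshot  */

/* Sketch of coordinated system-level checkpointing: drain in-flight
   one-sided traffic, snapshot the virtual machines at a globally
   consistent point, then resume the application. */
void checkpoint_application(int rank)
{
    armci_suspend();              /* no RDMA in flight past this point */
    barrier_all();                /* all ranks reach a safe point      */
    if (rank == 0)
        request_vm_checkpoint();  /* snapshot memory images and file
                                     systems via the Xen layer         */
    barrier_all();                /* wait until snapshots complete     */
    armci_resume();               /* communication may restart         */
}
```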

 
LBNL Invited to Meeting on the Future of SRM Standard and Protocols

Arie Shoshani and Alex Sim of Lawrence Berkeley National Laboratory’s (LBNL’s) Scientific Data Management Research Group have been invited to a workshop in Germany to contribute to the discussion of future directions for the Storage Resource Manager (SRM) standard. Shoshani’s group initiated the SRM standards, leads the standard specification effort in the Open Grid Forum, and maintains an SRM implementation, the Berkeley Storage Manager (BeStMan), that is used by several sites in the LHC community. The goals of the workshop are to discuss how the current SRM is used by the experiments, identify the main functions needed by the LHC community, and select a simplified (core) SRM functionality for the next-generation SRM to be reviewed by the HENP experiments. The workshop will be held May 18–19 at DESY in Hamburg.

Contact. Arie Shoshani, shoshani@lbl.gov
 
Computer Code Enhances Understanding of Uranium Transport at Hanford

PNNL researchers continue to build highly scalable scientific applications that run on ever larger numbers of processor cores, as exemplified by the continued restructuring of PFLOTRAN, a subsurface reactive multiphase flow and transport code used to simulate flow and radionuclide transport. The code can now tackle subsurface flow and transport problems with a billion degrees of freedom and effectively uses up to 131,000 processor cores, with the computational resources provided through the INCITE program. This problem size is by far the largest groundwater model run to date.

PFLOTRAN is currently being employed at PNNL to simulate uranium transport at the Hanford Site 300 Area in order to better quantify the mass of uranium entering the neighboring Columbia River. In addition, the code is being used by researchers outside the Laboratory to predict migration of sequestered carbon dioxide within deep geologic formations. New insights may offer scientific advances in environmental cleanup of Hanford and other contaminated sites in the DOE complex. PFLOTRAN is being developed by a multidisciplinary team from Pacific Northwest, Argonne, Los Alamos and Oak Ridge national laboratories and the University of Illinois at Urbana-Champaign as part of the DOE SciDAC-2 Groundwater project led by Peter Lichtner at LANL.

Contact. Glenn Hammond, glenn.hammond@pnl.gov
 
Sandia Researchers Present at Banff

Sandians Pavel Bochev and Rich Lehoucq attended a Banff International Research Station (BIRS) workshop on “Advances and perspectives on numerical methods for saddle-point problems,” organized by H. Elman (U. Maryland), C. Greif (UBC), D. Schoetzau (UBC), and A. Wathen (Oxford), and presented results of their AMR-sponsored research. Bochev talked about a new approach to coupling multiphysics problems using ideas from optimization and control. Lehoucq presented his recent work on constrained eigenvalue problems.

Contact. Pavel Bochev, pbboche@sandia.gov
 
LBNL’s Iancu to Present Three Papers at Meetings in Italy, Spain

Costin Iancu of LBNL’s Future Technologies Group will present a paper, “Scheduling Dynamic Parallelism on Accelerators,” at the ACM Computing Frontiers Conference to be held May 18–20 in Ischia, Italy. In conjunction with the ACM meeting, Iancu will present two papers, “Scheduling Dynamic Parallelism on the CellBE” and “Runtime Optimization of Vector Operations on IBM Power5 Clusters,” at the IBM ScicomP and SP-XXL meetings being held May 18–22 in Barcelona, Spain.

Contact. Costin Iancu, cciancu@lbl.gov
 

PEOPLE:

PNNL Scientist Chosen as the Henry Darcy Distinguished Lecturer

Each year an outstanding ground water professional is chosen by a panel of scientists and engineers as the National Ground Water Research and Education Foundation’s (NGWREF) Darcy Lecturer to share his or her work with peers and students at universities throughout the country and internationally. The 2010 honoree, the 24th Darcy Lecturer and the first from a DOE laboratory, is Dr. Timothy Scheibe, a staff scientist in the Hydrology Technical Group at Pacific Northwest National Laboratory. Scheibe has made major contributions to the field of groundwater modeling. His multidisciplinary and integrative approaches to computational modeling have brought new insights into the scaling of geochemical processes affecting contaminant transport, as well as innovative methods to couple genome-based mechanistic understanding of biological processes with traditional groundwater modeling codes.

Contact. Robert T. Anderson, SC 23.1, 301-903-5549
 
Hispanic Business Magazine Names CRD’s Cecilia Aragon a 2009 Woman of Vision

Hispanic Business magazine has honored Cecilia Aragon as one of 25 Women of Vision in 2009. As a staff scientist in the Computational Research Division at the Department of Energy’s Lawrence Berkeley National Laboratory, Aragon researches and develops collaborative visual interfaces to foster scientific insight. She is also a founding member of Latinas in Computing and is active in diversity and outreach programs at the Lab.

Before coming to the Lab in 2005, Aragon was a computer scientist at the NASA Ames Research Center and CEO of Top Flight Aviation, where she was an air show and test pilot and aerobatic champion. She received a Ph.D. in computer science from the University of California, Berkeley, and a Bachelor of Science degree in mathematics from the California Institute of Technology. She has authored or co-authored 30 peer-reviewed publications and over 100 other publications in computer science and astrophysics.

 
ANL’s Ray Bair Discusses Exascale Software Challenges

Ray Bair, chief computational scientist in the Computing, Environment, and Life Sciences directorate at Argonne National Laboratory, gave an invited presentation at the February 2009 meeting of the IBM Deep Computing Institute External Advisory Board. Deep computing promotes a multipronged approach—new tools, facilities, and paradigms—for addressing the most challenging problems in science. In his talk, titled “Enabling Grand Challenge Science at Petascale and Beyond,” Bair identified five major software challenges to realizing exascale computing: improved programmability for million-way parallelism, new applications that make better use of deeper hierarchies, tools to exploit heterogeneous hardware, improved handling of the data tsunami, and novel knowledge discovery tools for exascale platforms and data.

Contact. Ray Bair, bair@mcs.anl.gov
 
Berkeley Lab’s Juan Meza Named to CITRIS Committee

Juan Meza, head of the High Performance Computing Research Department in Berkeley Lab’s Computational Research Division, has been appointed to the technical executive committee for the University of California’s Center for Information Technology Research in the Interest of Society—Computational Science and Engineering (CITRIS-CSE) program. The program brings together the UC campuses at Berkeley, Davis, Merced, and Santa Cruz and LBNL under one umbrella to conduct and coordinate educational and outreach efforts. Other committee members are Jim Demmel, Berkeley; Ben Yoo, Davis; Mayya Tokman, Merced; and Nick Brummell, Santa Cruz. The committee will meet biannually to discuss administrative, instructional, and research resource needs, provide high-level scientific advice, help secure funding, and promote the CSE program.

Contact. Juan Meza, JCMeza@lbl.gov
 
LBNL’s Wang Gives Invited Talk at Semiconductor Workshop in Beijing

Lin-Wang Wang, leader of the Berkeley Lab team that won the 2008 ACM Gordon Bell Prize for Algorithm Innovation and a member of LBNL’s Scientific Computing Group, was an invited speaker at the International Workshop on Quantum Manipulation in Low-Dimensional Semiconductors held April 28–30 in Beijing, China. Wang spoke about his team’s work in semiconductor nanostructure calculations. On May 1, Wang visited the Beijing Semiconductor Institute to meet with Shu-Shen Li, his collaborator for CMOS simulations. For more information on the workshop: http://lib.semi.ac.cn:8080/xshy/index.htm

Contact. Lin-Wang Wang, LWWang@lbl.gov
 

FACILITIES/INFRASTRUCTURE:

ESnet Wins Excellence.Gov Award

DOE’s Energy Sciences Network (ESnet), a high-speed network linking tens of thousands of researchers around the nation, was honored April 14 with an Excellence.Gov award for its excellence in leveraging technology. The Excellence.Gov awards are sponsored by the Industry Advisory Council’s (IAC) Collaboration and Transformation Shared Interest Group and recognize the federal government’s best information technology (IT) projects. This year’s theme, “Transparency: Using IT to improve the interaction between Government and its Stakeholders,” focused on how government organizations use IT transparently to improve the public’s information gathering abilities, or an agency’s ability to deliver information to the public or a particular constituency. A panel of 25 judges—federal government and industry executives—reviewed the nominations and selected ESnet as the winner in the area of “Excellence in Leveraging Technology,” one of five award categories. The winners were recognized at a ceremony in Washington, D.C.

ESnet was honored for ESnet4, a recently completed network infrastructure providing highly reliable, high-bandwidth connectivity to support and advance the United States’ scientific competitiveness and capabilities by linking scientists at national laboratories and universities across the country. ESnet is funded primarily by the Office of Science and managed by Lawrence Berkeley National Laboratory.

Contact. Jon Bashor, jbashor@lbl.gov
 
PNNL Researchers Optimize Portable Runtime Library for Leadership Computers

As growing processor counts make supercomputers increasingly difficult to program, computer languages and libraries that hide this complexity by efficiently handling communication are gaining attention. PNNL researchers have developed an integrated data and task management system in the context of the PNNL-developed Global Arrays (GA) programming model. GA provides programming interfaces for managing shared arrays based on the partitioned global address space (PGAS) programming model.
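As background, the fragment below is a minimal sketch of what GA-style shared-array access looks like in C: any process can read or write a patch of a globally distributed array with one-sided calls, and no matching receive is needed on the remote side. Array sizes and memory-allocator settings are illustrative assumptions.

```c
#include <mpi.h>
#include "ga.h"
#include "macdecls.h"

/* Minimal Global Arrays sketch: create a shared 2-D array and access
   it with one-sided put/get. Sizes and MA settings are illustrative. */
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    GA_Initialize();
    MA_init(C_DBL, 1000000, 1000000);       /* local scratch memory   */

    int dims[2] = {1000, 1000}, chunk[2] = {-1, -1};
    int g_a = NGA_Create(C_DBL, 2, dims, "A", chunk); /* distributed  */

    int lo[2] = {0, 0}, hi[2] = {0, 0}, ld[1] = {1};
    double val = 3.14, out = 0.0;
    if (GA_Nodeid() == 0)
        NGA_Put(g_a, lo, hi, &val, ld);     /* one-sided write; the
                                               owner takes no action  */
    GA_Sync();
    NGA_Get(g_a, lo, hi, &out, ld);         /* any process reads it   */

    GA_Destroy(g_a);
    GA_Terminate();
    MPI_Finalize();
    return 0;
}
```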

GA, together with its underlying communication library ARMCI (Aggregate Remote Memory Copy Interface), was ported to several DOE leadership-class machines, such as the Blue Gene/P supercomputer and the Cray XT4, allowing scientific applications to better exploit these supercomputers. The experiences with porting to the Blue Gene/P supercomputer were presented at the P2S2 2009 conference. ARMCI was extended to support flexible process groups to enable a more dynamic task management system. These new features were used to demonstrate improved scalability on candidate applications with complex task management requirements, in particular those involving dynamic load balancing of multi-process tasks.

Despite numerous implementations of the existing MPI one-sided standard over the last 11 years, that standard has been perceived as too restrictive for actual application use. This ARMCI implementation, though less efficient than an implementation closer to the hardware, demonstrated performance competitive with existing MPI one-sided calls, while supporting more flexible progress rules and a richer set of functionality than the original MPI one-sided model. This work was published in the proceedings of HPC Asia 2009.

 
ESnet Connects STAR to Asian Collaborators

DOE’s Energy Sciences Network (ESnet), the Korea Research Environment Open Network (KREONET2), and the Global Ring Network for Advanced Applications Development (GLORIAD) recently achieved a sustained data transfer rate of 1 gigabit per second (Gbps), equivalent to transporting 120 minutes of video per second, between Brookhaven National Laboratory in New York and the Korea Institute of Science and Technology Information (KISTI) in Daejeon, South Korea. This capability will make 20 percent more data available to the nuclear physics community per year and close a digital divide between U.S. and Asian physicists.

According to Jerome Lauret, the software and computing project leader of the Solenoidal Tracker at RHIC (STAR) experiment based at the Department of Energy’s Brookhaven National Laboratory, the new steady 1 Gbps connection speeds up the distribution of research-ready STAR data in two ways. First, it allows the team to export a portion of the raw STAR data to KISTI for mining while BNL proceeds with other data-mining activities; the additional data-mining at KISTI will make 20 percent more STAR data available to the research community per year. Second, as KISTI mines the data, it will also store the results, so collaborators across China, Korea, and India will not have to wait for massive STAR datasets to transfer from New York; the information resides much closer to them and will flow much faster.

Contact. Jerome Lauret, jlauret@bnl.gov
 
Argonne Introduces New Computer Testbed

Argonne National Laboratory hosted a full-day workshop March 30 to introduce researchers to the NVIDIA Tesla C1060 GPU computer. The computer was acquired by the Mathematics and Computer Science Division, with funding from DOE, as an exploratory testbed for research and development. The testbed is built from nodes with two GPUs each, with 240 computing cores and 4 GB of memory per GPU. GPUs are dedicated graphics rendering devices that are probably best known from their use in games for manipulating and displaying computer graphics. The workshop was organized to enable interested researchers to explore the capabilities of a GPU-based system for capitalizing on heterogeneity in high-performance computing systems. Based on their initial experience, several groups — including researchers in genome sequencing, numerical libraries, and compiler design — plan to continue using the platform for research in parallel and numerical libraries and visualization.
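A natural first exercise on such a testbed is a device query through the standard CUDA runtime API, as in the sketch below; on a Tesla C1060 this reports 30 multiprocessors (8 scalar cores each, giving the 240 cores mentioned above) and 4 GB of global memory per GPU.

```c
#include <stdio.h>
#include <cuda_runtime.h>

/* Enumerate the GPUs visible to this node and report the resources
   most relevant to capacity planning: multiprocessors and memory. */
int main(void)
{
    int n = 0;
    cudaGetDeviceCount(&n);
    for (int d = 0; d < n; ++d) {
        cudaDeviceProp p;
        cudaGetDeviceProperties(&p, d);
        printf("GPU %d: %s, %d multiprocessors, %.1f GB memory\n",
               d, p.name, p.multiProcessorCount,
               p.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}
```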

Contact. Gail Pieper, pieper@mcs.anl.gov
 
PNNL Upgrades Cray XMT to Handle Larger and More Complex Programs

PNNL’s Center for Adaptive Supercomputing Software (CASS) is upgrading its Cray XMT to 64 processors and 512 GB of shared memory to allow CASS scientists to study the scalability of irregular algorithms used in national security, information processing, and power grid applications. These applications present some of the most difficult challenges in high-performance computing and cannot be run on any other computer system. With the upgrade, researchers will be able to run much larger and more complex problems and accelerate the development of new and improved algorithms.

The Cray XMT supercomputing system is a scalable, massively multithreaded platform with globally shared memory architecture for large-scale data analysis and data mining. The system is built for parallel applications that are dynamically changing, require random access to shared memory, and typically do not run well on conventional systems. Multithreaded technology is ideally suited for tasks such as pattern matching, scenario development, behavioral prediction, anomaly identification, and graph analysis.

Contact. John Feo, john.feo@pnl.gov
 
Unique Workflow Developed by PNNL for Data Management

SciDAC Scientific Data Management (SDM) researchers at PNNL have developed a prototype Kepler workflow for the Atmospheric Radiation Measurement program. Scientific applications are often structured as workflows that execute a series of interdependent, distributed software modules to analyze large data sets, with the order of task execution commonly controlled by a workflow engine. This workflow is unique in that it integrates the SDM Center’s Kepler scientific workflow platform with the PNNL-developed MeDICi Integration Framework (MIF). The MeDICi technology provides a scalable, component-based architecture that efficiently handles integration with heterogeneous, distributed software systems. The resulting solution adopts a layered architecture that promotes a clear separation of concerns between code integration, data management, and workflow application construction.

More importantly, the resulting workflow has the potential to allow scientists to modify and adapt the underlying Value Added Product (VAP) to meet their needs by changing the Kepler workflow, instead of requiring a developer to modify scripts. A paper on this work has been accepted but not yet published: J. Chase, I. Gorton, C. Sivaramakrishnan, J. Almquist, A. Wynne, G. Chin, and T. Critchlow, “Kepler + MeDICi: Service-Oriented Scientific Workflow Applications,” in Proceedings of the International Conference on Web Services (ICWS 2009), Los Angeles, CA, July 2009.

Contact. Terence Critchlow, terence.critchlow@pnl.gov
 
ALCF Researchers Improve Software Performance on the Blue Gene/P

Researchers from the Argonne Leadership Computing Facility (ALCF), Argonne’s Center for Nanoscale Materials (CNM), the Technical University of Denmark, and the University of Copenhagen are collaborating on GPAW, a software package for performing density functional theory calculations, a quantum mechanical method for computing the properties of matter. The collaboration’s primary goal is to improve the performance of GPAW on the IBM Blue Gene/P at the ALCF and thereby enable the CNM to characterize next-generation catalytic materials. Scaling GPAW to a larger number of Blue Gene/P nodes requires an additional layer of parallelization; the researchers have implemented an initial version of this algorithm and continue to improve it. Progress also was made on related performance improvements.

Contact. Nichols A. Romero, naromero@alcf.anl.gov
 

OUTREACH & EDUCATION:

SciDAC Researchers to Present at HPC Adoption ’09 Conference

David Keyes, PI for the SciDAC Towards Optimal Petascale Simulations (TOPS) project; David Skinner, head of the SciDAC Outreach Center; and Jeffrey Vetter, an investigator on the SciDAC Performance Engineering Research project, will give presentations at the HPC Adoption ’09 Conference being held May 11–13 in Burlingame, CA.

According to the conference website, “The computing industry is rapidly redefining itself by aggressively shifting away from increasingly higher clock speeds and smaller transistors towards multi-core architectures and alternative processing technologies including FPGAs and GPUs. Business users and software developers trained to think serially badly need new toolsets, business applications, and software integrated development environments (IDEs) that allow them to take advantage of increasingly complex parallel hardware. . . Join the leading industry, academic, and scientific minds at HPC Adoption-09 in a unique forum to address the challenges of HPC adoption and the latest trends driving this revolution.” Keyes will discuss “High Performance Computation and the Role of Algorithms” and Skinner will talk about “HPC Lifecycles in Scientific Computing: How Parallelism Happens.” The title of Vetter’s talk, slotted in the session on Emerging Technologies and the HPC Ecosystem, was not available.

 
ORNL Workshop Introduces Users to Cray XT5 System

The DOE-funded Oak Ridge Leadership Computing Facility (OLCF) and the National Science Foundation-funded National Institute for Computational Sciences held the 2009 Cray XT5 workshop, “Climbing to Petaflop on Cray XT,” at ORNL April 13–16. ORNL staff introduced the Cray XT5 system to principal investigators and their research teams. During the four-day workshop, users participated in hands-on sessions with ORNL staff to become familiar with the supercomputer’s new features. The staff also led seminars and gave presentations covering the architecture, issues, and effective programming of the Cray system.

“Engaging with computational and computer science users on a personal, face-to-face level afforded by our users’ meeting is the best way to ensure that their needs, both now and in the future, are met,” OLCF Director of Science Doug Kothe said. “It also helps to forge long-lasting research collaborations between center staff and the science teams with allocations on our systems.”

Contact. Jayson Hines, hinesjb@ornl.gov
 
ALCF Offers Leap to Petascale Workshop

INCITE users: Ready to run your research project on 40 racks of the Blue Gene/P?  Then come “Leap to Petascale” on May 27–29 at a workshop being held at the Argonne Leadership Computing Facility (ALCF). Users will learn about the ALCF and the petascale resources available to them. Then, ALCF performance engineers will help users scale and tune their applications on the 40 racks. Attendees must have ported their code to at least one rack of the Blue Gene/P and must have a current account with the ALCF.  This is an especially good opportunity for anyone considering applying for a 2010 INCITE award. Proposals for INCITE are due July 1, 2009.

Contact. Chel Lancaster, lancastr@alcf.anl.gov
 
ORNL Hosts Global Audience at HPSS Users’ Forum

An international audience shared problems and solutions to data storage challenges at HUF 2009, the annual High Performance Storage System (HPSS) Users’ Forum held March 11–13 in the Joint Institute for Computational Sciences auditorium at ORNL. Organizers expected 35 attendees, but more than 70 participants from national laboratories, universities, government, and private industry—from the U.S. and abroad—came to the meeting, which was hosted by the OLCF and the National Institute for Computational Sciences.

Stanley White, site administrator of HPSS operations at ORNL, and Arthur (Buddy) Bland, director of the Oak Ridge Leadership Computing Facility, gave the introductory talks. There were nine site presentations, in which managers described how they are configuring their storage systems to meet the needs of data generation. In a further 25 talks over two days, participants described the status of their individual HPSS systems as well as their plans and wish lists for new or enhanced features. Vendors that made presentations included DataDirect Networks, Sun, Spectra Logic, and IBM.

Contact. Jayson Hines, hinesjb@ornl.gov
 
LBNL’s David Bailey Gives Two Talks at Iowa’s Grinnell College

David Bailey, chief technologist for LBNL’s Computational Research Division, was the invited speaker for the April 9 talk in the Grinnell College Scholars’ Convocation Lecture Series. Bailey, who drew an audience of several hundred students, spoke on “Computing: The Third Mode of Scientific Discovery.” Later in the day, he also gave a talk in the college’s math department. According to the school web site, “Scholars’ Convocation enriches the College’s academic community by bringing notable speakers to campus. Implemented by George Drake ’56 during his term as College President, Convocations present topics of outstanding intellectual or cultural interest.”

Contact. David Bailey, DHBailey@lbl.gov

 

 
