ASCR Monthly Computing News Report - March 2008



The monthly survey of computing news of interest to ASCR is compiled by Jon Bashor (JBashor@lbl.gov) with news provided by ASCR Program Managers and Argonne, Fermi, Lawrence Berkeley, Lawrence Livermore, Los Alamos, Oak Ridge, Pacific Northwest and Sandia National labs. Contact information and links to additional information, where available, are included with each article.


RESEARCH NEWS:

Berkeley to Host One of Two Microsoft/Intel Parallel Programming Centers
Computing sciences researchers at Lawrence Berkeley National Laboratory are part of a team leading a new research center, created in partnership with Intel and Microsoft, to accelerate developments in parallel computing and bring the benefits of multicore processing to mainstream consumer and business computers. On March 18, Microsoft Corp. and Intel Corp. announced the creation of two Universal Parallel Computing Research Centers (UPCRC), the first at UC Berkeley and another at the University of Illinois at Urbana-Champaign. This is considered the first joint industry-university research alliance of this magnitude in the United States focused on mainstream parallel computing.
 
The Berkeley team will be led by David Patterson, a UC Berkeley professor of computer science and a scientist in Berkeley Lab's Computational Research Division. The funding for UC Berkeley's UPCRC, which Patterson directs, forms the foundation for the campus's Parallel Computing Laboratory, or Par Lab, a multidisciplinary research project exploring the future of parallel processing. Over the next five years, Intel and Microsoft expect to invest a combined $20 million in the two university centers, including $10 million at UC Berkeley. In addition to Patterson, LBNL members of the team include NERSC Division Director Kathy Yelick and Professor James Demmel, who is also a researcher in Berkeley Lab's Computational Research Division. John Shalf, a researcher in computer architectures at NERSC, also participated in meetings that led to the creation of Par Lab.
Contact: Ucilia Wang, uwang@lbl.gov
 
SciDAC Team Develops Petascale-Ready Version of CCSM’s Atmospheric Model
The SciDAC project "Modeling the Earth System" is focused on creating a first-generation Earth system model based on the Community Climate System Model (CCSM). As these improvements will require petascale computing resources, the project is also working to ensure that CCSM is ready to fully utilize DOE's upcoming petascale platforms. The main bottleneck to petascale performance in Earth system models is the scalability of the atmospheric dynamical core. Team members at Sandia, ORNL and NCAR have thus been focusing on integrating and evaluating new, more scalable dynamical cores (based on cubed-sphere grids) in the atmospheric component of the CCSM. The first model successfully integrated uses a new formulation of the spectral element method that locally conserves both mass and energy and has positivity-preserving advection.
 
This dynamical core allows the CCSM atmospheric component to use true two-dimensional domain decomposition for the first time, leading to unprecedented scalability demonstrated on LLNL's BG/L system. The model scales well out to 96,000 processors with an average grid spacing of 25 km. Even better scalability will be possible when computing at a global resolution of 10 km, DOE's long-term goal (DOE ScaLeS Report, 2004). As part of the project's model verification work, a record-setting one-year simulation was just completed on 64,000 processors of BG/L. This initial simulation was run with prescribed surface temperatures and without the CCSM land and ice models. Coupling with the other CCSM component models is the team's current focus.
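The jump in available parallelism between a 25 km grid and the 10 km target can be made concrete with a rough, back-of-envelope estimate (this is not the CCSM grid generator; the Earth surface area and quasi-uniform spacing below are simplifying assumptions):

EARTH_SURFACE_KM2 = 5.1e8  # approximate surface area of the Earth in km^2

def grid_columns(spacing_km):
    """Approximate number of horizontal columns of a quasi-uniform global grid."""
    return EARTH_SURFACE_KM2 / spacing_km ** 2

for spacing in (25.0, 10.0):
    print(f"{spacing:5.1f} km spacing -> ~{grid_columns(spacing):,.0f} columns")

# Roughly 816,000 columns at 25 km versus about 5,100,000 at 10 km: around six
# times more horizontal work to distribute, which is why a 10 km model can
# productively use far more processors than the 96,000 demonstrated here.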
Contact: Mark Taylor, mataylo@sandia.gov
 
INCITE Team Simulates the Path from Open to Closed Potassium Channel
A team led by Professor Benoît Roux of Argonne National Laboratory and the University of Chicago has simulated in unprecedented detail the voltage-gated potassium channel, a membrane protein that responds to spikes of electricity by changing shape to allow potassium ions to enter a cell. Roux and his team are using the leadership computing facility at Oak Ridge National Laboratory to model the channel in open and closed states and determine the gating charge driving the change in conformation between the two states. If gating malfunctions - and it can go awry in various ways - cardiovascular or neurological disease can result. The results were presented at the Joint Meeting of the Biophysical Society held last month in Long Beach, Calif.
 
Roux and colleagues used a computer program called Rosetta to predict the three-dimensional structure of the channel protein and found that simulations of the open and closed states are stable. They simulated the motion of all atoms in the system using a molecular dynamics code for parallel processing called NAMD, which employs Newton's laws and an energy function to simulate protein behavior in steps on the order of one femtosecond, or one quadrillionth of a second. By looking at how the channel moves in tiny, ultrafast increments, researchers can build a biologically meaningful picture of its dynamics. In 2007 the researchers used an INCITE award of 2.5 million processor hours on the NCCS's Cray XT Jaguar supercomputer to model the behavior of up to 350,000 atoms. They received a 2008 grant of 3.5 million hours on Jaguar to continue their studies.
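To give a sense of scale, here is a bit of illustrative arithmetic (not taken from the team's NAMD inputs) showing why femtosecond timesteps make long simulations so expensive:

timestep_fs = 1.0      # integration step of roughly 1 femtosecond (1e-15 s)
target_ns = 1000.0     # 1 microsecond of simulated time, expressed in nanoseconds

steps_needed = target_ns * 1.0e6 / timestep_fs   # 1 ns = 1,000,000 fs
print(f"Steps to reach 1 microsecond: {steps_needed:,.0f}")  # 1,000,000,000

# Each step recomputes the forces on all ~350,000 atoms, so microsecond
# timescales demand on the order of a billion force evaluations, the kind of
# workload behind multimillion processor-hour INCITE allocations.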
Contact: Dawn Levy, levyd@ornl.gov
 
Scientific Community Pursuing Transformational Cyber Security Capabilities
As an outcome of a Cyber Summit held September 26-27 at Sandia National Laboratories, the Office of Science was given responsibility for developing an R&D initiative to address the overall national concern about cyber threats for DOE, and it convened an October meeting in Washington, D.C., to begin this process. In response, a grassroots community is taking shape. This community, organized by Deb Frincke (Pacific Northwest National Laboratory), Charlie Catlett (Argonne National Laboratory), Ed Talbot (Sandia National Laboratories), and Brian Worley (Oak Ridge National Laboratory), is working to develop a science-driven, proactive, innovative R&D agenda. The community seeks to apply scientific principles from mathematics, computer science, complex systems and other disciplines to pursue transformational cybersecurity capabilities and architectures, enabling a quantitative and proactive approach for addressing cyber threats for both classified and unclassified needs. Those interested in participating are welcome to contribute to papers through the community wiki (https://wiki.cac.washington.edu/display/doe/Home) and are invited to join the twice-monthly teleconferences (information also on the wiki). The community is organizing a sequence of town hall meetings, the first of which brought some 50 scientists and practitioners together at Argonne National Laboratory in February. Frincke briefed the Advanced Scientific Computing Advisory Committee on behalf of the community on February 27, 2008. A second town hall meeting is anticipated in May or June.
Contact: Deb Frincke, deborah.frincke@pnl.gov
 
Wave Propagation Code WPP Ported to BG/L at LLNL
A team of seismologists and applied mathematicians led by Artie Rodgers and Anders Petersson at LLNL has ported the earthquake simulation tool WPP to the BG/L system at LLNL. The overall goal of the project is to study the propagation of waves in nature, whether they be seismic, electromagnetic or sound waves, all of which are governed by essentially the same mathematical equations. The team has spent several years developing new embedded boundary numerical techniques and encapsulating that mathematics research in an open source software simulation package called WPP. In the past month, this software was ported to BG/L and used to run a simulation of an earthquake in Iran on 32,000 processors. The computational domain was 3000 km on a side and was modeled using 26.3 billion grid points. The effort was funded by the DOE NA-22 program (NNSA), but it could not have been accomplished without the fundamental research investment made by the ASCR office.
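As a rough illustration of what that grid implies, the estimate below assumes a uniform grid over a roughly cubical domain; it is not the actual WPP mesh, which has a shallower depth extent and local refinement:

total_points = 26.3e9   # grid points reported for the run
domain_km = 3000.0      # edge length of the computational domain

points_per_dim = total_points ** (1.0 / 3.0)   # ~2,970 points per edge
spacing_km = domain_km / points_per_dim        # ~1 km between grid points
print(f"~{points_per_dim:,.0f} points per dimension, "
      f"~{spacing_km:.2f} km grid spacing")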
Contact: Lori Diachin, diachin2@llnl.gov
 
LANL's PAL Analyzes, Predicts Performance of Upgraded Jaguar at ORNL
The ASCR-funded Performance and Architecture Lab (PAL) at Los Alamos, which analyzes and models the performance of applications and systems of interest to the Office of Science, has completed modeling how two applications will perform on the Cray XT4 supercomputer "Jaguar" at Oak Ridge National Laboratory. PAL has developed novel methodologies for highly accurate performance modeling of full applications running on the largest systems in use anywhere. In response to a specific request from ASCR, PAL modeled the performance of the fusion code GTC and the combustion code S3D to predict how well the applications will perform on the upgraded system.
 
The analysis, published in a widely distributed report, shows that:
  • both GTC and S3D should scale very well on the upgraded Jaguar system;
  • the performance improvements from the upgraded system over its previous configuration will be 1.8x for GTC and 2.1x for S3D at 8,192 nodes;
  • communication contention in the 3D mesh can be significant at large scale, but because both applications are compute bound, the impact on overall run time is not large (a simplified sketch of this effect appears below); and
  • guided by PAL's highly accurate models, system optimization in the areas of routing and contention could lead to significant performance improvements.
Future work will involve performance measurements on the actual system, using both PAL’s microbenchmark suite and applications suite (including GTC and S3D), and using the models to identify, quantify, and solve any revealed sources of performance degradation.
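To illustrate the compute-bound point from the list above, here is a minimal, hypothetical performance-model sketch in the spirit of PAL's analytical approach; it is not PAL's actual GTC or S3D model, and every number below is invented for illustration:

def predicted_step_time(compute_s, message_bytes, bandwidth_bps, latency_s,
                        contention_factor=1.0):
    """Per-timestep runtime = compute + (possibly contended) communication."""
    comm_s = latency_s + message_bytes / bandwidth_bps
    return compute_s + contention_factor * comm_s

# Invented parameters for illustration only.
compute_s = 0.90   # compute time per step (seconds)
msg_bytes = 2.0e6  # bytes exchanged with neighbors per step
bandwidth = 2.0e9  # effective link bandwidth (bytes/second)
latency = 5.0e-6   # per-message latency (seconds)

base = predicted_step_time(compute_s, msg_bytes, bandwidth, latency)
contended = predicted_step_time(compute_s, msg_bytes, bandwidth, latency,
                                contention_factor=3.0)
print(f"no contention: {base:.4f} s, 3x contention: {contended:.4f} s, "
      f"slowdown: {(contended / base - 1.0) * 100.0:.2f}%")

# Even tripling the communication cost raises the step time by well under one
# percent here, because the step is dominated by computation; this is the same
# qualitative conclusion the PAL report reaches for GTC and S3D.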
 
SciDAC's CEDPS Plays Star Role in STAR HEP Experiment
Argonne researchers participating in the DOE SciDAC Center for Enabling Distributed Petascale Science (CEDPS) project are working with the high energy physics STAR experiment to leverage virtualization in distributing STAR applications. STAR works with complex experimental applications comprising approximately 2 million lines of C++ and Fortran code developed over a decade by more than 100 scientists. Such applications require complex, customized environments that rely heavily on the right combination of compiler versions, available libraries, and dynamically loaded external libraries (depending on the task to be performed). In addition, STAR leans heavily on environment validation to ensure reproducibility and result uniformity across different platforms. These characteristics make it hard for STAR to leverage Grid computing: even if the experiment's complex codes can be made to work on platforms in various distributed centers, there is no guarantee of consistency among those centers.
 
To overcome these difficulties, Argonne researchers use virtualization techniques, which enable the STAR community to dynamically create virtual clusters that support STAR applications and can be run on a variety of resources. Building on proof of concept work last year, the researchers developed and improved methods allowing virtual clusters to be created dynamically. Most recently, the Argonne team ran full-scale STAR application tests on a 100-node virtual cluster hosted by Amazon's Elastic Compute Cloud, EC2, a Web service that provides resizable compute capacity in the cloud. The CEDPS group is continuing to support the STAR community in assessing the usefulness of a virtual-machine (VM) platform for their applications. The STAR applications will run on the Nimbus cloud at the University of Chicago and (for full-scale operation) on the Amazon EC2 platform.
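For readers unfamiliar with cloud provisioning, the sketch below shows the general idea of programmatically standing up a small virtual cluster on EC2. It uses the present-day boto3 Python library rather than the Nimbus/CEDPS tooling the STAR work relied on, and the image ID, instance type, and node count are hypothetical:

import boto3

ec2 = boto3.resource("ec2")

# Launch four identical worker nodes from a pre-built machine image that
# already contains the validated software environment (compilers, libraries,
# experiment code), so every node of the virtual cluster is consistent.
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical image ID
    InstanceType="m5.large",           # hypothetical instance type
    MinCount=4,
    MaxCount=4,
)

for inst in instances:
    inst.wait_until_running()
    inst.reload()
    print(inst.id, inst.private_ip_address)

# A contextualization step would then configure the nodes to see one another
# (shared filesystem mounts, batch scheduler) so the group behaves like an
# ordinary analysis cluster.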
Contact: Gail Pieper, pieper@mcs.anl.gov
 
Economist.com Cites Efficiency of LBNL's Proposed "Climate Computer"
A March 4 Economist.com article titled "Cool it!" describes how the lifetime cost of running a big data center now outstrips the cost of buying the hardware. Part of the reason is that the processors are often more powerful than is necessary for the tasks they actually do. The article cites Berkeley Lab's proposed "Climate Computer" as an energy-efficient alternative. Using arrays of less powerful and hence less power-hungry processors, the Climate Computer could consume "a hundredth of the power of an existing data centre without too much loss of computational oomph." Read The Economist article at http://www.economist.com/displaystory.cfm?story_id=10795585. The white paper by Michael Wehner, Lenny Oliker, and John Shalf that presents the Climate Computer concept - "Towards Ultra-High Resolution Models of Climate and Weather" - can be found at http://www.nersc.gov/projects/SDSA/reports/uploaded/IJHPCA06_CAM_final.pdf.
 
PNNL's Data-Intensive Computing Approach Presented at Inaugural HPC Horizons Conference
Deborah Gracio, director of Pacific Northwest National Laboratory's Computational and Statistical Analytics Division, spoke on the use of data-intensive and data-streaming applications in the field of high performance computing (HPC) at the inaugural meeting of HPC Horizons, held March 11-13 in Palm Springs, CA. The two-day conference was attended by 125 people and featured speakers representing both traditional and emerging HPC applications, including a keynote address by Dr. J. Craig Venter, well known throughout the industry for his work in mapping the human genome. Gracio discussed how data-intensive computing can help manage the massive data ingestion that is often a limiting factor in scientific discovery. She noted that advances in high-throughput instruments can easily overwhelm data storage and analytic capabilities, giving the example of proteomics, where next-generation mass spectrometers perform at about 1 percent utilization because running at higher utilization rates would require orders of magnitude higher bandwidth from the instrument to the computer system. PNNL's approach performs real-time analysis on dedicated HPC platforms to avoid storing such enormous data sets. Gracio also discussed the use of multithreaded architectures for applications that require irregular access to large-scale data, where PNNL has demonstrated high levels of scalability.
Contact: Deborah Gracio, debbie.gracio@pnl.gov
 
Scalability of the Auxiliary Space Maxwell Solver Demonstrated
Applied math researchers Panayot Vassilevski and Tzanio Kolev at LLNL demonstrated the scalability of the Auxiliary-space Maxwell Solver (AMS) from the hypre multigrid solver library on a large-scale electromagnetic simulation. The problem modeled a complicated coil in air using an unstructured mesh with local refinement. A weak scaling study was performed using approximately 600,000 unknowns per processor on up to 1,968 processors. Results demonstrated very slow growth in both the number of solver iterations and the total time to solution as the problem size varied from 9 million to 1.2 billion unknowns. This is believed to be the first provably scalable method for this type and size of problem, and the results of the weak scaling study are the fastest reported to date. This work was presented at the recent SIAM PP08 conference held in Atlanta, GA.
Contact: Lori Diachin, diachin2@llnl.gov
 
Data Profiling - An Investigative Tool for Scientists
Scientists and engineers often must carry out complex deterministic simulations involving considerable computational "noise." Obtaining derivatives for these noisy simulations is difficult and unreliable, and users frequently resort to derivative-free algorithms. Until recently, however, there has been no general method for comparing the performance of derivative-free algorithms on diverse computationally expensive applications - an important consideration for users operating within a restricted computational budget.
 
Researchers at Argonne National Laboratory and Cornell University have now developed a method for analyzing the performance of such algorithms. The method, called data profiles, was motivated by work being performed as part of the DOE SciDAC Universal Nuclear Energy Density Functional project. Like traditional performance profiles, data profiles are cumulative distribution functions. Unlike performance profiles, however, which provide relative performance measurements, data profiles provide crucial information about the percentage of problems that can be solved within a specified computational budget (expressed in terms of simplex gradients).
 
Using these complementary profiling techniques, together with a convergence test, the researchers benchmarked three typical derivative-free optimization solvers. The results were surprising, in that the model-based solver performed better than geometry-based solvers, even for noisy and piecewise-smooth problems.
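A minimal sketch of how a data profile can be computed follows, using the standard formulation from this line of work: a solver "solves" a problem within a budget of alpha simplex gradients if it meets the convergence test in at most alpha times (n + 1) function evaluations, where n is the number of variables. The benchmark numbers below are invented for illustration:

def data_profile(evals_to_solve, dims, budgets):
    """Fraction of problems solved within each budget of simplex gradients.

    evals_to_solve[p]: function evaluations the solver needed on problem p
                       (use float("inf") if the convergence test was never met).
    dims[p]: number of variables of problem p.
    budgets: budgets alpha, measured in simplex gradients (n_p + 1 evaluations).
    """
    n_problems = len(dims)
    return [
        sum(1 for t, n in zip(evals_to_solve, dims) if t <= alpha * (n + 1))
        / n_problems
        for alpha in budgets
    ]

# Invented benchmark: five problems and the evaluations one solver needed.
evals = [12, 40, float("inf"), 75, 30]
dims = [2, 4, 3, 6, 4]
print(data_profile(evals, dims, budgets=[5, 10, 20]))   # [0.2, 0.6, 0.8]
# Read as: 20% of problems solved within 5 simplex gradients, 60% within 10,
# and 80% within 20, which is exactly the "what can I solve on my budget"
# question a user with restricted computing time needs answered.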
Contact: Gail Pieper, pieper@mcs.anl.gov
 
Multithreaded Architectures for Emerging Applications Show Promise
Researchers at the Pacific Northwest National Laboratory (PNNL) are creating new applications and mapping existing applications to the Cray XMT platform with promising results. The Cray XMT, with its unique "massively multithreaded" architecture and large global memory, can be used to successfully execute applications that require access to terabytes of data arranged in a random and unpredictable manner. These applications, such as data discovery, bioinformatics and power grid analysis, are difficult to map to current distributed memory systems (where each processor has an independent memory) and have difficulty achieving scalable (proportional to the number of processors used) performance on such systems.  Researchers at PNNL have focused on two challenging applications.
 
One application, in the cyber security domain, involves large sets of network traffic data. Analysis is performed to detect anomalies in packet headers (snippets of Internet communication between computers) in order to locate and characterize network attacks, and to help predict and mitigate future attacks. The second application is in the biological domain and involves solvers for Boolean satisfiability problems with applications to biological network analysis. The goal is to determine whether the variables of a Boolean formula can be assigned values so that the formula evaluates to "true." For example, the solution to Boolean equations can determine whether or not electronic components will function according to their design. Preliminary results indicate that these challenging applications can achieve scalable parallelism on the XMT platform beyond what is possible on mainstream HPC platforms. Faster solutions to these problems can lead to earlier detection of cybersecurity threats as well as an enhanced and more precise understanding of biological properties.
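For readers unfamiliar with satisfiability, the toy sketch below illustrates the underlying question on a three-variable formula; it is a brute-force illustration only, not the scalable solver PNNL runs on the XMT:

from itertools import product

def brute_force_sat(num_vars, clauses):
    """Return a satisfying assignment for a CNF formula, or None.

    Each clause is a tuple of nonzero integers: a positive integer i means
    variable x_i, and a negative integer -i means "not x_i".
    """
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: bits[i] for i in range(num_vars)}
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
print(brute_force_sat(3, [(1, -2), (2, 3), (-1, -3)]))

# Real instances from biological network analysis have far too many variables
# for brute force, hence the interest in scalable solvers that exploit the
# XMT's large shared memory.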
 
Study Shows Certain Benchmarks Do Not Accurately Represent TLB Behaviors of Real Applications
ORNL Future Technologies Group members Collin McCurdy and Jeff Vetter, along with Alan Cox of Rice University, will present their paper "Investigating the TLB Behavior of High-End Scientific Applications on Commodity Microprocessors" in Austin, TX, this April at the 2008 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS'08). The paper is the culmination of work, undertaken as part of the PetaSSI FastOS project, seeking to understand the translation lookaside buffer (TLB) behavior of scientific applications. The analysis shows that two benchmark suites widely taken to represent scientific application behavior (SPEC CPU, from the Standard Performance Evaluation Corporation, and the High Performance Computing Challenge suite) are not representative of the TLB behavior of important full-scale applications. Furthermore, the paper demonstrates that false conclusions drawn from benchmark TLB performance can have significant ramifications for application performance.
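Some illustrative arithmetic helps explain why full-scale applications stress the TLB; the entry count and page sizes below are typical assumptions for commodity processors of that era, not measurements from the paper:

tlb_entries = 512                                 # assumed data-TLB entries
for page_bytes in (4 * 1024, 2 * 1024 * 1024):    # 4 KB versus 2 MB pages
    coverage_mb = tlb_entries * page_bytes / 2**20
    print(f"{page_bytes // 1024:>5} KB pages -> {coverage_mb:,.0f} MB covered")

# With 4 KB pages a 512-entry TLB covers only 2 MB of memory, so an application
# touching a multi-gigabyte working set irregularly misses in the TLB
# constantly, while kernels that stream over small, regular arrays (as many
# benchmarks do) largely do not. Large (2 MB) pages raise coverage to 1 GB.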
Contact: Dawn Levy, levyd@ornl.gov

PEOPLE:

ORNL Researchers Win Outstanding Mentor Awards
ORNL Computer Science and Mathematics Division researchers Forrest Hoffman, Nagiza Samatova, Sudharshan Vazhkudai and Tatiana Karpinets were given Oak Ridge Associated Universities' Outstanding Mentor Awards. The award is given to honor the outstanding commitment ORNL scientists and engineers make to students and teachers participating in the Laboratory's education programs. The keynote speaker for the ceremony was William Valdez, Director of the Office of Science's Office of Workforce Development for Teachers and Scientists. ORNL Director Thom Mason and Deputy Director for Science and Technology James Roberto presented the awards.
Contact: Dawn Levy, levyd@ornl.gov
 
LBNL Staff Edit, Contribute to Performance Characterization Issue of IJHPCA
Lenny Oliker of LBNL's Computational Research Division (CRD) and Rupak Biswas of NASA Ames Research Center guest edited a two-issue special edition of the International Journal of High Performance Computing Applications (IJHPCA) entitled "Performance Characterization of the World's Most Powerful Supercomputers." The first issue (February 2008, Vol. 22, No. 1) has just been published. Subscribers can read it at http://hpc.sagepub.com/current.dtl.
 
A number of staff from LBNL's CRD and NERSC divisions contributed articles to this issue:
  • Oliker and Andrew Canning of CRD, along with Jonathan Carter and John Shalf of NERSC and Stéphane Ethier of Princeton Plasma Physics Laboratory, wrote "Scientific Application Performance on Leading Scalar and Vector Supercomputing Platforms."
  • Hongzhang Shan and Erich Strohmaier of CRD, with Ji Qiang of Berkeley Lab's Accelerator and Fusion Research Division, wrote "Performance Analysis of Leading HPC Architectures with Beambeam3D."
  • Charles Rendleman of CRD and Mike Welcome of NERSC co-authored a paper with Bronis R. de Supinski et al., "BlueGene/L Applications: Parallelism on a Massive Scale."
Contact: Ucilia Wang, uwang@lbl.gov
 
LBNL Climate Researcher Contributes to Reports on Global Warming Effects
Michael Wehner, a climate modeling researcher in Berkeley Lab's Computational Research Division, has contributed to two national reports on the impacts of climate change on transportation. According to a report issued in March by the National Research Council, climate change will affect every mode of transportation in the U.S. The greatest impact is expected to result from flooding of roads, railways, transit systems, and airport runways in coastal areas because of rising sea levels and surges brought on by more intense storms. Though the impacts of climate change will vary by region, it is certain they will be widespread and costly in human and economic terms, and will require significant changes in the planning, design, construction, operation, and maintenance of transportation systems.
 
The results were published in one of the two reports that contained research by Wehner. The first report, "The Potential Impacts of Climate Change on U.S. Transportation," draws upon five papers commissioned by the Transportation Research Board, which, like the National Research Council, is part of the National Academies. Wehner co-authored "Climate Variability and Change with Implications for Transportation," one of the papers used to produce the report. The second report, "Impacts of Climate Change and Variability on Transportation Systems and Infrastructure: Gulf Coast Study, Phase I," provides an assessment of the vulnerabilities of transportation systems in the region to potential changes in weather patterns and related impacts, as well as the effect of natural land subsidence and other environmental factors in the region.
Contact: Ucilia Wang, uwang@lbl.gov
 
INCITE Researcher Robert Harrison Explains MADNESS to NCCS Staff
Veteran INCITE researcher Robert Harrison recently participated in the NCCS's Seminar Series to discuss MADNESS (Multiresolution ADaptive Numerical Environment for Scientific Simulation), a novel programming framework that could potentially benefit researchers in a number of areas. Harrison, a computational chemist with ORNL, outlined the various levels of MADNESS, the lowest of which is a programming environment that makes it easy to write parallel programs without worrying about the more technical aspects of parallel computing. The second level provides fast numerical computation in many dimensions with guaranteed precision. It provides a very high-level programming interface (in terms of physical functions and operators) while striving to express essentially all available parallelism. Finally, Harrison discussed emerging applications in chemistry, molecular physics, and nuclear physics that are built within the framework.
Contact: Dawn Levy, levyd@ornl.gov
 

FACILITIES/INFRASTRUCTURE:

ESnet Holds Workshop to Gather Data on Networking Needs of Fusion Community
Scientists and program managers gathered in Gaithersburg, Maryland, in March for a DOE Office of Science-sponsored workshop to identify the ESnet networking requirements for fusion research. The Fusion Energy Sciences network requirements workshop provided a forum to communicate with ESnet about the ways in which scientists from the Fusion Energy Sciences research program use the network. ESnet will incorporate the feedback into its infrastructure and service planning processes.
 
The workshop is part of the ESnet governance structure, and is designed to help ensure that ESnet meets the needs of researchers from all six program offices within the DOE Office of Science. ESnet held workshops for scientists in the Basic Energy Sciences and Biological and Environmental Research programs last summer. Each year, ESnet will run workshops for two program offices. Networking requirements can change because of new research facilities, upgrades to existing facilities and changes in the science process, as well as funding and policy changes.
Contact: Eli Dart, EDDart@lbl.gov
 

OUTREACH:

NCCS's Doug Kothe Discusses Petascale Survey in HPCwire
The HPCwire newsletter recently published an interview with Doug Kothe, the director of science for the National Center for Computational Sciences at Oak Ridge National Laboratory. Kothe elaborated on a recent user survey intended to gauge the needs and capabilities of researchers interested in running applications on the organization's petascale machines, scheduled to come online in 2008. "The survey's main goal was twofold: first, to elicit and analyze scientific application requirements for current and planned leadership systems out to the petascale; and second, to identify applications that would qualify for early access to ORNL's 250-teraflop and 1-petaflop systems," Kothe said.
 
Kothe pointed out the many similarities in the codes surveyed and said that while there are numerous challenges involved in petascale computing, the potential payoff is huge. "For the CHIMERA astrophysics code, the expectation is to increase the number of variables from 63 today to more than 1,000," said Kothe. "With the LAMMPS biology code, today the users are modeling the dynamics of 700,000-atom systems for 5 to 10 nanoseconds of model time per day of simulation time. With a petaflop system, users hope to increase to modeling multimillion-atom systems for 0.1 to 1.0 microsecond per day of simulation time." Read the article at http://www.hpcwire.com/hpc/2150728.html.
 
ALCF's Introduction to INCITE Workshop Series a Success
Attended by team members from 14 of the 20 INCITE projects for 2008 at the Argonne Leadership Computing Facility, the ALCF's March 4-5 "INCITE Getting Started" workshop was well received and quite successful. By the workshop's conclusion, all teams present had made meaningful progress. With few exceptions, the teams had code running on the Blue Gene/P and left the workshop ready to move into the next phase of scaling and performance tuning. Additionally, the INCITE project teams made preparations to begin their science runs on the ALCF's 100-teraflop Blue Gene/P by the end of March.
 
The March 6 "Introduction to the BlueGene/P" workshop for Blue Gene Consortium members introduced researchers to the architecture and capabilities of the BG/P, as well as providing hands-on assistance for coding, porting, and tuning. It was jointly hosted by the ALCF and the Blue Gene Consortium. One of the attendees commented on his scaling results for 32-molecule and 256-molecule water benchmarks running on the ALCF BG/P Surveyor system: "Notably, the 256 results are about twice as fast as on Blue Gene/L. Blue Gene/P's superior network really shines in this benchmark. We feel this promises excellent scaling."  Several researchers in attendance at the workshop successfully ported and tuned their applications with the help of both ALCF and IBM staff.
Contact: Cheryl Drugan, cdrugan@mcs.anl.gov
 
LLNL's Bronis de Supinski and Martin Schulz Co-Present Tutorial at ASPLOS '08
LLNL computer science researchers Bronis de Supinski and Martin Schulz co-presented a tutorial with their colleagues from Harvard and Cornell entitled "Learning and Inference Tutorial (LIT) for Large Design and Parameter Spaces" at the Thirteenth International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS '08) held in Seattle, WA. ASPLOS is a multidisciplinary conference for research that spans the boundaries of hardware, computer architecture, compilers, languages, operating systems, networking, and applications. The tutorial focused on techniques such as clustering, association and correlation analysis that have been developed for purposes of simulator-driven design evaluation and application performance modeling and prediction across a wide range of possible input parameters.
Contact: Lori Diachin, diachin2@llnl.gov
 
SciDAC Outreach Center Is Featured in Scientific Computing Magazine
An article by SciDAC Outreach Center Lead David Skinner titled "Reaching Out to the Next Generation of HPC Users" appeared in the January 2008 issue of Scientific Computing magazine. In the article, Skinner discusses the SciDAC Outreach Center's efforts to help new research communities gain access to HPC resources. Skinner leads the SciDAC Outreach Center as well as NERSC's Open Software and Programming Group. The article is available on the Scientific Computing website.
Contact: David Skinner, DESkinner@lbl.gov
 
ORNL Offers Supercomputing Time to Universities
Oak Ridge National Laboratory will grant access to its supercomputing systems to university students and faculty through a collaborative program with Oak Ridge Associated Universities. Two grants will be awarded each year, with each recipient team receiving $75,000 for three years. The laboratory will make available its leading computer systems, relevant staff, and possibly other necessary resources to those teams that receive grants, helping them to perform the best research possible.
Contact: Dawn Levy, levyd@ornl.gov
 
NERSC's Bill Kramer to Discuss HPC Storage Issues at Conference
NERSC General Manager Bill Kramer will be a keynote speaker at the Storage Networking World Conference next month in Florida, presenting the technology and services provided by NERSC for archiving scientific data. Computerworld, a technology magazine and website, is hosting the conference. In his talk, titled "NERSC - Extreme Storage and Computation for Science," Kramer will discuss NERSC's storage, networking and computational requirements, as well as the current and future deployments of the NERSC Global Filesystem, a key component in the center's strategy to provide an integrated set of systems and services for handling petabytes of data in a highly parallel environment. The conference will be held April 7-10. Read more at http://www.snwusa.com/agenda.html.
Contact: Bill Kramer, WTKramer@lbl.gov
 
Bobby Whitten Speaks at University of Tennessee Science Forum
On February 22, Bobby Whitten of the National Center for Computational Sciences (NCCS) was the speaker at the University of Tennessee Science Forum. Whitten outlined the colorful history of Oak Ridge National Laboratory, and more specifically the history and possible future of the growing NCCS. After detailing the increasing role of supercomputing in today's most pressing scientific problems, such as clean energy and astrophysics, Whitten touched on the possible future and impact of tomorrow's generation of elite computers, wowing the audience with numbers nearly incomprehensible to the human brain; for instance, the fact that a petascale computer will be capable of performing a thousand trillion (quadrillion) calculations per second.
Contact: Dawn Levy, levyd@ornl.gov

 

 
