ASCR Monthly Computing News Report - June 2008



The monthly survey of computing news of interest to ASCR is compiled by Jon Bashor (JBashor@lbl.gov) with news provided by ASCR Program Managers and Argonne, Fermi, Lawrence Berkeley, Lawrence Livermore, Los Alamos, Oak Ridge, Pacific Northwest and Sandia National labs. Contact information and links to additional information, where available, are included with each article.

In this issue: Research News, People, Facilities/Infrastructure, and Outreach.

RESEARCH NEWS:

Researchers Use Jaguar to Explore Mechanism for Proton Transfer in Water
A team of materials scientists led by Jorge Sofo of Penn State University and Thomas Schulthess of Oak Ridge National Laboratory (ORNL) has used the Jaguar supercomputer at Oak Ridge's Leadership Computing Facility (LCF) to successfully simulate the behavior of water in the presence of the common catalyst titanium dioxide. The work not only improves our understanding of a process that is already important in areas such as fuel cell design and the geosciences, but also prepares the way for simulations of ever more complex systems. "This whole simulation sets the stage for a lot more work on more complicated systems," explained Paul Kent, an ORNL staff member who worked extensively on the VASP computer application used in the research. "This is much more than a proof of concept because we've got a lot of science out of this, but the idea is obviously to move on to more complicated materials."
 
Specifically, the team simulated the process by which water passes protons from one molecule to another. The computational scientists were able to compare their results with results from a team of experimentalists led by Dave Wesolowski of ORNL, who evaluated the same system of water and titanium dioxide molecules using neutron-scattering techniques. The collaboration illustrates the benefits of experiment and computer simulation working together. "One of the cool things about this work is that the neutron scattering gives us a fingerprint for the dynamics of the water," Kent explained. "And we can, in our simulations, go off and compute that fingerprint as well."
Contact: Jayson Hines, hinesjb@ornl.gov
 
LANL Researchers Develop New Multilevel Multiscale Mimetic (M3) Method
Quantitative simulations of flow in highly heterogeneous porous media are needed to drive scientific discoveries as well as guide policy decisions. The multiscale challenge in these problems arises because the fine-scale spatial structure and temporal coupling strongly influence coarse-scale properties of the solution. Consequently, the direct numerical simulation of flows in large domains (e.g., reservoirs and aquifers) over long times remains intractable for even the most advanced supercomputers. Moreover, employing various spatially averaged parameters in a model of the same basic form is generally inadequate, providing only a qualitative description of the flow.
 
The new multilevel multiscale mimetic (M3) method, developed by J. D. Moulton, K. Lipnikov, and D. Svyatskiy of Los Alamos National Laboratory (LANL), is an important step in the development of a framework that will provide a quantitative description of flow in porous media. Specifically, in M3 the sub-grid modeling technique for single-phase flow proposed by Kuznetsov (http://dx.doi.org/10.1163/156939506779874617) is extended with a new robust and efficient method for estimating the flux coarsening parameters and is used recursively to generate a very accurate hierarchy of discrete models. The LANL researchers have demonstrated the potential of this approach for a two-phase flow model of water injection in a two-dimensional oil reservoir. There they achieved coarsening factors of up to 64 in each coordinate direction, in contrast to typical coarsening factors of 10 or less, and reduced the time of the pressure solve by a factor of 80. In addition, they demonstrated that with a simple and efficient temporal updating strategy for the coarsening parameters, they achieved accuracy comparable to the fine-scale solution of the two-phase model, but at a fraction of the cost. This work was recently published in J. Comput. Phys., 227(14), pp. 6727-6753, 2008 (http://dx.doi.org/10.1016/j.jcp.2008.03.029).
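For readers unfamiliar with multilevel coarsening, the sketch below illustrates the general idea of recursively generating a hierarchy of coarse discrete models for a one-dimensional heterogeneous diffusion (pressure) problem. It uses generic Galerkin-style pairwise aggregation as a stand-in for M3's mimetic flux-based coarsening; the permeability field, coarsening depth, and all names are illustrative, not taken from the LANL work.

```python
import numpy as np

def diffusion_matrix(k):
    """1D cell-centered diffusion (pressure) operator with harmonic-mean
    transmissibilities between cells; Dirichlet ends folded in."""
    n = len(k)
    t = 2.0 * k[:-1] * k[1:] / (k[:-1] + k[1:])   # interface transmissibilities
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i, i] += t[i];      A[i + 1, i + 1] += t[i]
        A[i, i + 1] -= t[i];  A[i + 1, i] -= t[i]
    A[0, 0] += 2.0 * k[0]     # Dirichlet condition at the left end
    A[-1, -1] += 2.0 * k[-1]  # Dirichlet condition at the right end
    return A

def build_hierarchy(A, levels):
    """Recursively coarsen by pairwise aggregation: A_coarse = P^T A P."""
    hierarchy = [A]
    for _ in range(levels):
        n = hierarchy[-1].shape[0]
        P = np.zeros((n, n // 2))
        for j in range(n // 2):
            P[2 * j, j] = P[2 * j + 1, j] = 1.0
        hierarchy.append(P.T @ hierarchy[-1] @ P)
    return hierarchy

# Highly heterogeneous permeability field on 64 cells (made up)
rng = np.random.default_rng(0)
k = 10.0 ** rng.uniform(-3, 3, 64)
for A in build_hierarchy(diffusion_matrix(k), 3):   # 64 -> 32 -> 16 -> 8 cells
    print(A.shape[0], "cells")
```

The M3 method replaces this generic aggregation with carefully estimated flux coarsening parameters, which is what sustains quantitative accuracy at coarsening factors as large as the 64 reported above.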
 
Moulton, Lipnikov, and Svyatskiy are continuing their work on developing this new class of multilevel multiscale mimetic techniques. They are focusing on developing new hierarchical error estimation and error control that will enable efficient and quantifiably accurate time evolution. They will explore the use of these features in sensitivity analysis and uncertainty quantification for stochastic models of the porous medium. In addition, they are working on extending the underlying principles of the M3 framework to other multiscale application areas.
Contact: David Moulton, moulton@lanl.gov
 
Lattice QCD Research Logs More than 100M Core Hours on ALCF's BG/P
Scientists conducting research in lattice QCD have used more than 100 million core hours on the Blue Gene/P at the Argonne Leadership Computing Facility (ALCF). The project aims to deepen the understanding of the interactions of quarks and gluons, the basic constituents of 99% of the visible matter in the universe, and will play a key role in ongoing efforts to develop a unified theory of the four fundamental forces of nature. INCITE resources are being used to generate gauge configurations with up, down, and strange quarks on lattices that are sufficiently fine-grained and have sufficiently small up and down quark masses to enable the extrapolation of key quantities to their physical values found in nature.
 
The BG/P has accelerated the generation of the gauge configurations. Significant progress has been made in simulations with two different implementations of the quarks: domain wall and staggered. The domain wall configuration generation is going extremely well, with a statistically meaningful ensemble now available for a lattice size of 32³ x 64. Generation of 48³ x 64 ensembles has also been demonstrated. These are the largest domain wall lattices ever attempted and will be the central focus as soon as more statistics from the smaller ensemble have been obtained. Substantial analysis for K meson physics is under way, and analysis needed to study nucleon structure is starting. For the staggered quarks, a set of runs with a lattice spacing of 0.06 femtometer (fm) is nearing completion, and a new ensemble with a spacing of 0.045 fm and lattice size of 64³ x 192 is about one-fourth complete. These are the most challenging staggered ensembles generated to date. These configurations will greatly improve the accuracy of the research team's determination of a wide range of physical quantities, including the decay properties of particles containing heavy quarks, which are important for testing our current theories of fundamental physics.
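To give a sense of the scale involved, the back-of-envelope calculation below estimates the size of a single gauge configuration on the largest staggered lattice mentioned above. The assumed storage layout (four SU(3) link matrices of 3x3 complex doubles per site) is a common convention, not a detail reported by the project.

```python
# Rough size of one gauge configuration on a 64^3 x 192 lattice,
# assuming four SU(3) link matrices (3x3 complex doubles) per site.
sites = 64**3 * 192                    # 50,331,648 lattice sites
reals_per_site = 4 * 3 * 3 * 2         # links x matrix entries x (re, im)
size_gb = sites * reals_per_site * 8 / 1e9
print(f"{sites:,} sites -> ~{size_gb:.0f} GB per configuration")   # ~29 GB
```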
Contact: James Osborn, osborn@alcf.anl.gov
 
Sandia-Led Team Finds a Better Way to Solve the Graphical Traveling Salesman Problem
Researchers at Sandia National Laboratories, Emory University, and Arizona State University have recently developed a new mathematical programming formulation for the graphical traveling salesman problem (GTSP). This is a variant of the classical TSP, in which a salesman must visit every city while traveling a minimum total distance; in GTSP, the intercity routes are restricted. The TSP has applications in stocking warehouses, very large-scale integration, and computational biology, and serves as a fundamental subroutine in many other optimization problems. The current solution technology for GTSP involves translating it to a dense TSP, in which the salesman can move between any pair of cities. The new formulation allows direct solution of the GTSP. When the problem is sparse - for example, when the number of legal city-to-city trips is proportional to the number of cities - this direct computation can be orders of magnitude faster than previous methods. In preliminary computational experiments with small examples, researchers have observed speedup proportional to the number of cities.
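The translation step that the new formulation avoids is easy to sketch. The toy code below (illustrative names and distances) converts a sparse GTSP instance into the dense TSP that prior solvers require, by filling each missing leg with the shortest legal route between the two cities. Every entry of the resulting matrix must be computed even though the input graph is sparse, which is exactly the overhead the direct formulation eliminates.

```python
import math

def to_dense_tsp(n, edges):
    """Translate a sparse GTSP instance into a dense TSP instance by
    replacing each missing city-to-city leg with the shortest legal
    route between the two cities (Floyd-Warshall all-pairs paths)."""
    d = [[0.0 if i == j else math.inf for j in range(n)] for i in range(n)]
    for i, j, w in edges:                 # the legal trips only
        d[i][j] = d[j][i] = min(d[i][j], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d   # dense distance matrix; a standard TSP solver applies

# Four cities, only four legal trips (a sparse instance)
dense = to_dense_tsp(4, [(0, 1, 2.0), (1, 2, 3.0), (2, 3, 1.0), (3, 0, 4.0)])
print(dense[0][2])   # 5.0: shortest legal route is 0 -> 1 -> 2
```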
Contact: Cindy Phillips, caphill@sandia.gov
 
PNNL Applies Machine Learning to Data-Intensive Problems
Pacific Northwest National Laboratory (PNNL) has a fast-growing body of research combining support vector machines (SVMs) with supercomputing to tackle innovative problems in bioinformatics, including published research in Bioinformatics and presentations at the Sixth International Conference on Machine Learning and Applications. SVMs are a statistical learning algorithm for classification and are especially powerful on noisy, nonlinear data of the kind common in biology. However, biological data is often not only noisy but extremely large. For example, for peptides composed of six residues (amino acids) there are over 64 million possibilities, each with a unique mass spectrum. Training SVMs at this scale is nearly impossible, but through the integration of statistical sampling routines and high-performance computing, SVM models for proteomics and sequence analysis have significantly improved the analytical task of identification. These capabilities will have a cascading effect on other problems in bioinformatics and biology, and the ability to apply SVMs to data-intensive problems offers predictive capability in many domains.
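A minimal sketch of the sampling idea, using scikit-learn's generic SVM on synthetic data as a stand-in for PNNL's peptide features and HPC implementation (the dataset, sample sizes, and kernel settings here are all hypothetical):

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic stand-in for a large, noisy biological dataset: rows are
# feature vectors (e.g., encoded peptides), y marks class membership.
rng = np.random.default_rng(1)
X = rng.normal(size=(1_000_000, 20))
y = (X[:, :5].sum(axis=1) + rng.normal(scale=2.0, size=len(X)) > 0).astype(int)

# Training on all one million rows is impractical; draw a random
# sample and train a nonlinear (RBF-kernel) SVM on it instead.
train_idx = rng.choice(995_000, size=5_000, replace=False)
model = SVC(kernel="rbf", C=1.0, gamma="scale")
model.fit(X[train_idx], y[train_idx])

# Evaluate on rows the training sample could not have touched.
print("held-out accuracy:", model.score(X[995_000:], y[995_000:]))
```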
Contact: Bobbie-Jo Webb-Robertson, webb-robertson@pnl.gov
 
Argonne Researchers Conduct Parallel Volume Rendering on the IBM Blue Gene/P
As data sizes increase, software volume rendering on supercomputer architectures presents an attractive alternative to rendering on graphics clusters. Parallel volume rendering algorithms have been implemented on supercomputers before, but now, for the first time, researchers at Argonne National Laboratory have implemented a parallel ray-casting volume rendering algorithm on the IBM Blue Gene/P and demonstrated its scalability to over 10,000 cores.
 
The algorithm is written using MPI for both communication and collective I/O and is based on a direct-send compositing approach. A set of experiments was run under a number of different conditions, including dataset size, number of processors, low- and high-quality rendering, offline storage of results, and streaming of images for remote display. The dataset used - 30 time steps from a supernova simulation of 200 time steps - was made available by researchers through the Department of Energy's SciDAC Institute for Ultrascale Visualization. The image size was chosen so that the number of pixels in one dimension of the image was twice the number of voxels in one dimension of the volume.
 
One large-scale result thus far involved an output image of 1600² pixels rendered from 864³ voxels in a frame time of approximately 6 seconds end to end, including I/O, which accounts for over 5 seconds of the total time. Researchers are investigating ways to further reduce or mitigate the effects of I/O latency in large visualizations such as this. In this test, each time step was 2.5 GB, or 0.6 billion voxels, and each resulting image was 2.5 megapixels. Based on these results, the researchers believe that the new method will be most useful for datasets greater than several gigavoxels and image sizes larger than several megapixels.
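The direct-send compositing step is compact enough to sketch. In the simplified mpi4py version below, each rank "renders" a full-size RGBA fragment for its subvolume (random data stands in for real ray casting), sends each screen strip directly to the rank that owns it, and composites the incoming fragments front to back with the "over" operator. Treating rank order as the view-dependent depth order, the image size, and all names are simplifying assumptions, not details of the Argonne implementation.

```python
# Run with a rank count that divides 256, e.g.: mpiexec -n 4 python ds.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
H = W = 256                       # final image dimensions

# Stand-in for ray casting this rank's subvolume: a full-size
# premultiplied-alpha RGBA fragment with modest opacity.
rng = np.random.default_rng(rank)
frag = rng.random((H, W, 4)).astype(np.float32) * 0.25

# Direct send: rank r owns horizontal strip r of the final image and
# receives that strip of every other rank's fragment directly.
strips = np.ascontiguousarray(frag.reshape(size, H // size, W, 4))
incoming = np.empty_like(strips)
comm.Alltoall(strips, incoming)

# Composite front to back with the "over" operator; rank order stands
# in for depth order here (incoming[r] came from rank r).
out = np.zeros((H // size, W, 4), dtype=np.float32)
for piece in incoming:
    a = out[..., 3:4]             # accumulated opacity so far
    out += (1.0 - a) * piece
result = comm.gather(out, root=0)  # root assembles the final image
if rank == 0:
    print("final image:", np.vstack(result).shape)
```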
Contact: Gail Pieper, pieper@mcs.anl.gov
 
Blue Ribbon Panel Produces Report on DOE's Applied Math Program
An independent panel composed of prominent figures from the applied and computational math community has studied possible new directions for the DOE applied mathematics program. Their report, "Applied Mathematics at the U.S. Department of Energy: Past, Present, and a View to the Future," has been released and is available for comment and discussion in the community. The report outlines several areas of research for advancing mathematics for modeling, simulation, and analysis of complex systems.
 
The report is based on the work of a committee chaired by David Brown of Lawrence Livermore National Laboratory. Panel members included John Bell, Lawrence Berkeley National Laboratory; Donald Estep, Colorado State University; William Gropp, University of Illinois Urbana-Champaign; Bruce Hendrickson, Sandia National Laboratories; Sallie Keller-McNulty, Rice University; David Keyes, Columbia University; J. Tinsley Oden, The University of Texas at Austin; Linda Petzold, University of California, Santa Barbara; and Margaret Wright, New York University.
 
Interested members of the applied math community are encouraged to read the report at http://brownreport.siam.org and submit comments or suggestions through that website to SIAM, which will collect those comments, post them, and forward them to the relevant people at DOE.
Contact: David Brown, dlb@llnl.gov

PEOPLE:

LBNL's Juan Meza to Receive 2008 SACNAS Distinguished Scientist Award
Juan Meza, head of the High Performance Computing Research Department at Lawrence Berkeley National Laboratory (LBNL), has been named recipient of the 2008 SACNAS Distinguished Scientist Award. The award will be presented during the 2008 SACNAS National Conference in Salt Lake City, Utah, on Thursday, October 9. The mission of SACNAS (Society for Advancement of Chicanos and Native Americans in Science) is to encourage Chicano/Latino and Native American students to pursue graduate education and obtain the advanced degrees necessary for science research, leadership, and teaching careers at all levels. The organization is celebrating its 35th anniversary this year. For more information, go to http://www.sacnas.org.
Contact: Juan Meza, JCMeza@lbl.gov
 
Sandia Researcher Will Be Plenary Speaker at Upcoming SIAM Annual Meeting
Karen Devine, a researcher in Sandia's Scalable Algorithms Department and a co-investigator in SciDAC's CSCAPES institute and ITAPS center, will deliver an invited plenary talk at the SIAM annual meeting in San Diego, July 7-11. Dr. Devine's talk, entitled "Software Design for Scientific Applications," will focus on the competing demands of agility in research and production-quality software development. She will draw examples from the Zoltan and Trilinos toolkits and Sandia's Rapid Production Development components. Dr. Devine also organized (with Sandian Mike Heroux) a related minisymposium including speakers from industry, academia, and the national laboratories.
Contact: Scott Collis, sscoll@sandia.gov
 
LLNL's Chandrika Kamath Co-Chairs the SIAM Data Mining Conference
Lawrence Livermore National Laboratory (LLNL) researcher Chandrika Kamath co-chaired the steering committee for the 2008 SIAM Conference on Data Mining held in Atlanta, April 24-26, with responsibility for selecting and managing the team that put the conference together. The SIAM Data Mining Conference provides a venue for researchers who are addressing problems in extracting knowledge from large, complex, and often noisy datasets. More information can be found at http://www.siam.org/meetings/sdm08. Kamath is also one of three founding editors-in-chief of the Wiley journal Statistical Analysis and Data Mining. The first issue was published in February 2008.
Contact: Chandrika Kamath, kamath2@llnl.gov
 

FACILITIES/INFRASTRUCTURE:

Argonne's Blue Gene/P Named World's Fastest for Open Science, Third Overall
Argonne National Laboratory's IBM Blue Gene/P high performance computing system is now the fastest supercomputer in the world for open science, according to the semiannual TOP500 List of the world's fastest computers. The TOP500 List was announced on June 18 during the International Supercomputing Conference in Dresden, Germany.
 
The Blue Gene/P - known as Intrepid and located at the Argonne Leadership Computing Facility (ALCF) - also ranked third fastest overall. The Blue Gene/P has a peak performance of 557 teraflops (557 trillion calculations per second). Intrepid achieved a speed of 450.3 teraflops on the Linpack benchmark used to measure speed for the TOP500 rankings.
Contact: Angela Hardin, ahardin@anl.gov
 
Network Speed and Capacity Boosted for Princeton Labs
A project is under way to significantly boost the network speed and capacity for scientists in different research labs within Princeton University. ESnet is working with partners to build a 10-gigabit-per-second network that will replace the 45-megabit-per-second one between ESnet's main network and the Princeton Plasma Physics Laboratory (PPPL). The new network also will benefit high energy physics researchers and climate modelers at the Geophysical Fluid Dynamics Laboratory (GFDL), both of which also are located on Princeton University's Forrestal Campus. With the availability of cutting-edge instruments and supercomputers, the researchers are able to carry out larger experiments that also produce a tremendous amount of data. As a result, they require a more robust network to easily send and receive data.
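Some illustrative arithmetic (line rates only; real-world throughput would be lower) shows why the upgrade matters for the data volumes these researchers move:

```python
# Time to move a hypothetical 1 TB dataset over the old and new links.
bits = 1e12 * 8
print(f"45 Mb/s: {bits / 45e6 / 3600:.1f} hours")   # ~49 hours
print(f"10 Gb/s: {bits / 10e9 / 60:.1f} minutes")   # ~13 minutes
```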
 
"By providing a larger, dedicated link, ESnet will greatly enhance our ability to share our data and to collaborate with other scientists," said Jeff Flick, network engineer and security officer at GFDL. "This new network represents an important enhancement to NOAA's GFDL IT infrastructure."
 
ESnet is working with MAGPI (Mid-Atlantic Gigapop in Philadelphia for Internet2), a nonprofit optical network developer, to set up the new network from Washington, D.C., to the campus, where an ESnet router will distribute data among PPPL, GFDL, and high energy physics researchers. The project began in May and is scheduled for completion before October this year.
Contact: Joe Burrescia, JHBurrescia@lbl.gov
 
Sandia Demonstrates Quad-Core Catamount on Oak Ridge LCF's Jaguar System
Sandia has completed the Catamount XT4 Risk Mitigation project by successfully running a quad-core version of the Catamount Light Weight Kernel Operating System on ASCR's largest Cray XT4 system, Jaguar, located at the Oak Ridge Leadership Computing Facility. Four applications (GTC, VH1, POP, and AORSA) were tested at various job sizes for 24 hours. On average, the Catamount performance was 3.8% better than Compute Node Linux, with the improvement on individual tests varying from -14% to 44%. In all cases, Catamount outperformed Compute Node Linux for the tests involving the higher core counts.
Contact: Sue Kelly, smkelly@sandia.gov
 
Oak Ridge LCF Launches New File Management System
Oak Ridge LCF users will find it easier to manage the data files from their calculations with the advent of a new center-wide shared file system that saves all files to one location. "Spider" (as it has been named) has entered its "capacity-oriented" phase, meaning that while storage is available, the system is not yet at full speed. When operating at peak performance, it will replace multiple islands of file systems in various locations on the Oak Ridge LCF network with a single scalable system that will serve all the Oak Ridge LCF systems and will connect to the InfiniBand and Ethernet internal networks. Because all simulation data will eventually reside on Spider, users will not need to transfer files among multiple computers and data management systems.
 
By the end of 2008, Spider will have been expanded to support the petaflop computer, which will use the new file management system exclusively and will have no local scratch files. At that point, Spider will provide 10 PB of storage space and over 200 GB per second of bandwidth and will be mounted on all major Oak Ridge LCF systems. Deployment of Spider will be a welcome development for researchers. Having a single repository of simulation data will increase their productivity, allowing them to spend more time pursuing their research goals. By simplifying the use of the data analysis and visualization tools, it may encourage more researchers to take advantage of them and thus increase the value of their data.
Contact: Jayson Hines, hinesjb@ornl.gov
 
Nuclear Physics Workshop Provides Forum for Networking Discussions
Scientists and program managers attended a DOE workshop in May to identify the ESnet networking requirements for nuclear physics research. Sponsored by the DOE Office of Science, the Nuclear Physics Network Requirements Workshop provided a forum to communicate with ESnet about the ways in which scientists from the nuclear physics research program use the network. ESnet will incorporate the feedback into its infrastructure and service planning processes.
 
"We don't ask the scientists what their network requirements are, but what their science process is - how they use the network and move their data," said Eli Dart, an ESnet engineer who organized all three workshops. "The workshop goals are to understand how the scientists use the network, and how their usage will change over time."
 
The workshop is part of the ESnet governance structure and is designed to help ensure that ESnet meets the needs of researchers from all six program offices within the Office of Science, both in the immediate future and for years to come. ESnet held workshops for scientists in the Basic Energy Sciences (BES) and Biological and Environmental Research (BER) programs last summer, and a workshop for the Fusion Energy Sciences program took place in March this year. Each year, ESnet runs workshops for two program offices. The final reports have been posted for the BES workshop and the BER workshop.
Contact: Eli Dart, eddart@lbl.gov
 

OUTREACH:

ASCR Workshop Addresses Analysis of Petascale Data
The ASCR Applied Mathematics Program sponsored a workshop on Mathematics for Analysis of Petascale Data (MAPD) from June 3-5 in Rockville, MD. The goal of this workshop was to engage mathematical scientists and applications researchers to define a research agenda for developing the next-generation mathematical techniques needed to meet the challenges posed by petascale data sets. Specific objectives were to understand the needs of various scientific domains, delineate appropriate mathematical approaches and techniques, determine the current state of the art in these approaches and techniques, and identify the gaps that must be addressed to enable the effective analysis of large, complex data sets in the next five to ten years.
 
In attendance were 60 scientists and mathematicians, with expertise distributed across ten application domains and six mathematical and data analysis disciplines. It was, by all accounts, an energetic and engaging meeting, despite an intense storm and a loss of all power for the last two hours of the workshop. The workshop agenda and presentations are available for viewing and download, and a workshop report is expected to be available from the ASCR web site by the end of July.
Contact: Philip Kegelmeyer, wpk@sandia.gov
 
ASCR Researchers Participate in Workshop on Scalable Applications
From June 3-5, Sandia's Computer Science Research Institute hosted a workshop, "Next-Generation Scalable Applications: When MPI-Only Is Not Enough," bringing together experts from labs, universities, and industry to discuss the state of scalable computing and future directions. The workshop allowed experts from all areas of computing - architectures, programming models, libraries and applications - to discuss the key issues facing scalable computing for science and engineering in a time of rapid change in the field. The workshop website containing copies of the presentations is available at http://csri.sandia.gov/Workshops/2008/NextGenerationScalableApps, and a workshop report will be posted there in the near future.
Contact: Mike Heroux, maherou@sandia.gov
 
SciDAC Workshop Focuses on Combinatorial Scientific Computing
Sandia's Computer Science Research Institute hosted a workshop on Combinatorial Scientific Computing and Petascale Simulations 2008 (CSCAPES) in Santa Fe on June 10-13. CSCAPES (pronounced "seascapes") is a SciDAC Institute established to address the challenge of harnessing the potential of high-end computers in solving complex scientific problems. Thirty-five researchers from various backgrounds attended: CSCAPES scientists, SciDAC application scientists, academic collaborators, and representatives from industry. The workshop included tutorials on topics such as load balancing and automatic differentiation. Important goals of the workshop were to foster new interactions and to prioritize future research directions. More information on the workshop is available at http://www.cs.sandia.gov/CSRI/Workshops/2008/CSCAPES. Similar workshops are planned for future years.
Contact: Erik Boman, egboman@sandia.gov
 
Fermilab Hosts National Laboratories Information Technology (NLIT) Summit
Fermilab hosted this year's National Laboratories Information Technology (NLIT) Summit from May 11-14 in Chicago. The NLIT Summit brings together representatives from across the DOE complex to facilitate an exchange of information technology (IT) best practices and ideas. This helps strengthen the IT infrastructure and provides cost efficiencies by avoiding the reinvention of solutions to similar problems. This year's summit was sponsored by the NLIT Society through the participation of over sixty IT vendors and assistance from the Federal Business Council, Inc.
 
Two hundred and fifty people from across the DOE national laboratory community attended the event, which focused on IT interest areas such as enterprise architecture and IT governance, ITIL and ISO 20000 experiences, green computing, unclassified computer security tools, an open source applications showcase, and help desk, service desk, and issue tracking.
 
The Summit was chaired by Mark Kaletka. Vicky White, Head of the Fermilab Computing Division, welcomed attendees and vendors to the summit. Keynote addresses for each day of the summit were presented by Ruth Pordes, Executive Director of the Open Science Grid; Scott Studham, CIO of Oak Ridge National Laboratory; and Chuck Powers, Manager of the National Renewable Energy Laboratory's IT Infrastructure and Operations Group. InDiCo, the CERN- and Fermilab-developed meeting and conference tool, was used for the conference registration, contributions, and agenda. For further information and to download presentations of interest, visit http://www.nlit08.org. The 2009 NLIT Summit will be hosted by Oak Ridge National Laboratory from May 31 to June 3 in Knoxville, Tennessee.
Contact: David Ritchie, ritchie@fnal.gov
 
LBNL to Host Conference on Computational Methods in Water Resources
The XVII International Conference on Computational Methods in Water Resources (CMWR 2008) will be held July 6-10 in San Francisco. Lawrence Berkeley National Laboratory will be the host and a major player, with 60 scientists from the lab's Earth Sciences, Computational Research, and Physical Biosciences divisions contributing to presentations. Contributing authors and presenters also include 62 researchers from other DOE laboratories, including Idaho, Lawrence Livermore, Los Alamos, Oak Ridge, Pacific Northwest, Sandia, and Savannah River national laboratories, as well as scientists from universities and government agencies around the world. More information on CMWR 2008 is available at http://esd.lbl.gov/CMWR08.
 
Upcoming ALCF and Blue Gene Consortium Workshops
Argonne will host a Leap to Petascale Workshop on July 29-31 at the Argonne Leadership Computing Facility (ALCF). Attendees will learn about the petascale resources available. Then, IBM and ALCF performance engineers will help them scale and tune their applications on 40 racks of Blue Gene/P. Sponsors include the ALCF, Blue Gene Consortium, and Argonne National Laboratory.
Contact: Chel Lancaster, lancastr@alcf.anl.gov
 
The Blue Gene Consortium Open Source Workshop on August 12-13 at Argonne will provide consortium members with an understanding of the BG/P open source community organization and business model, cover ongoing and potential research activities, describe IBM's involvement, and brainstorm on future activities. Sponsors include the BG Consortium, Argonne Mathematics and Computer Science Division, ALCF, and IBM.
Contact: Ed Jedlicka, jedlicka@mcs.anl.gov
 
Oak Ridge LCF's Jaguar Adds Style to "Kung Fu Panda"
Supercomputing is a long way from the glitz and glamour of Tinseltown, but not as far as one might think. Take, for example, DreamWorks Animation's latest blockbuster "Kung Fu Panda," which recently benefited from the Oak Ridge LCF's Jaguar supercomputer. A research group from DreamWorks used Jaguar to develop image generation algorithms. "They had previously been rendering in batch mode, with lots of animators working with low resolution models to try things out," explained Sean Ahern of the Oak Ridge LCF. The team then had to pick their favorites and render higher resolution images overnight, which was very time consuming. To accelerate this process and reflect changes to the model in real time, DreamWorks researcher Evan Smith wanted to come up with a new breed of image-generation algorithms that could take advantage of multiple processing cores.
 
Whereas scientists use visualization for increased understanding of data, animators use it for entertainment. "They don't do it to help you understand the science, they do it to tell the story," Ahern said, adding, "but they drive realistic image generation." Realism in image generation comes down to one thing - light. Getting the play of light right - its behavior as it reflects, refracts, and diffuses - turns out to be the key factor in making a computer-generated image appear real. The calculations involved in ray tracing, or tracing the theoretical path of light rays as they interact with the surfaces of the model, make image generation "incredibly computing intensive," said Ahern.
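At its core, each of those calculations is a geometric intersection test repeated millions of times. The toy function below (a standard ray-sphere intersection, not DreamWorks' code) shows the kind of arithmetic that makes image generation so computing intensive when it must be performed for every ray against every surface:

```python
import numpy as np

def ray_sphere(origin, direction, center, radius):
    """Distance along a ray to the nearest hit on a sphere, or None.
    Solves the quadratic for where the ray meets the sphere surface;
    the ray direction is assumed to be a unit vector."""
    oc = origin - center
    b = 2.0 * np.dot(direction, oc)
    c = np.dot(oc, oc) - radius**2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                   # ray misses the sphere entirely
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 0 else None       # nearest hit in front of the origin

# A ray down the z-axis hits a unit sphere centered 4 units away at t = 3
d = np.array([0.0, 0.0, 1.0])
print(ray_sphere(np.zeros(3), d, np.array([0.0, 0.0, 4.0]), 1.0))   # 3.0
```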
Contact: Jayson Hines, hinesjb@ornl.gov
 
ORNL and Oak Ridge High School Team Up to Find Global Temperature
What is the earth's global temperature? After all, different geographies have starkly different climates, so how does one compute a "global" temperature given the cold of Alaska, the heat of the Sahara, and all of the various other climates in between? That is the question Oak Ridge High School (ORHS) graduating seniors Casey Jaeger and Helen Ren chose for their 2007-08 Math Thesis project for ORHS mathematics teacher Benita Albert. Jaeger's research ultimately led him to the Cray XT4 supercomputer, named Jaguar, at the Oak Ridge Leadership Computing Facility (LCF). Both students worked with mentor John Drake, group leader of the Computational Earth Sciences Group in the Computer Sciences and Mathematics Division at ORNL.
 
"Because the temperature measured at one place will differ from the temperature measured at another," said Drake, "we have to ask, how can we express that, and we have mathematical language we can use." Jaeger began doing his calculations on his laptop, but their complexity required more computing muscle, such as that available on Jaguar. When Jaeger began sampling multiscale functions, the laptop's processing took more than 25 hours to derive a simple average, so he and Drake created a version of his algorithm that would run on a parallel computing system. "This is an example of a simple question with a complicated answer," Drake said. "The whole global warming debate assumes we have a clear answer. This kind of research puts kids in the position to question assumptions and to examine firsthand the scientific premises underlying an important issue."
Contact: Jayson Hines, hinesjb@ornl.gov
 
Oak Ridge LCF Introduces Supercomputing to Summer Students
Oak Ridge's Leadership Computing Facility recently hosted a "Supercomputing Crash Course" to familiarize summer students with the science of high-performance computing. Approximately 60 students and faculty members attended two workshops, hosted by LCF staff members Arnold Tharrington and Rebecca Hartman-Baker. The workshops aimed both to educate the summer students already involved in computational science and to gauge the interests of students who are new to supercomputing.
 
An overview of the UNIX operating system was a major theme of the workshops, as they were aimed at students with little to no UNIX experience. The instructors also briefly discussed MPI, demonstrating simple MPI programs on the LCF's 263-teraflop Jaguar supercomputer, one of the world's fastest high-performance computing systems. "The main objective [of the MPI workshop] was to get the students to understand the MPI programming model," said Tharrington. "It was a great experience for both the students and the instructors," added Hartman-Baker. "I look forward to conducting similar workshops in the future."
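A first MPI exercise of the kind the workshop demonstrated typically has every process compute a piece of a problem and one process collect the results. A minimal sketch using mpi4py (the workshop's actual examples and language are not specified here):

```python
# Run with: mpiexec -n 4 python first_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
local = sum(range(rank, 1000, size))        # this rank's share of 0..999
total = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print("sum of 0..999 =", total)         # 499500 on any rank count
```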
Contact: Jayson Hines, hinesjb@ornl.gov
 
PNNL Seminar Features Microsoft VP Talk on Smart Cyberinfrastructure
Pacific Northwest National Laboratory (PNNL) hosted Dr. Tony Hey, Corporate Vice President of the External Research Division of Microsoft Research, as the inaugural speaker of the Frontiers in Computational & Information Sciences Seminar, held June 2 at the lab. The new seminar series features invited speakers from industry, universities, and government to discuss innovations and advancements in the computer sciences.
 
Dr. Hey's presentation, "eScience, Semantic Computing and the Cloud: Towards a Smart Cyberinfrastructure for eScience," explored the idea of and need for semantic-oriented computing technologies and showcased eScience projects that have successfully applied such technologies to facilitate and enhance information sharing and discovery. "Most of the challenges of the future will have to do with the data - navigation and visualization will help to make sense of this," Hey said. His presentation can be downloaded from http://www.pnl.gov/computing/highlights/pdf/hey_presentation.pdf.

 

 
