ASCR Monthly Computing News Report - February 2012

This monthly survey of computing news of interest to ASCR is compiled by Jon Bashor (JBashor@lbl.gov) with news provided by Argonne, Fermi, Lawrence Berkeley, Lawrence Livermore, Los Alamos, Oak Ridge, Pacific Northwest and Sandia national labs.

In this issue:

Special Announcement: INCITE Program Provides Access to Supercomputing Resources

Research News
New Mathematical Method Reveals Where Genes Switch On or Off
ANL Researchers Release New Software Kit for Solving Large-Scale Optimization Applications
LBNL Researchers Develop Tools for Identifying Effective Carbon Capture Technologies
Argonne Reports on Finding Functionals for Fission
Researchers Use NERSC to Plot a Roadmap for Engineering Piezoelectricity in Graphene
PNNL Researchers Scaling Up Codes to Meet Power Grid Real-Time Requirements
Climate Scientists Compute in Concert

People
Berkeley Mathematicians Sethian, Saye Receive 2011 Cozzarelli Prize
LBNL's John Shalf's Paper Named among the Best in History of HPDC Conference
Argonne's Sven Leyffer Coedits New Book on Mixed Integer Nonlinear Programming
Energy Secretary Chu Visits ORNL
Sandia Researcher Bochev Visits Oberwolfach to Discuss Optimization Work
Berkeley's Kathy Yelick Invited Speaker at NITRD 20th Anniversary Symposium

Facilities/Infrastructure
ORNL Completes First Phase of Titan Supercomputer Transition
Berkeley Lab Breaks Ground for New Computational Research Facility

Outreach and Education
Berkeley Lab Staff Mentor High School Girls in Science Education App Development
Washington, D.C. Symposium to Highlight Science Enabled by Hybrid Supercomputing
Workshop Prepares HPC Users for Titan
Workshop Informs Attendees about Innovative Hardware and Methods using Accelerators

Special Announcement: INCITE Program Provides Access to Supercomputing Resources

Open to researchers from academia, government labs, and industry, the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) Program is the major program for gaining access to Leadership Computing Facilities at Argonne and Oak Ridge national laboratories. INCITE aims to accelerate breakthrough science by awarding, on a competitive basis, time on these facilities' supercomputers. For more information about INCITE, see http://www.doeleadershipcomputing.org/.

For more information about proposal preparation, join one of two "Preparing for the 2013 INCITE Call for Proposals" webinars. The sessions will give both prospective and returning users the opportunity to get specific answers to questions about the INCITE proposal and review process. A question-and-answer period will follow each presentation.

Prospective INCITE proposal PIs are invited to respond to a Request for Information by April 6 to inform the INCITE management of their proposal topics. See http://hpc.science.doe.gov/allocations/incite/ for details. The information requested is not a prerequisite for proposal submittal, nor will it limit any requests made in your INCITE proposal.

Research News

New Mathematical Method Reveals Where Genes Switch On or Off

Developmental biologists at Stanford University, using computing resources at the U.S. Department of Energy's National Energy Research Scientific Computing Center (NERSC), have applied a mathematical method from signal processing to biochemistry, using it to reveal the atomic-level details of protein-DNA interactions with unprecedented accuracy. They hope this method, called "compressed sensing," will speed up research into where genes are turned on and off, and they expect it to have applications in many other scientific domains as well.

All of the cells in an individual human body contain exactly the same DNA, which provides the blueprint for building and operating a complete organism. Yet cells in different organs have different structures and perform different functions. "What distinguishes between a cell in the eye as opposed to a cell in the elbow are the subset of genes that are expressed in those cells, and what determines that are the transcription factors - they're the programming of the cell," says Mohammed AlQuraishi, a postdoctoral researcher in the Developmental Biology Department at Stanford.

Transcription factors are proteins that bind to specific DNA sequences to either activate or block the expression of a gene. There are approximately 2,000 human transcription factors, making them the single largest family of human proteins. Understanding exactly where transcription factors bind with DNA could help answer many crucial questions in biology. But determining this experimentally requires significant labor and financial resources, so a computational solution is highly desirable.
Read more.
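
The article stops short of showing the machinery, so purely as an illustration of the compressed-sensing idea, the sketch below recovers a sparse signal from a small number of random linear measurements using iterative soft-thresholding (ISTA). Everything here - sizes, data, and parameters - is invented for the demo and is not the Stanford group's code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a length-200 signal with only 10 nonzeros,
# observed through 60 random linear measurements y = A @ x.
n, m, k = 200, 60, 10
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

# ISTA: a gradient step on ||y - Ax||^2 followed by soft-thresholding,
# which enforces the sparsity (l1) prior of compressed sensing.
step = 1.0 / np.linalg.norm(A, 2) ** 2
lam = 0.01
x = np.zeros(n)
for _ in range(2000):
    r = x + step * A.T @ (y - A @ x)                          # gradient step
    x = np.sign(r) * np.maximum(np.abs(r) - step * lam, 0.0)  # shrink

print("relative recovery error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))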

ANL Researchers Release New Software Kit for Solving Large-Scale Optimization Applications

Researchers seeking to solve large-scale optimization problems on high-performance architectures have faced numerous challenges, ranging from scattered support for parallel computation and lack of reuse of linear algebra software to the reality of working with large, often poorly structured legacy codes for specific applications. The Toolkit for Advanced Optimization (TAO) developed at Argonne National Laboratory is designed to address these challenges.

The latest release, TAO 2.0, provides several new algorithms, including POUNDERS, for solving nonlinear least squares problems when no derivatives are available and function evaluations are expensive, and LCALM, for solving optimization problems with partial differential equation constraints based on a linearly constrained augmented Lagrangian method. Also included is the capability to select any of the TAO line search methods regardless of which TAO algorithm is in use, and users can create new line search algorithms that may be better suited to their applications. TAO is built on top of the PETSc framework to enable reuse of external tools, and several changes in the new release bring TAO into tighter alignment with PETSc design principles. TAO also no longer has separate abstract classes; PETSc objects are now used directly, making TAO applications much easier to create and maintain for users familiar with PETSc programming.
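
TAO 2.0 itself is C code layered on PETSc, and no example ships with this announcement. The fragment below sketches the programming model through petsc4py's later TAO bindings (TAO was eventually merged into PETSc); method names are quoted from memory of the petsc4py interface and should be checked against its documentation before use.

```python
import numpy as np
from petsc4py import PETSc

def objgrad(tao, x, g):
    # Objective and gradient callback for the Rosenbrock test function.
    a = x.getArray(readonly=True)
    f = (1.0 - a[0])**2 + 100.0*(a[1] - a[0]**2)**2
    g.setArray([-2.0*(1.0 - a[0]) - 400.0*a[0]*(a[1] - a[0]**2),
                200.0*(a[1] - a[0]**2)])
    return f

x = PETSc.Vec().createSeq(2)      # initial guess (zeros)
tao = PETSc.TAO().create(PETSc.COMM_SELF)
tao.setType(PETSc.TAO.Type.LMVM)  # quasi-Newton; the line search is pluggable
tao.setObjectiveGradient(objgrad)
tao.setFromOptions()              # e.g. -tao_ls_type to swap line searches
tao.solve(x)
print("solution:", x.getArray())  # expect approximately [1, 1]
```

The pluggable line search illustrated by setFromOptions mirrors the TAO 2.0 feature described above: the line search can be chosen independently of the solver algorithm.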

TAO is suitable for both single-processor and massively parallel architectures. Recent applications using TAO include time-dependent density functional theory for quantum chemistry, laser-induced thermotherapy in biomedical engineering, image classification in machine learning, two-Skyrmion interactions in physics, and crack formation in materials science.

The TAO 2.0 Users Manual (ANL-MCS-TM-322, January 20, 2012), as well as information about applications, publications, and licensing of the open-source software, is available at the TAO website, http://mcs.anl.gov/tao.

LBNL Researchers Develop Tools for Identifying Effective Carbon Capture Technologies

About half of the electricity used in the United States is produced by coal-burning power plants that spew carbon dioxide (CO2) into the atmosphere. To reduce these emissions, many researchers are searching for porous materials that can filter out the CO2 generated by these plants before it reaches the atmosphere, a process commonly known as carbon capture. But identifying these materials is easier said than done.

"There are a number of porous substances-including crystalline porous materials, such as zeolites, and metal-organic frameworks-that could be used to capture carbon dioxide from power plant emissions," says Maciej Haranczyk, a scientist in the Lawrence Berkeley National Laboratory's (LBNL's) Computational Research Division.

In the category of zeolites alone, Haranczyk notes, there are around 200 known materials and 2.5 million structures predicted by computational methods. That's why Haranczyk and colleagues have developed a computational tool that can help researchers sort through vast databases of porous materials to identify promising carbon capture candidates, and at record speeds. They call it Zeo++. Using Zeo++, researchers have already sifted through one such database of millions of materials and identified a few that could outperform current technologies.

The tool, he notes, works not by simulating each atom of a material, but by mapping what isn't there: the voids in the materials.
Download the Zeo++ database.
Read more.
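
Zeo++'s actual analysis rests on Voronoi decompositions of the pore network; purely to illustrate the idea of characterizing a material by its empty space, the sketch below estimates the void fraction of a toy periodic structure by testing grid points against atomic spheres. All numbers are invented and this is not the Zeo++ algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "crystal": 30 atoms of radius 1.35 A in a 10 A periodic box.
box = 10.0
atoms = rng.uniform(0, box, size=(30, 3))
radius = 1.35

# Probe a uniform grid; a point belongs to the void if it lies outside
# every atomic sphere (minimum-image convention handles periodicity).
g = np.linspace(0, box, 40, endpoint=False)
pts = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1).reshape(-1, 3)
d = pts[:, None, :] - atoms[None, :, :]
d -= box * np.round(d / box)                   # wrap displacements
dist = np.linalg.norm(d, axis=-1).min(axis=1)  # distance to nearest atom
print("void fraction:", np.mean(dist > radius))
```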

Argonne Reports on Finding Functionals for Fission

Under the DOE-funded Universal Nuclear Energy Density Functional (UNEDF) SciDAC collaboration, researchers have been conducting a study of nuclear fission based on nuclear density functional theory (DFT) and its extensions. The goal is to deliver fission models capable of providing nuclear data not only of high quality but also with quantified uncertainties. The quality of a DFT calculation relies on the form and parameterization of the underlying energy density functional. The challenge is to obtain an optimal fit with a minimal number of runs. Researchers at Argonne National Laboratory have now successfully met this challenge, publishing their results in Physical Review C.

Building on earlier optimization results (UNEDF0), the researchers enlarged the dataset by adding ground-state masses of three deformed actinide nuclei and excitation energies of fission isomers in three nuclei. Using the Argonne-developed code POUNDERS, Practical Optimization Using No Derivatives (for Squares), the researchers then ran 218 simulations for each nucleus in the dataset, using 80 compute nodes on Argonne's Laboratory Computing Resource Center cluster, for a total of 5.67 hours. Good agreement was achieved with the experimental data on masses and separation energies. Moreover, the POUNDERS optimization required 10 times fewer runs than are needed with traditional methods; a similar optimization could previously have consumed a month of computations. The most striking feature of the new UNEDF1 functional, however, is its ability to reproduce the empirical fission barriers in the actinide region: the quality of UNEDF1 predictions for inner and outer fission barriers is comparable to that obtained in more phenomenological models. Indeed, UNEDF1 gives a much-improved description of the fission barriers in Pu-240 and neighboring nuclei.
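
POUNDERS is a model-based, derivative-free least-squares solver distributed with TAO. As a conceptual stand-in only, the sketch below fits a two-parameter toy model to noisy "data" by minimizing a chi-squared objective with SciPy's derivative-free Powell method, treating each model evaluation as an expensive black box; the functional form and data are invented for illustration and have nothing to do with UNEDF1.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Toy "observables": a two-parameter model plus noise, standing in
# for masses and separation energies with experimental uncertainties.
grid = np.linspace(0.0, 1.0, 12)
def model(theta, t):
    a, b = theta
    return a * np.exp(-b * t)

theta_true = (2.0, 1.5)
sigma = 0.05
data = model(theta_true, grid) + sigma * rng.standard_normal(grid.size)

# Chi-squared objective; no derivatives of the model are requested,
# mirroring the derivative-free setting POUNDERS targets.
def chi2(theta):
    r = (model(theta, grid) - data) / sigma
    return float(r @ r)

fit = minimize(chi2, x0=[1.0, 1.0], method="Powell")
print("fitted parameters:", fit.x)
```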

For a full description of this work, see M. Kortelainen, J. McDonnell, W. Nazarewicz, P.-G. Reinhard, J. Sarich, N. Schunck, M.V. Stoitsov, and S.M. Wild, "Nuclear energy density optimization: Large deformations," Physical Review C 85 (2), 024304, 2012 (http://prc.aps.org/abstract/PRC/v85/i2/e024304).

Researchers Use NERSC to Plot a Roadmap for Engineering Piezoelectricity in Graphene

Some scientists refer to graphene as the "miracle material" of the 21st century. Composed of a single sheet of carbon atoms, this material is tougher than diamond, more conductive than copper, and has potential applications in a variety of technologies. Now, with the help of supercomputers at NERSC, researchers at Stanford University have uncovered yet another hidden talent: with a little chemical doping, graphene can be transformed into a controllable piezoelectric material. This "engineered piezoelectricity" could lead to new devices like nanoscale chemical and acoustic sensors.

"Our results revealed for the first time a designer piezoelectric phenomenon, unique to the nanoscale, which could bring dynamical control to nanoscale electromechanical devices," says Mitchell Ong, a postdoctoral researcher at Stanford University. Ong and Stanford University Professor Evan Reed co-authored a paper about the phenomenon in a recently published issue of ACS Nano.External link
Read more.
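
The paper's DFT workflow is not reproduced here. Simply to illustrate how an effective piezoelectric coefficient can be extracted from simulation output, the sketch below fits the slope of polarization versus applied strain; the numbers are synthetic placeholders, not the Stanford results.

```python
import numpy as np

# Synthetic polarization values (C/m) computed at several in-plane
# strains - stand-ins for DFT output on chemically doped graphene.
strain = np.array([-0.010, -0.005, 0.000, 0.005, 0.010])
polarization = np.array([-3.1e-12, -1.4e-12, 0.1e-12, 1.6e-12, 3.0e-12])

# The effective piezoelectric coefficient e = dP/d(strain) is the
# slope of a linear fit through the strain-polarization data.
e_eff, _ = np.polyfit(strain, polarization, 1)
print(f"effective piezoelectric coefficient: {e_eff:.3e} C/m")
```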

PNNL Researchers Scaling Up Codes to Meet Power Grid Real-Time Requirements

Pacific Northwest National Laboratory (PNNL) researchers Patrick Nichols, Steve Elbert and Henry Huang have been developing methods that use high-performance computing to estimate the state of the generators and buses in the power grid at unprecedented speed. This is important because, to prevent blackouts and other electrical power failures, analysis must be performed over continental scales to ascertain the state of the power system so that corrective measures can be triggered in less than a second. Measurements of the individual components can arrive in the control room every 30 milliseconds (0.03 seconds). These measurements can be used to estimate the state of the electrical grid with a method known as the Kalman filter, allowing corrective measures to be taken. To be useful, however, these calculations need to be completed at nearly the same rate at which the measurements arrive (every 30 milliseconds), and each calculation can require several hundred trillion (10^14) floating-point operations.

The only feasible means of performing these calculations in the allotted time is to use thousands of processors working as a team, which requires special software and a large computing facility. Using Global Arrays (a PNNL-developed programming model) and high-performance computing software such as LAPACK, ATLAS, and MPI, the team has effectively linked several thousand cores on Olympus, the new supercomputer at PNNL. They can perform these calculations in the allotted time for a system roughly the size of California's grid. For a system such as the WECC (the Western Electricity Coordinating Council, which spans the western half of the United States as well as parts of Canada and Mexico), however, the calculations still take approximately 10 seconds. The team is now working to speed up the code so that these calculations can be performed within the required time interval using several hundred thousand cores.
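
As a minimal, single-machine illustration of the Kalman-filter loop the PNNL team runs at continental scale, the sketch below tracks a two-component hidden state from noisy measurements; all matrices are toy values, not power-grid models.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy linear system x_{k+1} = F x_k + w, measured as z_k = H x_k + v.
F = np.array([[1.0, 0.03], [0.0, 1.0]])   # 30 ms time step
H = np.array([[1.0, 0.0]])                # we observe the first component
Q = 1e-4 * np.eye(2)                      # process noise covariance
R = np.array([[1e-2]])                    # measurement noise covariance

x_true = np.array([0.0, 1.0])
x_est = np.zeros(2)
P = np.eye(2)

for _ in range(100):
    # Simulate the system and a noisy measurement.
    x_true = F @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    z = H @ x_true + rng.multivariate_normal(np.zeros(1), R)

    # Predict, then correct with the Kalman gain.
    x_est = F @ x_est
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x_est = x_est + K @ (z - H @ x_est)
    P = (np.eye(2) - K @ H) @ P

print("final estimate:", x_est, "truth:", x_true)
```

The production challenge described above is not the filter equations themselves but completing them, for a continental-scale state vector, within the 30 ms measurement interval.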

Contact: Zhenyu Huang (zhenyu.huang@pnnl.gov) and Greg Welch (welch@cs.unc.edu)

Climate Scientists Compute in Concert

Researchers at Oak Ridge National Laboratory (ORNL) are sharing computational resources and expertise to improve the detail and performance of a scientific application code that is the product of one of the world's largest collaborations of climate researchers. The Community Earth System Model (CESM) is a mega-model that couples components of atmosphere, land, ocean, and ice to reflect their complex interactions. By continuing to improve science representations and numerical methods in simulations, and exploiting modern computer architectures, researchers expect to further improve the CESM's accuracy in predicting climate changes.

"Climate is a complex system. [We're] not solving one problem, but a collection of problems coupled together," said ORNL computational Earth scientist Kate Evans. Of all the components contributing to climate, ice sheets such as those covering Greenland and Antarctica are particularly difficult to model-so much so that the Intergovernmental Panel on Climate Change (IPCC) could not make any strong claim about the future of large ice sheets in its 2007 Assessment Report, the most recent to date.

Evans and her team began the Scalable, Efficient, and Accurate Community Ice Sheet Model (SEACISM) project in 2010 in an effort to fully incorporate a three-dimensional, thermomechanical ice sheet model called Glimmer-CISM into the greater CESM. The research is funded by the Department of Energy's (DOE's) Office of Advanced Scientific Computing Research (ASCR). Once fully integrated, the model will be able to send information back and forth among other CESM codes, making it the first fully coupled ice sheet model in the CESM.

Evans said the team is on track to have the code running massively parallel by October of 2012. Currently, simulations of a small test problem have employed 1,600 of Jaguar's 224,000 processors. Evans said the team expects that number to expand substantially in the near future when they begin simulating larger problems with greater realism.

Facilities/Infrastructure

ORNL Completes First Phase of Titan Supercomputer Transition

The Oak Ridge Leadership Computing Facility's (OLCF's) Jaguar supercomputer has completed acceptance testing of the first phase of an upgrade that will keep it among the most powerful scientific computing systems in the world. The testing suite included leading scientific applications focused on molecular dynamics, high-temperature superconductivity, nuclear fusion, and combustion. When the upgrade is completed this autumn, the system will be renamed Titan and will be capable of 10 to 20 petaflops. Users have had access to Jaguar throughout the upgrade process. "We have already seen the positive impact on applications, for example in computational fluid dynamics, from the doubled memory," said Jack Wells, director of science at the OLCF.

The DOE Office of Science-funded project, which finished ahead of schedule, upgraded Jaguar's AMD Opteron processors to the newest 6200 series and increased the core count by a third, from 224,256 to 299,008. Two six-core Opteron processors were removed from each of Jaguar's 18,688 nodes and replaced with a single 16-core processor. At the same time, the system's interconnect was updated and its memory was doubled to 600 terabytes. In addition, 960 of Jaguar's 18,688 compute nodes now contain an NVIDIA graphics processing unit (GPU). The GPUs were added in anticipation of a much larger GPU installation later in the year; they act as accelerators, giving researchers a serious boost in computing power in a far more energy-efficient system.

Berkeley Lab Breaks Ground for New Computational Research Facility

Department of Energy Secretary Steven Chu, along with Lawrence Berkeley National Laboratory (Berkeley Lab) and University of California leaders, broke ground on the Lab's Computational Research and Theory (CRT) facility, Wednesday, Feb. 1. The CRT will be at the forefront of high-performance supercomputing research and will be DOE's most efficient facility of its kind.

Joining Secretary Chu as speakers were Berkeley Lab Director Paul Alivisatos, University of California President Mark Yudof, the Energy Department's Office of Science Director Bill Brinkman, and UC Berkeley Chancellor Robert Birgeneau. The festivities were emceed by Associate Lab Director for Computing Sciences Kathy Yelick. The CRT will comprise an approximately 140,000-gross-square-foot, $145 million computer facility and office structure plus associated infrastructure. The facility will accommodate up to approximately 300 staff and bring together the National Energy Research Scientific Computing Center (NERSC), the Energy Sciences Network (ESnet), and the Computational Research Division.
Learn more.

People

Berkeley Mathematicians Sethian, Saye Receive 2011 Cozzarelli Prize

James Sethian and Robert Saye, mathematicians who both hold joint appointments at Lawrence Berkeley National Laboratory (Berkeley Lab) and the University of California (UC) Berkeley, have won the 2011 Cozzarelli Prize for the best scientific paper in the category of Engineering and Applied Sciences. Their winning paper, "The Voronoi Implicit Interface Method for computing multiphase physics," introduces a robust, accurate and efficient numerical method for tracking large numbers of interacting and evolving regions (phases) whose motions are determined by complex interactions of geometry, physics, constraints and internal boundary conditions.
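
The paper develops far more machinery, but its central construction - partitioning space into Voronoi regions of the nearest phase and reading the interface off the region boundaries - can be sketched with ordinary distance transforms. The configuration below is a toy, not an example from the paper.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Toy multiphase configuration: three circular seed regions on a grid.
n = 200
labels = np.zeros((n, n), dtype=int)
yy, xx = np.mgrid[0:n, 0:n]
labels[(xx - 60)**2 + (yy - 60)**2 < 400] = 1
labels[(xx - 140)**2 + (yy - 70)**2 < 400] = 2
labels[(xx - 100)**2 + (yy - 150)**2 < 400] = 3

# Unsigned distance to each phase; every grid point is claimed by the
# nearest phase, yielding a Voronoi partition of the phases.
dists = np.stack([distance_transform_edt(labels != p) for p in (1, 2, 3)])
voronoi = dists.argmin(axis=0)

# Interface points sit where the two closest phases are (nearly)
# equidistant - the set the method tracks as the regions evolve.
two_smallest = np.sort(dists, axis=0)[:2]
interface = np.abs(two_smallest[0] - two_smallest[1]) < 1.0
print("cells per phase:", np.bincount(voronoi.ravel()))
print("interface cells:", int(interface.sum()))
```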

The Cozzarelli Prize is sponsored by the Proceedings of the National Academy of Sciences (PNAS) to acknowledge papers published during the year in PNAS that demonstrate the highest scientific excellence and originality. The annual award was established in 2005 and named the Cozzarelli Prize in 2007 to honor late PNAS Editor-in-Chief Nicholas R. Cozzarelli. Six prizes are awarded in the six broadly defined classes under which the National Academy of Sciences is organized.
Read more.

LBNL's John Shalf's Paper Named among the Best in History of HPDC Conference

"The Cactus Code: A Problem Solving Environment for the Grid," a paper co-authored by John Shalf of LBNL's Computational Research Division, has been selected as one of the top papers in the 20 years of publications from HPDC, the International ACM Symposium on High-Performance Parallel and Distributed Computing. Other authors of the paper, written in 2000, are Gabrielle Allen, Werner Benger, Tom Goodale, Hans-Christian Hege, Gerd Lanfermann, Andr Merzky, Thomas Radke and Edward Seidel. Read more.External link

Argonne's Sven Leyffer Coedits New Book on Mixed Integer Nonlinear Programming

Sven Leyffer, a computational mathematician in Argonne's Mathematics and Computer Science Division, has collaborated with Jon Lee (University of Michigan) in coediting a new book titled "Mixed Integer Nonlinear Programming."

Many engineering and scientific applications involve decision variables and nonlinear relationships among those variables that can profoundly affect the set of feasible and optimal solutions. Mixed-integer nonlinear programming (MINLP) is a flexible modeling paradigm for optimization, and it has thus attracted a growing number of researchers and practitioners - including chemical engineers, operations researchers, industrial engineers, mechanical engineers, economists, statisticians, computer scientists, operations managers, and mathematical programmers - interested in solving large-scale MINLP problems.
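
For readers unfamiliar with the paradigm, a generic MINLP can be written as follows (standard textbook notation, not taken from the book itself):

```latex
\begin{aligned}
\min_{x,\,y}\ \ & f(x, y) \\
\text{s.t.}\ \  & g_i(x, y) \le 0, \quad i = 1, \dots, m, \\
                & x \in \mathbb{R}^n, \quad y \in \mathbb{Z}^p .
\end{aligned}
```

Here f and the g_i may be nonlinear; that nonlinearity, combined with the integrality of y, is what places these problems beyond the reach of both mixed-integer linear and purely continuous nonlinear solvers.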

The new book comprises 22 chapters exploring topics ranging from algorithms and software for convex MINLPs to disjunctive programming, nonlinear programming, expression graphs, and combinatorial programming. Also discussed are the numerical difficulties of handling nonlinear functions, as well as applications including a benchmark library of mixed integer optimal control problems.

The book is published as vol. 154 in the series The IMA Volumes in Mathematics and Its Applications, 2012 (http://www.springer.com/mathematics/analysis/book/978-1-4614-1926-6).

Energy Secretary Chu Visits ORNL

U.S. Energy Secretary Steven Chu recently made a quick visit to ORNL, where he got a briefing on advanced computer simulations of nuclear energy and even took a turn experiencing the 3D environment of a virtual reactor's nuclear core. Chu's stop at ORNL was part of a daylong trip to underscore the Obama administration's support of nuclear energy.

While at the lab, Chu met with scientific leaders of the Consortium for Advanced Simulation of Light Water Reactors (CASL). CASL is one of the Department of Energy-sponsored Energy Innovation Hubs that are designed to produce revolutionary results in a relatively short time. The team uses some of the world's most powerful computers - including the Cray Jaguar system at ORNL - to drive advanced simulations and help address issues in the nuclear industry, such as how to get more power output from reactors, extend their life, and reduce the amount of waste.

OLCF Project Director Buddy Bland also briefed Secretary Chu on the progress of Jaguar's upgrade to Titan, the OLCF's next flagship system, slated for 2013.

Sandia Researcher Bochev Visits Oberwolfach to Discuss Optimization Work

Sandia researcher Pavel Bochev participated in a workshop on advanced computational engineering at the Mathematical Research Institute (MRI) in Oberwolfach, Germany, in February 2012. The MRI, founded in 1944, is one of the top international research centers in mathematics. MRI workshops are small, invitation-only events that focus on a cutting-edge research topic and emphasize interaction and collaboration. Bochev was one of the participants selected to talk about his ASCR-funded research in optimization-based modeling.

Contact: Pavel Bochev (pbboche@sandia.gov)

Berkeley's Kathy Yelick Invited Speaker at NITRD 20th Anniversary Symposium

Kathy Yelick, Berkeley Lab's Associate Laboratory Director for Computing Sciences, was one of 16 speakers invited to share their expertise at a Feb. 16 symposium marking the 20th anniversary of the Federal Networking and Information Technology Research and Development (NITRD) Program. Yelick gave a presentation on "More and Moore: Growing Computing Performance for Scientific Discovery." Other speakers on the program included former Vice President Al Gore, who spearheaded the High-Performance Computing Act of 1991; David Keyes of Columbia University and the King Abdullah University of Science and Technology; and Vinton Cerf, one of the "fathers of the Internet" and a member of the ESnet Policy Board. Read more.

Outreach and Education

Berkeley Lab Staff Mentor High School Girls in Science Education App Development

Late Tuesday afternoons, as many Berkeley Lab employees are heading down the hill after work, a group of more than 60 high school girls from Berkeley and Albany heads up to the lab for a series of 10 two-hour workshops to develop science education apps for Android smart phones. Split into five-member teams, the girls are being mentored by 20 women who work at the Lab. The girls are tasked with coming up with their own ideas for an app related to science education, then vetting the idea by running it by potential users. Once they develop the app, they will also need to come up with a business plan and pitch their idea to a panel of judges on April 28. Judges will select one app to compete in a similar judging of winning apps from sessions held around the Bay Area and in other states. The winning app will be professionally developed and distributed on the Android Market.

The program is developed by Technovation Challenge, a program of Iridescent, a non-profit organization dedicated to science and technology education. The Technovation Challenge aims to promote women in technology by giving girls the skills and confidence they need to be successful in computer science and entrepreneurship.
Read more.

Washington, D.C. Symposium to Highlight Science Enabled by Hybrid Supercomputing

ORNL, which operates the premier leadership computing facility for the U.S. Department of Energy Office of Science, is gathering top experts in science, engineering, and computing from around the world to discuss research advances that are now possible with extreme-scale hybrid supercomputers. The Accelerating Computational Science Symposium 2012 (ACSS 2012) will be held March 28-30 in Washington, D.C. It will explore how hybrid supercomputers speed discoveries, such as deeper understanding of phenomena from earthquakes to supernovas, and innovations, such as next-generation catalysts, materials, engines, and reactors.

The hybrid architecture is the foundation of ORNL's "Titan" supercomputer, which will reach up to 20 petaflops of performance by the end of this year. Titan will be a groundbreaking new tool for scientists to leverage the massive power of hybrid supercomputing for new waves of research and discovery. Presenters at ACSS 2012 will include experts from leading universities, national laboratories, and supercomputing centers worldwide, who will share recent advances enabled by hybrid supercomputers in chemistry, combustion, biology, nuclear fusion and fission, seismology, and other fields.

ACSS 2012 is co-hosted by the National Center for Supercomputing Applications (NCSA), the Swiss National Supercomputing Centre (CSCS), Cray Inc. and NVIDIA.

Workshop Prepares HPC Users for Titan

ORNL is upgrading its Jaguar supercomputer to become Titan, a Cray XK6 that will be capable of 10 to 20 petaflops by early 2013. To prepare users for impending changes to the computer's architecture, OLCF staff held a series of workshops January 23-27. Attendees, who could take part in person or virtually, started each day listening to lectures about the various performance tools available for scaling up their codes. The afternoons were spent putting these tools to use.

The first day focused on exposing parallelism, that is, performing multiple tasks simultaneously, in codes. Vendors explained their respective compiler technologies on the second day. The third day was devoted to performance analysis tools, and the last day of instruction focused on debuggers. The workshop concluded with an open session for user questions on the fifth day.

Representatives of software companies PGI, CAPS entreprise, and Allinea, as well as Cray, the company that built Jaguar and Titan, were on hand to discuss various tools being used to help transition codes to the new architecture. A representative from the Technical University of Dresden also presented information about the Vampir suite of performance analysis tools. OLCF user assistance specialist Bobby Whitten said the workshop had a great turnout, with 94 attendees on the first day alone.

Workshop Informs Attendees about Innovative Hardware and Methods using Accelerators

ORNL and the University of Tennessee's Joint Institute for Computational Sciences (JICS) hosted a workshop, "Electronic Structure Calculation Methods on Accelerators," at ORNL February 5-8 to bring together researchers, computational scientists, and industry developers. The 80 participants attended presentations and training sessions on the advances and opportunities that accelerators bring to high-performance computing (HPC).

Several high-performance computers are being upgraded to include accelerators such as graphics processing units (GPUs). For example, ORNL's Jaguar recently received an extensive hardware installation of 960 NVIDIA Tesla 20-series GPUs, increasing its performance from 2.3 to 3.3 petaflops. Additional upgrades, scheduled for completion in fall 2012, will enable a peak performance between 10 and 20 petaflops as Jaguar transitions from a Cray XT5 machine into an XK6 and is renamed Titan. This architectural revolution also calls for user communities to create and optimize the software, algorithms, and theoretical models the new hardware demands.

The February workshop featured 22 speakers lecturing on the innovative hardware and the new hardware-accelerated electronic-structure codes. On the first day, staffers from ORNL and JICS and industrial partners such as NVIDIA, PGI, Cray Inc., CAPS entreprise, and Intel Corporation spoke about programming on accelerators and compilers. On the second day, computational scientists and academic researchers spoke about electronic-structure applications including ACES III, NWChem, GAMESS, Quantum ESPRESSO, QMCPACK, FLAPW, and TeraChem. On the final day, software authors from the Commissariat à l'Énergie Atomique led tutorial and hands-on training sessions on the GPU-based BigDFT program, a software code for electronic-structure calculations. A final summary session was recorded and will be made available on the Oak Ridge Leadership Computing Facility website.
