Ames National Laboratory

Coordinator: Peng Xu


Ames National Laboratory established the Scalable Computing Laboratory (SCL) as part of its Applied Mathematical Sciences program in 1989. Research in the SCL has concentrated on the efficient use of scalable, highly parallel computers; on applications drawn from the physics, chemistry, and materials science work done at Ames; and on novel approaches to performance analysis. The central goal is to find ways that computers can complement experimental and theoretical approaches to exploring the physical sciences.

Performance Analysis

Conventional methods of measuring computer speed are of little use when working with highly parallel computers or across a broad performance spectrum. The SCL at Ames has developed tools that measure performance in a manner that better reflects real-world problem solving, using answer quality rather than time reduction as the figure of merit. The HINT program, now supported at Brigham Young University, has successfully benchmarked computers spanning the entire range of existing speeds and architectures. HINT uses a hierarchical integration paradigm that provides a mathematically sound comparison of computational performance even when the algorithm, computer, and precision are changed. HINT establishes a performance metric firmly grounded in physical and information-theoretic fundamentals. This work has won two prestigious R&D 100 awards, shared by students, post-docs, and principal investigators. Research continues toward the goal of predicting a priori the performance of any computer, real or still on paper.
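
As a rough illustration of the answer-quality idea, the sketch below brackets a definite integral between upper and lower bounds and reports quality (the reciprocal of the bound width) as it improves over time. It is a simplified stand-in, not the HINT source: HINT subdivides hierarchically where the error is largest, while this toy refines uniformly, and the integrand and output format are our own assumptions.

```python
# Toy sketch of HINT-style "answer quality over time" benchmarking.
# Quality is defined as 1/(upper - lower), the reciprocal of the
# interval known to contain the true integral; QUIPS is then quality
# improvements per second. Illustrative only, not the HINT source.
import time

def f(x):
    return (1.0 - x) / (1.0 + x)  # monotone decreasing on [0, 1]

def hint_like(seconds):
    start = time.perf_counter()
    n = 1
    while True:
        h = 1.0 / n
        # f is decreasing, so right endpoints give a lower Riemann
        # bound and left endpoints an upper bound on the integral.
        lower = sum(f((i + 1) * h) for i in range(n)) * h
        upper = sum(f(i * h) for i in range(n)) * h
        elapsed = time.perf_counter() - start
        quality = 1.0 / (upper - lower)
        print(f"n={n:7d}  bounds=[{lower:.6f}, {upper:.6f}]  "
              f"quality={quality:12.1f}  QUIPS={quality / elapsed:12.1f}")
        if elapsed > seconds:
            break
        n *= 2  # uniform refinement; real HINT refines adaptively

hint_like(1.0)
```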

Performance Tools for Extended Use

Improving the performance of large application codes is frequently more art than science. The SCL is working to change this by developing standard tools for measuring program behavior, integrating self-measurement into application programs, and comparing prior behavior with current information. One of the most common problems with current production application codes is a lack of instrumentation. By building instrumentation in, SCL scientists will know not only how a code is performing but whether it was being used properly in the first place. This approach to automated global instrumentation frequently points to details omitted in the original specification analysis.
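
A minimal sketch of the self-measurement idea follows: each instrumented routine appends its own timing and argument-size record to a persistent log, so later runs can be checked against prior behavior. The decorator, log path, and record format are illustrative assumptions, not the SCL's actual tools.

```python
# Sketch: an application that measures itself. Every call to an
# instrumented routine appends a record to a log for later analysis.
import functools
import json
import time

LOG_PATH = "instrumentation.log"  # hypothetical log location

def instrumented(func):
    """Wrap a routine so every call appends a timing record."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        t0 = time.perf_counter()
        result = func(*args, **kwargs)
        record = {
            "routine": func.__name__,
            "seconds": time.perf_counter() - t0,
            "arg_sizes": [len(a) if hasattr(a, "__len__") else None
                          for a in args],
        }
        with open(LOG_PATH, "a") as log:
            log.write(json.dumps(record) + "\n")
        return result
    return wrapper

@instrumented
def solve(matrix_rows):
    ...  # the application's real work goes here
```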

Immortal Code

Going beyond the Tools for Extended Use project is the Immortal Code project, whose goal is nothing less than a complete restructuring of the software development cycle. The idea is to negotiate a series of contracts among the end user, the application programmer, the system software developer, and the hardware designer. The end user worries about the cost of computing, how much time it can take, what the input is, and how accurate and reliable the answer should be, but not how to solve the problem. The application programmer takes this information and decides how the problem can be solved, but not on which system. The system software developer worries about what an application programmer might ask for and how to provide it on the available hardware. Finally, the hardware designer worries about minimizing the cost and maximizing the performance of the hardware, without regard to anything else. Each stage requires negotiation and reasonableness from both parties. The result is a cleaner development cycle and the prospect that application codes can last for decades instead of years.
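
To make the end user's side of such a contract concrete, the sketch below records what the user specifies (cost, time, input, and answer quality) while staying silent on how the problem is solved. The field names and example values are our own assumptions about the scheme, not a published interface.

```python
# Sketch of an end-user "contract" in the Immortal Code style: the
# user states requirements, never methods. All fields are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class UserContract:
    problem: str                 # what is to be computed, not how
    input_description: str
    max_cost_dollars: float
    max_wall_hours: float
    required_accuracy: float     # e.g. relative error tolerance
    required_reliability: float  # e.g. acceptable success probability

contract = UserContract(
    problem="radar cross-section of an airframe model",
    input_description="surface mesh, 1e6 patches",
    max_cost_dollars=500.0,
    max_wall_hours=12.0,
    required_accuracy=1e-3,
    required_reliability=0.99,
)
```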

Hierarchical Methods

A problem that comes up in almost every physical simulation is that the work required grows much more rapidly than the accuracy or level of detail being computed. A new family of algorithms implemented in the last decade has drastically reduced the complexity of n-body simulations such as radiosity. Shown by SCL researchers to be of order n(log n)⁴ (rather than order n, as suggested by Greengard), they are still a significant improvement over previous order n³ methods. One goal is to apply these methods to the radar cross-section problem. The SLALOM project is an example of hierarchical methods applied to graphical rendering; SLALOM has since been succeeded by Photon.
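
The principle behind such hierarchical methods can be shown in a few lines: distant groups of bodies are summarized by a single aggregate, so each evaluation touches only a logarithmic number of cells instead of all n bodies. The 1-D, Barnes-Hut-style sketch below is illustrative only, using a gravitational-style potential and an opening-angle parameter of our choosing; it is not the SCL's radiosity code.

```python
# Sketch of the hierarchical n-body idea (Barnes-Hut style, 1-D):
# far-away cells are replaced by their total mass at the center of
# mass, cutting the O(n^2) pairwise sum toward O(n log n).
class Node:
    def __init__(self, bodies, lo, hi):
        self.mass = sum(m for _, m in bodies)
        self.com = sum(x * m for x, m in bodies) / self.mass
        self.size = hi - lo
        self.left = self.right = None
        if len(bodies) > 1:
            mid = 0.5 * (lo + hi)
            l = [b for b in bodies if b[0] < mid]
            r = [b for b in bodies if b[0] >= mid]
            if l and r:  # split the cell only if both halves are occupied
                self.left, self.right = Node(l, lo, mid), Node(r, mid, hi)

def potential(node, x, theta=0.5):
    """Potential at x; recurse only when a cell is too close or large."""
    if node is None:
        return 0.0
    d = abs(x - node.com)
    if node.left is None or node.size < theta * d:
        return node.mass / d if d > 1e-12 else 0.0  # skip self-term
    return potential(node.left, x, theta) + potential(node.right, x, theta)

bodies = [(i / 100.0, 1.0) for i in range(100)]
root = Node(bodies, 0.0, 1.0)
print(potential(root, 0.5))
```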

Photon

Photon takes a unique approach to computer graphics. Unlike many rendering tools, which trace light rays backwards from the viewpoint, Photon uses a Monte Carlo approach to model the transport of light from the light sources throughout the scene. Consequently, Photon can compute a complete, high-speed solution to Kajiya's rendering equation. Hierarchical rendering generates a recognizable scene quickly, then steadily improves its quality toward realism. The result is a physically correct (polarization and fluorescence effects are included), view-independent scene suitable for a walkthrough. The SCL is developing a number of such scenes.
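
For reference, the rendering equation Photon solves can be stated as follows, in a common modern form (the notation is ours, not Photon's documentation): the outgoing radiance at a surface point equals the emitted radiance plus all incoming radiance reflected by the surface, integrated over the hemisphere.

```latex
% Kajiya's rendering equation. L_o is outgoing radiance at point x in
% direction omega_o, L_e is emitted radiance, L_i is incoming radiance,
% f_r is the BRDF, n the surface normal, and Omega the hemisphere at x.
L_o(x, \omega_o) \;=\; L_e(x, \omega_o)
  \;+\; \int_{\Omega} f_r(x, \omega_i, \omega_o)\,
        L_i(x, \omega_i)\,(\omega_i \cdot n)\,\mathrm{d}\omega_i
```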

Genetic Algorithm for Structural Optimization

Atomistic models of materials provide accurate total energies. Practical applications, however, often require simulations over extremely long time scales. Structural optimization of an atomic cluster, for example, requires a simulated annealing run whose length scales exponentially with the number of atoms in the cluster. To sidestep this bottleneck, the Condensed Matter Physics Group, with the help of the SCL, has developed a genetic algorithm for cluster structure optimization.
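
A toy sketch of the genetic-algorithm idea appears below: candidate cluster geometries are crossed, mutated, and selected by total energy, replacing the exponentially long annealing run with an evolutionary search. The Lennard-Jones potential, splice-style crossover, and all parameters are illustrative assumptions; published cluster GAs typically cut parent clusters with a plane and locally relax each child, steps this sketch omits.

```python
# Toy genetic algorithm for cluster structure optimization.
# Fitness is a (clamped) Lennard-Jones total energy; all details
# are illustrative, not the Condensed Matter Physics Group's code.
import random

N_ATOMS, POP, GENS = 6, 20, 200

def energy(cluster):
    """Total pairwise Lennard-Jones energy of a 3-D cluster."""
    e = 0.0
    for i in range(len(cluster)):
        for j in range(i + 1, len(cluster)):
            r2 = sum((a - b) ** 2 for a, b in zip(cluster[i], cluster[j]))
            r6 = max(r2, 0.5) ** 3        # clamp to avoid overlap blow-up
            e += 4.0 * (1.0 / r6 ** 2 - 1.0 / r6)
    return e

def random_cluster():
    return [tuple(random.uniform(-1.5, 1.5) for _ in range(3))
            for _ in range(N_ATOMS)]

def mate(a, b):
    """Child takes a random run of atoms from each parent."""
    cut = random.randint(1, N_ATOMS - 1)
    return a[:cut] + b[cut:]

def mutate(c, scale=0.1):
    return [tuple(x + random.gauss(0.0, scale) for x in atom) for atom in c]

pop = [random_cluster() for _ in range(POP)]
for gen in range(GENS):
    pop.sort(key=energy)
    survivors = pop[:POP // 2]            # keep the fittest half
    children = [mutate(mate(random.choice(survivors),
                            random.choice(survivors)))
                for _ in range(POP - len(survivors))]
    pop = survivors + children
print("best energy found:", energy(min(pop, key=energy)))
```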