The Berkeley Institute for Performance Studies (BIPS) is the umbrella organization encompassing several research activities at Berkeley Lab and UC Berkeley:
Kathy Yelick, a professor of computer science at UC Berkeley with a joint appointment in the Computational Research Division, has been named the new head of BIPS. She also leads CRD's Future Technologies Group (FTG).
The Performance Evaluation Research Center (PERC), directed by David Bailey, is one of seven SciDAC Integrated Software Infrastructure Centers (ISICs). PERC involves approximately 25 researchers at eight institutions (four national laboratories and four universities). Its research focus includes:
- High-end scientific computer performance.
- Analysis and improvement of DOE/SC applications, especially SciDAC applications.
- Performance tools: enhancement and deployment.
- Performance modeling: projecting performance on future systems.
The Berkeley Benchmarking and Optimization Group (BeBOP) is led by Kathy Yelick and James Demmel of UC Berkeley, with substantial participation by Berkeley graduate and undergraduate students. Their research includes:
- Producing self-tuning code for sparse matrix kernels.
- Automatically adapting code to the cache and register structure, the compiler, and the matrix at hand.
- Integrating applications with application-level libraries.
- Automatic tuning of high-level algorithms.
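The core idea behind self-tuning kernels of this kind is empirical search: generate several candidate implementations, time each on the target machine and input, and keep the fastest. As a minimal sketch of that search step (the variant names and timing loop here are illustrative, not BeBOP's actual code), consider tuning a simple dot product:

```python
import time

def dot_loop(x, y):
    # Straightforward indexed loop variant.
    s = 0.0
    for i in range(len(x)):
        s += x[i] * y[i]
    return s

def dot_zip(x, y):
    # Generator-based variant; relative speed depends on the interpreter.
    return sum(a * b for a, b in zip(x, y))

def autotune(variants, x, y, trials=3):
    """Time each candidate variant and return the name of the fastest.

    Mimics the empirical-search step of an autotuner: the 'best' choice
    is whatever measures fastest on this machine for this input.
    """
    best_name, best_t = None, float("inf")
    for name, fn in variants.items():
        t = float("inf")
        for _ in range(trials):
            start = time.perf_counter()
            fn(x, y)
            t = min(t, time.perf_counter() - start)
        if t < best_t:
            best_name, best_t = name, t
    return best_name

x = [1.0] * 10000
y = [2.0] * 10000
best = autotune({"loop": dot_loop, "zip": dot_zip}, x, y)
```

A real sparse-kernel tuner searches a much larger space (register block sizes, cache blocking, data layouts) and also adapts to the nonzero structure of the matrix, but the select-by-measurement loop is the same.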
BeBOP works closely with the UCB LAPACK/ScaLAPACK project, which focuses on new algorithms for numerical linear algebra and new, more efficient implementations of linear algebra software.
Berkeley Lab's architecture evaluation research project, led by Leonid Oliker and Kathy Yelick, is conducted by staff from the Computational Research and NERSC divisions as well as collaborators from other institutions. The team evaluates emerging architectures, such as processor-in-memory and stream processing, and develops adaptable “probes” to isolate the performance-limiting features of an architecture. They conducted the first in-depth analysis of state-of-the-art parallel vector architectures, running benchmark studies on the Japanese Earth Simulator System (ESS) and comparison runs on Cray's X1 system. In a node-to-node comparison, the ESS ran 23 times faster than the IBM Power3.
NERSC's benchmarking and performance optimization project is carried out by NERSC staff with expertise in performance analysis. They developed the Effective System Performance (ESP) benchmark to measure system-level efficiency and the Sustained System Performance (SSP) benchmark to measure overall system application throughput. SSP resulted in a 30% increase in the Seaborg system's capability and is now used in several non-DOE procurements. This team also accelerated several SciDAC application programs running on Seaborg.
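The general idea behind an SSP-style metric can be sketched simply: measure each benchmark application's sustained per-processor rate, average across the suite, and scale by the machine's processor count to get an application-weighted throughput figure. The function below is an illustrative simplification, not NERSC's exact SSP definition, which is tied to a specific benchmark suite and measurement procedure:

```python
def sustained_system_performance(per_cpu_rates, n_cpus):
    """Illustrative SSP-style aggregate.

    per_cpu_rates: sustained per-processor rates (e.g. Gflop/s) measured
                   for each application in a benchmark suite.
    n_cpus:        number of processors in the system.

    Returns the suite-average per-processor rate scaled to the full
    machine, i.e. a rough measure of system application throughput.
    """
    avg_rate = sum(per_cpu_rates) / len(per_cpu_rates)
    return avg_rate * n_cpus

# Hypothetical suite of three applications on a 100-CPU system:
ssp = sustained_system_performance([1.0, 2.0, 3.0], 100)  # 200.0
```

Because it is anchored to real application performance rather than peak hardware numbers, such a metric rewards improvements (like the Seaborg tuning noted above) that raise delivered throughput.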