To evaluate our parallelization analysis, we measured its success at parallelizing three standard benchmark suites, described in Table 1: the Fortran programs from SPEC92FP, the sample NAS benchmarks, and PERFECT.
SPEC92FP is a set of 14 floating-point programs used to benchmark uniprocessor architectures and compilers; we omit four from this study. Because the parallelization analysis is currently available only for Fortran, we omit alvinn and ear, the two C programs, and spice, a program of mixed Fortran and C code. We also omit fpppp because its original Fortran source contains type errors; this program is considered to contain very little loop-level parallelism. (Table 1 presents the programs in alphabetical order by name.)
NAS is a suite of eight programs used for benchmarking parallel computers. NASA provides sample sequential programs plus application information, with the intention that the programs be rewritten to suit different machines. We use all of the NASA sample programs except embar, for which we substitute a version from APR that separates the first call to a function, which initializes static data, from the subsequent calls.
Lastly, PERFECT is a set of originally sequential codes used to benchmark parallelizing compilers; we present results for 12 of its 13 programs. We omit spice, whose original Fortran source contains pervasive type conflicts and parameter mismatches that violate the Fortran77 standard and that the interprocedural analysis flags as errors. This program is considered to have very little loop-level parallelism.
Table 1: Benchmark programs.
Our system parallelized the programs completely automatically, without relying on any user directives and without modifying the original sources. All of the programs produce valid results when executed in parallel.