
Array Reduction Recognition.

Most compilers will recognize simple reductions such as the accumulation into the variable SUM in Figure 3(a). Reductions on array variables are also common in scientific codes and are a potential source of significant improvements in parallelization results.
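Since Figure 3(a) is not reproduced here, the sketch below illustrates the two patterns this paragraph refers to: a scalar reduction into SUM and a reduction on an array variable. The loop bounds and the array names other than SUM are placeholders, not code from the figure.

C     Scalar reduction: every iteration accumulates into SUM, so the
C     loop can be parallelized once the accumulation is recognized as
C     a commutative, associative update.
      SUM = 0.0
      DO I = 1, N
         SUM = SUM + A(I)
      ENDDO

C     Array reduction: the accumulation into B(J) creates a
C     cross-iteration dependence on the I loop that disappears once
C     the update is recognized as a reduction on the array B.
      DO I = 1, N
         DO J = 1, M
            B(J) = B(J) + A(I,J)
         ENDDO
      ENDDO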

Sparse array reductions. Sparse computations pose what is usually considered an insurmountable problem for parallelizing compilers: when array elements appear in subscript expressions, the compiler cannot determine which locations of the subscripted array are read or written. In some cases, loops containing sparse computations can still be parallelized if the computation is recognized as a reduction. For example, the loop in Figure 3(b) constitutes the main computation in the NAS sample benchmark cgm. We observe that the only accesses to the sparse vector Y are commutative and associative updates to the same location. Thus, it is safe to transform this reduction into a parallelizable form. Figure 3(c) is an excerpt of a 148-line loop that makes up the main computation in the SPEC92 benchmark mdljdp2. The outer loop DO I ... is parallelizable if sparse reduction recognition is performed interprocedurally. This example also demonstrates that the reductions in a loop may consist of multiple updates to the same array.
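Figure 3(b) itself is not reproduced here. The sketch below shows a loop in the spirit of the cgm sparse matrix-vector computation, followed by one standard way to transform such a reduction into a parallelizable form using per-processor private accumulators; the names ROW, COL, NNZ, NPROCS, YPRIV, and MYID are illustrative, not the benchmark's actual identifiers.

C     Sparse reduction: the subscript of Y is itself an array element,
C     so the compiler cannot determine which locations are updated,
C     but every access to Y is a commutative, associative accumulation.
      DO K = 1, NNZ
         Y(ROW(K)) = Y(ROW(K)) + A(K) * X(COL(K))
      ENDDO

C     One parallelizable form: each processor P accumulates into its
C     own private copy YPRIV(:,P), and the private copies are summed
C     into Y afterwards.
      DO P = 1, NPROCS
         DO I = 1, N
            YPRIV(I,P) = 0.0
         ENDDO
      ENDDO
C     The K loop below runs in parallel; MYID is the executing
C     processor's index.
      DO K = 1, NNZ
         YPRIV(ROW(K),MYID) = YPRIV(ROW(K),MYID) + A(K) * X(COL(K))
      ENDDO
C     The I loop below also runs in parallel; each iteration combines
C     the private partial sums for one element of Y.
      DO I = 1, N
         DO P = 1, NPROCS
            Y(I) = Y(I) + YPRIV(I,P)
         ENDDO
      ENDDO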


