Parallelizing compilers perform data dependence analysis
on arrays to check for loop-carried dependences on individual
array elements.
Array data dependence analysis is most effective when all subscript
expressions are *affine* functions of loop indices and loop
invariants. Within this domain, data dependence analysis
has been shown to be equivalent to integer programming.
While integer programming is NP-complete in general,
the data dependence problems found in practice are simple,
and efficient algorithms have been developed that
usually solve these problems exactly [7,20].
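For concreteness, the GCD test is one such simple exact method: for one-dimensional subscripts `a*i + b` (written) and `c*i + d` (read), the Diophantine equation `a*i1 - c*i2 = d - b` has an integer solution only if `gcd(a, c)` divides `d - b`. The C sketch below (ours, not code from the paper) implements this check:

```c
#include <stdio.h>
#include <stdlib.h>

/* Greatest common divisor (Euclid's algorithm). */
static int gcd(int a, int b) {
    a = abs(a); b = abs(b);
    while (b != 0) { int t = a % b; a = b; b = t; }
    return a;
}

/* GCD test for subscripts a*i + b (write) and c*i + d (read):
 * a*i1 - c*i2 = d - b has an integer solution iff gcd(a, c)
 * divides d - b.  Returns 1 if a dependence is possible.  The
 * test is conservative: it ignores loop bounds, which is exactly
 * what the full integer-programming formulation adds back. */
static int gcd_test(int a, int b, int c, int d) {
    int g = gcd(a, c);
    if (g == 0)               /* both subscripts are constants */
        return b == d;
    return (d - b) % g == 0;
}

int main(void) {
    /* for (i = ...) A[2*i] = ... A[2*i + 3] ...
     * 2*i1 = 2*i2 + 3 has no integer solution (gcd 2 does not
     * divide 3), so the accesses are independent. */
    printf("%s\n", gcd_test(2, 0, 2, 3) ? "maybe dependent"
                                        : "independent");
    return 0;
}
```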
For this reason, parallelizing compilers
incorporate a host of scalar symbolic analyses to put
array indices in affine form,
including constant propagation, value numbering, and
induction variable recognition. These analyses
provide integer coefficients for subscript variables
and derive affine equality relationships among variables.
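For example, induction variable recognition replaces a scalar that is updated by a constant step on every iteration with its closed form, exposing an affine subscript. A minimal sketch (our illustration, not an example from the paper):

```c
/* Before substitution, the subscript k is not an explicit function
 * of the loop index i; after recognizing k as an induction variable
 * with step 2, the compiler can rewrite the access affinely. */
void fill(double *a, int k0, int n) {
    int k = k0;
    for (int i = 0; i < n; i++) {
        k = k + 2;      /* induction variable: k == k0 + 2*(i + 1) */
        a[k] = 0.0;     /* rewritten form: a[k0 + 2*i + 2] = 0.0   */
    }
}
```

With the subscript in the affine form `k0 + 2*i + 2`, the standard dependence tests above apply directly.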
Some systems also propagate
inequality relations and other
relational constraints on integer variables imposed by surrounding
code constructs (`IF`s and loops) to their uses in array
subscripts [12,15].
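To see why such constraints matter, consider this illustrative loop (ours, not an example from [12,15]): the surrounding `IF` establishes `m >= n`, and propagating that inequality to the subscripts proves the reads and writes cannot overlap:

```c
/* With m >= n, every read index i + m is at least n, while every
 * write index i is below n, so the access ranges are disjoint and
 * the loop carries no dependence on a. */
void shift(double *a, int n, int m) {
    if (m >= n) {
        for (int i = 0; i < n; i++)
            a[i] = a[i + m];    /* iterations are independent */
    }
}
```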

*Relations on Non-Linear Variables.*
Propagating only affine relations among scalars is not sufficient
to parallelize some loops. For example, scientific codes
often linearize accesses to (conceptually) multidimensional arrays,
resulting in subscript expressions that cannot be expressed
as an affine function of the enclosing loop indices.
The loop nest in Figure 1(a), from the
Perfect benchmark *trfd*, illustrates such a situation.
Figure 1(b) shows information that can be
determined by a symbolic analysis of the loop.
The access to array `XRSIJ` does not induce a
dependence between iterations of the outer loop, since the index
`MRSIJ` never has the same value on two different iterations.

**Figure 1:** Non-linear analysis example.
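Figure 1 itself is not reproduced here; the loop nest below is a hedged C paraphrase in the spirit of the *trfd* kernel (identifiers kept in lower case, exact bounds and loop body assumed), showing why `MRSIJ` never repeats a value:

```c
/* A paraphrase, not the original Fortran: mrsij is incremented
 * exactly once per inner iteration, so at iteration (mi, mj) it
 * equals mrsij0 + mi*(mi-1)/2 + mj -- quadratic, hence non-affine,
 * in mi.  Because the counter is strictly increasing over the whole
 * nest, no element of xrsij is ever written twice, and the outer
 * loop carries no dependence on xrsij. */
void trfd_like(double *xrsij, const double *xij, int norb, int mrsij0) {
    int mrsij = mrsij0;
    for (int mi = 1; mi <= norb; mi++) {
        for (int mj = 1; mj <= mi; mj++) {   /* triangular bounds */
            mrsij = mrsij + 1;               /* monotone counter  */
            xrsij[mrsij] = xij[mj];
        }
    }
}
```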
