Limits of Control Flow on Parallelism
Monica S. Lam and Robert P. Wilson
Computer Systems Laboratory
Stanford University, CA 94305
This research was supported in part by DARPA contract N00039-91-C-0138
and by an NSF graduate fellowship.
In Proceedings of the 19th Annual International Symposium on Computer
Architecture, Gold Coast, Australia, May 19-21, 1992, pp. 46-57.
Copyright (C) 1992 by ACM, Inc.
Abstract:
This paper discusses three techniques useful in relaxing the constraints
imposed by control flow on parallelism: control dependence analysis,
executing multiple
flows of control simultaneously, and speculative execution. We evaluate these
techniques using trace simulations to find the limits of parallelism for
machines that employ different combinations of them.
We have three major results.
First, local regions of code have limited parallelism, and
control dependence analysis is useful in
extracting global parallelism from different parts of a program.
Second, a superscalar processor is fundamentally
limited because it cannot execute independent regions of code
concurrently. Higher performance can be obtained with machines,
such as multiprocessors and dataflow machines, that
can simultaneously follow multiple flows of control.
Finally, without speculative execution to allow instructions to execute
before their control dependences are resolved, only modest amounts of
parallelism can be obtained for programs with complex control flow.