Version 1.0

This manual is part of the SUIF compiler documentation set.

Copyright (C) 1994 Stanford University. All rights reserved.

Permission is given to use, copy, and modify this documentation for any non-commercial purpose as long as this copyright notice is not removed. All other uses, including redistribution in whole or in part, are forbidden without prior written permission.

Introduction

The SUIF parallelizing compiler translates sequential programs into parallel code for shared address space machines. The compiler generates a single-program, multiple-data (SPMD) program that contains calls to a portable run-time library. We currently have versions of the run-time library for SGI machines and the Stanford DASH multiprocessor. We also have a uniprocessor version of the library that is used for debugging and testing.
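To illustrate the SPMD model, the following sketch shows roughly how a simple sequential loop might look after parallelization: the loop body is outlined into a separate procedure, and every processor executes the same code on its own portion of the iteration space. This is only a schematic example; the names spmd_region, spmd_proc_id, and spmd_num_procs are hypothetical placeholders and do not correspond to the actual SUIF run-time library interface.

     /* Minimal sketch of SPMD code, not the actual SUIF output or
      * run-time interface.  The original loop body is outlined into
      * a procedure, and each processor claims a contiguous chunk of
      * the iteration space. */

     #define N 1000

     double a[N], b[N];

     /* Outlined loop body: run the iterations [lo, hi). */
     void loop_body(int lo, int hi)
     {
         int i;
         for (i = lo; i < hi; i++)
             a[i] = a[i] + b[i];
     }

     /* Every processor executes this same code (single program),
      * but on a different part of the data (multiple data). */
     void spmd_region(int spmd_proc_id, int spmd_num_procs)
     {
         int chunk = (N + spmd_num_procs - 1) / spmd_num_procs;
         int lo = spmd_proc_id * chunk;
         int hi = (lo + chunk < N) ? lo + chunk : N;
         loop_body(lo, hi);
     }

In the generated code, the run-time library is responsible for starting the worker processors and invoking such outlined procedures on each of them.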

We have tested the SUIF parallelizer on several benchmark suites and compared its effectiveness with that of KAP, a commercial parallelizing compiler from Kuck and Associates, Inc. See section Performance Results. Our results indicate that the SUIF parallelizer provides a solid platform for experimentation with advanced compiler optimizations.

The SUIF parallelizer is currently being used to support a number of research projects. One such project is developing an optimizing compiler for scalable shared address space machines. The scalable machine compiler includes analysis for finding data and computation decompositions that minimize communication while preserving parallelism. It also performs optimizations that reduce synchronization costs and restructure arrays to enhance the performance of the memory hierarchy.

We are also working on incorporating interprocedural analysis techniques into the SUIF parallelizer. This includes interprocedural scalar data flow analysis and data dependence analysis, as well as array privatization and reduction recognition.
