

The SUIF compiler infrastructure is part of the National Compiler Infrastructure project funded by DARPA and NSF. It is a collaboration among researchers at five universities (Harvard University, Massachusetts Institute of Technology, Rice University, Stanford University, and UC Santa Barbara) and one industrial partner, Portland Group Inc. The infrastructure is based on the SUIF (Stanford University Intermediate Format) parallelizing compiler developed at Stanford University.

The primary objective of the SUIF compiler design is to develop an extensible system that supports a wide range of current research topics, including parallelization, object-oriented programming languages, scalar optimizations, and machine-specific optimizations. We strive for an architecture that is modular, easy to extend and maintain, and supportive of software reuse. The compiler will be tested with standard benchmark programs. Since no one group can build the ultimate system, our hope is to make this one extensible and attractive enough that the community can band together to create an exciting platform for research. Comments and contributions are welcome!

The sources of the SUIF system will be freely available for research and commercial use. The only exception is that the sources of the Digital Fortran front end need to be licensed from Digital.


Overview of the SUIF Compiler Infrastructure


Intermediate Format SUIF 2.0

A revised SUIF program representation (SUIF 2.0) will be released with this new system. The most notable feature of SUIF 2.0 is that the program representation will be user-extensible. This feature is motivated by the need to extend the compiler to better support other programming languages and machine-level optimizations. Converters between the SUIF 1.x and 2.0 representations will be provided for backward compatibility. More information will be forthcoming in the next few months.
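
As a rough sketch of what user extensibility means in practice, the hypothetical C++ fragment below shows IR objects that carry named annotations, so a new pass can attach its own data without modifying the core classes. The class and member names are invented for this illustration; they are not the SUIF 2.0 API.

    // Hypothetical sketch (not the SUIF 2.0 API): IR objects carry named
    // annotations so new passes can attach data without changing core classes.
    #include <map>
    #include <string>

    class Annote {                        // base class for user-defined data
    public:
        virtual ~Annote() {}
    };

    class IrObject {                      // stand-in for a SUIF IR node
        std::map<std::string, Annote*> annotes_;
    public:
        void set_annote(const std::string& key, Annote* a) { annotes_[key] = a; }
        Annote* get_annote(const std::string& key) const {
            std::map<std::string, Annote*>::const_iterator it = annotes_.find(key);
            return it == annotes_.end() ? 0 : it->second;
        }
    };

    // A parallelizer, for example, might record its result on each loop node:
    class ParallelAnnote : public Annote {
    public:
        bool is_parallel;
        explicit ParallelAnnote(bool p) : is_parallel(p) {}
    };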


List of Optimizations

Interprocedural Parallelization (a small example follows this list)
  Scalar and array data dependence analysis
  Scalar and array privatization
  Reduction recognition on scalar and array variables
  Pointer analysis
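
As an informal, hand-written illustration of what the privatization and reduction analyses look for (this is not SUIF output), consider the loop below: the scalar t is written before it is read in every iteration, so it can be privatized, and sum is only updated by a commutative addition, so it is recognized as a reduction. With no other loop-carried dependences, the iterations can run in parallel.

    // Hand-written example, for illustration only.
    void scale_and_sum(const double* a, double* b, int n, double& sum) {
        double t;
        for (int i = 0; i < n; ++i) {
            t = 2.0 * a[i];   // t is written before read: privatizable
            b[i] = t * t;     // a[i], b[i] accesses do not cross iterations
            sum += t;         // commutative update: reduction on sum
        }
    }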

Program transformations for parallelism and locality (a blocking sketch follows this list)
  Affine loop transformations
    Unimodular transformations
    Loop distribution
    Loop fusion
    Loop reindexing
    Loop scaling
    Statement reordering
  Blocking
  Data transformations
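
To give a flavor of one of these transformations, here is a hand-written sketch of blocking (tiling) applied to a small matrix-vector loop nest so that a chunk of x is reused from cache across the entire i loop. The tile size B and the function are invented for this example.

    // Original nest:
    //   for (i = 0; i < n; ++i)
    //     for (j = 0; j < n; ++j)
    //       c[i] += a[i][j] * x[j];
    //
    // Blocked version: the j loop is split into tiles of B elements so that
    // each tile of x stays in cache while every row i is processed.
    const int B = 64;  // illustrative tile size

    void blocked_matvec(double** a, const double* x, double* c, int n) {
        for (int jj = 0; jj < n; jj += B)
            for (int i = 0; i < n; ++i)
                for (int j = jj; j < jj + B && j < n; ++j)
                    c[i] += a[i][j] * x[j];
    }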

Object-oriented programming languages (a devirtualization sketch follows this list)
  Capturing object definitions
  Virtual function call elimination
  Garbage collection
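
As a small, hand-written illustration of virtual function call elimination (not SUIF output): when analysis proves the concrete class of the receiver, the indirect dispatch can be replaced by a direct, inlinable call.

    class Shape {
    public:
        virtual double area() const = 0;
        virtual ~Shape() {}
    };

    class Circle : public Shape {
        double r_;
    public:
        explicit Circle(double r) : r_(r) {}
        double area() const { return 3.14159265358979 * r_ * r_; }
    };

    double unit_area() {
        Circle c(1.0);
        Shape& s = c;
        // The source form is a virtual dispatch through s, but s can only
        // refer to a Circle here, so a compiler may rewrite it as the
        // direct call c.Circle::area() and then inline it.
        return s.area();
    }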

Machine-independent scalar analysis and optimizations (a small example follows this list)
  Static single assignment
  Dead code elimination
  Partial redundancy elimination
  Conditional constant propagation
  Global value numbering
  Strength reduction
  Reassociation of expressions
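
To make a couple of these concrete, the hand-written fragment below shows conditional constant propagation proving a branch condition false, after which dead code elimination removes the branch and the call it guards, reducing the body to a single multiply. The function names are invented for this example.

    static int call_count = 0;
    static void count_call() { ++call_count; }

    int scale(int x) {
        const int debug = 0;
        int y = x * 4;
        if (debug != 0) {    // constant propagation proves this is always false
            count_call();    // unreachable, removed by dead code elimination
            y = y + 1;
        }
        return y;            // optimized body is equivalent to: return x * 4;
    }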

Machine-dependent optimizations (an instrumentation sketch follows this list)
  Instruction scheduling
  Register allocation
  Profiling and instrumentation tools
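
As a hint of what a profiling and instrumentation tool produces, the hand-written fragment below shows the effect of inserting a counter at the entry of each basic block; the names and layout are invented for this example.

    #include <cstdio>

    static unsigned long bb_count[3];       // one counter per instrumented block

    int abs_sum(const int* a, int n) {
        ++bb_count[0];                      // entry block
        int s = 0;
        for (int i = 0; i < n; ++i) {
            ++bb_count[1];                  // loop body block
            s += a[i] < 0 ? -a[i] : a[i];
        }
        ++bb_count[2];                      // exit block
        return s;
    }

    void dump_profile() {
        for (int b = 0; b < 3; ++b)
            std::printf("block %d: %lu\n", b, bb_count[b]);
    }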

Schedule

Year 1
  SUIF infrastructure base
  Sequential GCC and Fortran compilers
  Prototype of object-oriented SUIF
  Definition of machine-level SUIF

Year 2
  Parallelization and loop transformations
  x86 back end
  Fortran, Java, and C++ front end connections
  Partial completion of scalar optimization, register allocation, and instruction scheduling

Year 3
  Complete system release
  Interprocedural analysis


The SUIF Compiler Team

Institution               Principal Investigators        Subjects
Stanford University       Monica Lam                     Base SUIF system;
                                                         interprocedural parallelization;
                                                         pointer analysis
UC Santa Barbara / MIT    Urs Holzle, Martin Rinard      Object-oriented programming languages
Rice University           Keith Cooper, Linda Torczon    Scalar optimizations
Harvard University        Mike Smith                     Machine-dependent optimizations
Portland Group Inc.       Vince Schuster                 EDG front end connection;
                                                         validation, release, and support


Sponsors

The SUIF compiler infrastructure is supported in part by DARPA, NSF, Advanced Micro Devices, Digital Equipment Corporation, Intel Corporation, and Sun Microsystems.


Be sure to visit our home page to learn more about the SUIF compiler system!


If you have any questions or comments, please contact the webmaster.

Last updated on January 12, 1998.