Modern digital system design relies heavily on simulation to reduce the number of design errors and to improve system efficiency. In large system designs so much time is spent in simulation that it has become a design bottleneck. Event-driven simulation and levelized compiled simulation are two well-known simulation techniques that are currently used in digital system design.
In event-driven simulation, events are managed dynamically by an event scheduler. The main advantage of event-driven scheduling is flexibility: event-driven simulators can simulate both synchronous and asynchronous models with arbitrary timing delays. The disadvantage of event-driven simulation is low performance, because maintaining the event queue adds run-time overhead to every event.
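The dynamic scheduling described above can be sketched as follows. This is a minimal illustrative simulator, not the implementation discussed in the paper; the data layout (`gates`, `fanout`, `stimuli`) and all names are assumptions chosen for clarity.

```python
import heapq

def simulate(gates, fanout, stimuli, end_time):
    """Minimal event-driven logic simulation (illustrative sketch).

    gates   : signal -> (eval_fn, input_signals, delay); primary inputs omitted
    fanout  : signal -> list of gate outputs it drives
    stimuli : list of (time, signal, value) initial events
    """
    value = {}
    queue = list(stimuli)           # the dynamic event queue
    heapq.heapify(queue)
    while queue and queue[0][0] <= end_time:
        t, sig, v = heapq.heappop(queue)
        if value.get(sig) == v:
            continue                # value unchanged: the event is absorbed
        value[sig] = v
        for g in fanout.get(sig, []):     # schedule dependent evaluations
            fn, ins, delay = gates[g]
            out = fn(*(value.get(i, 0) for i in ins))
            heapq.heappush(queue, (t + delay, g, out))
    return value
```

Every value change schedules follow-on evaluations through the heap; this dynamic ordering is exactly the overhead that compiled techniques try to remove.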
Levelized compiled code logic simulators have the potential to provide much higher simulation performance than event-driven simulators because they eliminate much of the run-time overhead associated with ordering and propagating events. This is done by evaluating all components once each clock cycle, in an order that ensures all inputs to a component have their latest values by the time the component is executed. The main disadvantage of levelized compiled simulation techniques is that they are not general. Most levelized compiled logic simulators cannot simulate models with arbitrary delays (RAVEL is a notable exception). Furthermore, these techniques will not work on asynchronous models or models with unclocked feedback. In practice, even though most digital systems are synchronous, asynchronous chip interfaces are common.
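The fixed evaluation order that levelized simulation relies on can be computed by a topological sort of the netlist. The sketch below assumes an acyclic combinational netlist and invented names; it is not the representation used by any particular simulator.

```python
def levelize(gates):
    """Topologically order gates so each gate follows all gates feeding it.

    gates : name -> (eval_fn, input_names); assumes an acyclic netlist.
    """
    order, seen = [], set()
    def visit(g):
        if g in seen or g not in gates:   # primary inputs are skipped
            return
        seen.add(g)
        for i in gates[g][1]:             # visit fan-in first
            visit(i)
        order.append(g)
    for g in gates:
        visit(g)
    return order

def run_cycle(gates, order, inputs):
    """One clock cycle: evaluate every gate exactly once, in levelized order."""
    value = dict(inputs)
    for g in order:
        fn, ins = gates[g]
        value[g] = fn(*(value[i] for i in ins))
    return value
```

Because the order is fixed at compile time, the per-cycle loop is straight-line work with no queue management, which is the source of the speedup.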
In this paper we present a general method for compiling event-driven models, called static simulation, that combines the generality of event-driven simulation with the efficiency of the levelized compiled approach. Like event-driven simulation, our technique applies to general models, including both synchronous and asynchronous designs. The only restriction is that any specified delays in the simulation must be known constants at compile time. For efficiency, our technique schedules the events at compile time, thus eliminating the need for a run-time event queue and its associated overhead. We replace the event queue with inexpensive run-time tests where necessary. For the models we have tested, these run-time tests incur significantly less overhead than a run-time event queue.
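To illustrate the idea of replacing the event queue with cheap run-time tests, consider what statically scheduled code for a two-inverter chain might look like. This is an invented sketch of the general idea, not code emitted by the compiler described in the paper; all names and the state layout are assumptions.

```python
def compiled_step(a, state):
    """One statically scheduled step for a chain of two inverters.

    The compiler has fixed the evaluation order at compile time; each
    cheap "did this signal change?" test stands in for a dynamic
    enqueue/dequeue that an event-driven simulator would perform.
    """
    if a != state['a']:                  # run-time test replaces an event
        state['a'] = a
        state['b'] = 1 - a               # first inverter, only on a change
    if state['b'] != state['b_last']:    # test for a change on b
        state['b_last'] = state['b']
        state['c'] = 1 - state['b']      # second inverter
    return state
```

When nothing changes, each step costs only a pair of comparisons, which is far cheaper than heap operations on an event queue.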
We represent the event-driven behavior with an event graph, whose vertices represent events in the simulation and whose edges represent the causal relationships between the events. We apply the general technique of partial evaluation to schedule the events as well as possible using statically available information. Specifically, the compiler tries to approximate the dynamic simulation process by keeping track of all the available static information that affects the contents of the run-time event queue in a dynamic simulation. This general method can be applied uniformly to all models, unlike previous approaches such as LECSIM, TORTLE, and others.
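An event graph of this kind can be represented as a simple directed graph, with causal queries answered by reachability. The class below is an assumed minimal representation for illustration, not the data structure used in VeriSUIF.

```python
class EventGraph:
    """Vertices are simulation events; a directed edge (u, v) records
    that event u may cause event v. (Illustrative sketch only.)"""

    def __init__(self):
        self.succs = {}                        # event -> events it may trigger

    def add_edge(self, cause, effect):
        self.succs.setdefault(cause, []).append(effect)
        self.succs.setdefault(effect, [])      # ensure the vertex exists

    def reachable(self, start):
        """All events that may run as a consequence of `start` firing."""
        seen, stack = set(), [start]
        while stack:
            e = stack.pop()
            if e not in seen:
                seen.add(e)
                stack.extend(self.succs.get(e, []))
        return seen
```

A compiler can traverse such a graph to decide, statically, which events can ever follow which others, approximating what the run-time event queue would contain.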
To test our algorithm, we have implemented a prototype Verilog simulator, called VeriSUIF, using the SUIF (Stanford University Intermediate Format) compiler system. We chose Verilog mainly because it is a relatively simple language to implement. The VeriSUIF simulator is particularly useful for long-running regression tests because it produces a faster simulation than other techniques. However, our current implementation is unsuitable for other phases of the design process because it does not support interactive debugging.
The remainder of the paper is organized as follows. First we give a brief overview of Verilog and describe the features of Verilog that we support. Then we describe the event graph representation which underlies our method. Next we describe our mathematical model of traditional event-driven simulation and our static simulation technique. Finally, we discuss some optimizations, experimental results, and our conclusions.