TUTORIAL No. 3
High Level Design Validation:
Current Practices &
Future Trends
17th International Conference on VLSI Design
Mumbai, India
January 5th, 2004
Authors

Prof. Masahiro Fujita
Professor
Dept. of Electronic Engineering
University of Tokyo, Japan
Email: fujita@ee.t.u-tokyo.ac.jp

Member of Research Staff
Fujitsu Labs. of America
Sunnyvale, CA, USA
Email: ighosh@fla.fujitsu.com

Member of Research Staff
Fujitsu Labs. of America
Sunnyvale, CA, USA
Email: mukul@fla.fujitsu.com

Dr. Rajarshi Mukherjee
Member of Technical Staff
Calypto Design Automation
Santa Clara, CA, USA
Email: rajarshim@yahoo.com
Outline & Introduction
MASAHIRO FUJITA
University of Tokyo
Tokyo, Japan
fujita@ee.t.u-tokyo.ac.jp
Tutorial Outline

Introduction

High level design flow and verification issues

Theorem proving and Decision procedures

Formal verification techniques for FSM models

Semi-formal verification techniques

Simulation based verification techniques

Conclusions and future directions
Necessity of high level verification

- Over 70% of system LSI design time is spent on "verification"
- Over 50% of ASIC design "re-spins" are caused by functional design errors
   Need to find as many bugs as possible, as early as possible
   Once "golden models" are created, never introduce design errors
(Graph: number of remaining bugs decreasing over time)
Verification methods: Simulation/Emulation

- Simulation
  - Applicable at any design level
  - Difficult to prepare "good" simulation patterns
   Getting harder to cover corner cases
- Emulation
  - Can handle software simultaneously
  - Cost and preparation time are not negligible
  - Same corner-case problem
Formal Verification

- "Prove" the correctness of designs
  - Both design and spec must be represented with mathematical models
  - Mathematical reasoning
  - Equivalent to "all cases" simulation
- Possible mathematical models
  - Boolean functions (propositional logic)
     How to represent and manipulate them on computers
  - First-order logic
     Needed to represent "high level" designs
  - Higher-order logic
     Theorem proving = interactive method
- The front-end is very important
  - In many cases, it determines the total performance
(Diagram: Spec and Design pass through a front-end tool into mathematical models, which feed the verification engines)
Formal vs Simulation based Verification

- Example: verification of an Exclusive-OR circuit:
   z = ~x&y + x&~y  ('~' means complement)
(Circuit diagram: NAND gates G1–G4 with internal signals a, b, c, computing output z from inputs x and y)
Example: Formal vs Simulation (Cont.)

- Verification of the Exclusive-OR circuit with "simulation"
  - Check that z and ~x&y + x&~y are equal for all cases
   Need to simulate 2**N cases

  x  y | z | ~x&y + x&~y
  0  0 | 0 | 0
  0  1 | 1 | 1
  1  0 | 1 | 1
  1  1 | 0 | 0

(Circuit diagram: the same NAND implementation with internal signals a, b, c)
Example: Formal vs Simulation (Cont.)

- Verification of the Exclusive-OR circuit with "formal verification"

  Gate equations:
  a = ~x + ~y
  b = ~x + ~a
  c = ~a + ~y
  z = ~b + ~c

  Derivation:
  z = ~b + ~c
    = ~(~x + ~a) + ~(~a + ~y)
    = a&x + a&y
    = (~x + ~y)&x + (~x + ~y)&y
    = x&~y + ~x&y

 Axiomatic and mathematical transformation of expressions to reach the specification
Design Representation in High level

- Super state: needs multiple cycles to execute

  a = 42;
  while (a < 100)
  { b = b + a;
    if (b > 50)
      c = c + d;
    a = a + c;
  }

(Diagram: an SFSMD model with super states PS1–PS3 containing operations Op1–Op6 is refined into an FSMD model with states S1–S3; a mapping is established between the control circuits and the datapath circuits)
Fusing Simulation and Formal Verification:
Semi-formal Verification Techniques

- Not so easy...
  - Simulation patterns cannot be used for formal verification (meaningless if used)
  - Properties cannot be "simulated" as they are
- Anyway, write down properties...
  - Assertion-based verification
  - Switch dynamically between formal and simulation-based verification
(Diagram: properties and simulation patterns both feed the combined flow)
Tutorial Outline

Introduction

High level design flow and verification issues

Theorem proving and decision procedures

Formal verification techniques for FSM models

Semi-formal verification techniques

Simulation based verification techniques

Conclusions and future directions
High-Level Design Flow &
Verification Issues
MASAHIRO FUJITA
University of Tokyo
Tokyo, Japan
fujita@ee.t.u-tokyo.ac.jp
Section Outline

System LSI design

From requirement to specification/functional
descriptions

From functional description to implementation

Verification problems in the design
methodology

Verification techniques for high level designs
System LSI Design

- System-on-Chip (SOC) design
   Increase of design complexity
(Chart: abstraction level vs. number of components — from System and Algorithm levels (around 1E0–1E2 components) through RTL and Gate (1E3–1E5) down to Transistor (1E6–1E7); accuracy increases toward the lower levels, abstraction toward the higher ones)
Introduction

- System-on-Chip (SOC) design
   Increase of design complexity
   Move to higher levels of abstraction
(Chart: the same abstraction-level vs. component-count chart, now highlighting the system level)
System-On-Chip Design

- Specification to architecture to implementation
- Behavior to structure
  - System level: system specification to system architecture
  - RT/IS level: component behavior to component micro-architecture
(Diagram: a specification + constraints is refined into a system architecture + estimates — a µProcessor, memories, IPs, and custom HW connected through bus interfaces — and further into an RTL/IS implementation + results with control FSMs, pipelines, PC/IR, register files, ALUs/FUs, memories, and gate-level datapaths)
Section Outline

System LSI design

From requirement to specification/functional
descriptions

From functional description to implementation

Verification problems in the design
methodology

Verification techniques for high level designs
From Requirement to
Specification/functional Descriptions

What is UML?

- UML is a language for specifying, visualizing, constructing, and documenting software artifacts

What does a modeling language provide?

- Model elements: concepts and semantics
- Notation: visual rendering of model elements
- Guidelines: hints and suggestions for using elements in notation
Representing System Architecture

- Use Case View: ties the other views together
- Logical View: functionality; end-user perspective; conceptual
- Implementation View: software management; programmers' perspective
- Process View: performance, scalability, throughput; system integrators' perspective
- Deployment View: system topology, delivery, installation, communication; system engineering perspective; physical
The Goals of UML

- A ready-to-use, expressive visual modeling language that promotes model development/exchange
- Extensibility/specialization of core concepts
- Independent of programming languages and development processes
- Formal basis for understanding the language
- Encourage growth of the OO tools market
- Support higher-level design concepts
  - Collaborations, frameworks, patterns, etc.
- Integrate the best practices of OOD
UML Modeling Constructs/diagrams:
Static vs. Dynamic Perspectives

- A diagram is a view into a model
  - Presented from the aspect of a particular stakeholder
  - Provides a partial representation of the system
  - Is semantically consistent with other views
- In UML, there are nine standard diagrams
  - Static views: Use Case, Class, Object, Component, Deployment
  - Dynamic views: Sequence, Collaboration, Statechart, Activity
UML Modeling Constructs/diagrams:
Classification by Capability/timeline

- Use-Case Diagrams
- Class and Object Diagrams
- Behavior Diagrams
  - Statechart Diagrams
  - Activity Diagrams
  - Interaction Diagrams
    - Sequence Diagram
    - Collaboration Diagram
- Implementation Diagrams
  - Component Diagram
  - Deployment Diagram
Relationship of Models and Diagrams

(Diagram: Use Case, Sequence, Collaboration, Statechart, Class, Object, Component, Deployment, and Activity diagrams are all views into the underlying models; Sequence and Collaboration diagrams capture scenarios, and Statechart diagrams capture state behavior)
Description Example

- Host interface for compact flash memory
  - Interface between NAND-type flash memory and a host PC
- Uses a data buffer for asynchronous data transfer and protocols
 Use Case diagram
(Use Case diagram: the Host PC performs data transfers with protocol change through the design target, and confirms/changes various status of the NAND flash memory)
Interface between NAND Flash
Memory and the Design Target

- Use Case diagram
(Use Case diagram: the NAND flash memory and the CPU interact with the design target to confirm/change status and to buffer data)
Activity Diagram

(Activity diagram: wait for an ATA command/interrupt → interpret the command → if the NAND status is busy, send status to the CPU; otherwise perform the data transfer between the host and the design target, then send status to the CPU)
Sequence Diagram

(Sequence diagram: the CPU gets the access right and requests a data transfer; the data buffer starts the transfer over the synchronous interface to the NAND flash memory; data is transferred while the memory reports busy, and both sides finish the transfer)
Section Outline

System LSI design

From requirement to specification/functional
descriptions

From functional description to implementation

Verification problems in the design
methodology

Verification techniques for high level designs
Abstraction Levels

  Model          | Structure / Implementation detail | Order / Timing detail
  ---------------|-----------------------------------|--------------------------------------------
  Requirements   | Constraints                       | Application domain MOCs (Matlab, SDF, etc.)
  Specification  | Functional                        | Untimed (causality)
  Architecture   | Structural                        | Timed (estimated)
  Communication  | Bus-functional                    | Timing-accurate
  Implementation | RTL/IS                            | Cycle-accurate
  Manufacturing  | Gate netlist                      | Gate delays

Many small steps of incremental refinement
Design Flow

(Flow diagram, adding structure/implementation and timing detail step by step:)
- System design capture → Specification model (untimed/causality), using algorithm IP
- Architecture exploration → Architecture model (timed/estimated), using component IP
- Communication synthesis → Communication model (bus-functional, timing-accurate), using protocol IP
- Backend: hardware synthesis, interface synthesis, software compilation → Implementation model (RTL/IS, cycle-accurate), using RTL and RTOS IP
SpecC Methodology

(Flow diagram: the design flow is paired with a validation flow. Capture produces the specification model; architecture exploration the architecture model; communication synthesis the communication model; and the backend — hardware synthesis, interface synthesis, software compilation — the implementation model. At each stage the model is compiled into a simulation model for validation, analysis, and estimation, drawing on algorithm, component, protocol, RTL, and RTOS IP.)
Specification Model

- High-level, abstract model
  - Pure system functionality
  - Algorithmic behavior
  - No implementation details
- No implicit structure / architecture
  - Behavioral hierarchy
- Untimed
  - Executes in zero (logical) time
  - Causal ordering
  - Events only for synchronization
(Sidebar: Specification model → Architecture exploration → Architecture model → Communication synthesis → Communication model → Backend → Implementation model)
Specification Model Example

- Simple, typical specification model
  - Hierarchical parallel-serial composition
  - Communication through ports, variables, and events
(Diagram: behavior B1 contains behaviors B2 and B3, communicating through variables v1, v2 and event e2)
Architecture Exploration

- Component allocation / selection
- Behavior partitioning
- Variable partitioning
- Scheduling
(Sidebar: architecture exploration is the step from the specification model to the architecture model)
Allocation, Behavior Partitioning

- Allocate PEs
- Partition behaviors
- Globalize communication
 Additional level of hierarchy to model PE structure
(Diagram: behaviors B2 and B3 of B1 are mapped onto processing elements PE1 and PE2; variable v1 and channel c2 become global communication)
Architecture Model Example

(Diagram: PE1 runs B1 and B2, PE2 runs B3; the shared variable v1 is duplicated per PE and kept consistent through message-passing behaviors B13snd/B13rcv and B34snd/B34rcv over channels cb13 and cb34, with c2 between them)
Architecture Model

- Component structure/architecture
  - Top level of behavior hierarchy
  - Behavioral/functional component view
  - Behaviors grouped under top-level component behaviors
  - Sequential behavior execution
- Timed
  - Estimated execution delays
Communication Synthesis

- Bus allocation / protocol selection
- Channel partitioning
- Protocol, transducer insertion
- Inlining
(Sidebar: communication synthesis is the step from the architecture model to the communication model)
Bus Allocation / Channel Partitioning

- Allocate busses
- Partition channels
- Update communication
 Additional level of hierarchy to model bus structure
(Diagram: channels cb13 and cb34 between PE1 and PE2 are grouped under bus Bus1)
Model after Channel Partitioning

(Diagram: PE1 and PE2 now communicate through Bus1, which contains channels cb13 and cb34 and carries c2; B13snd/B13rcv and B34snd/B34rcv connect to the bus)
Communication Model Example

(Diagram: PE1 and PE2 communicate over the bus wires data[31:0] and ack; B13snd/B13rcv and B34snd/B34rcv implement the transfers)
Communication Model

- Component & bus structure/architecture
  - Top level of hierarchy
  - Bus-functional component models
- Timing-accurate bus protocols
- Behavioral component description
- Timed
  - Estimated component delays
Backend

- Clock-accurate implementation of PEs
- Hardware synthesis
- Software synthesis
- Interface synthesis
(Sidebar: the backend is the step from the communication model to the implementation model)
Hardware Synthesis

- Schedule operations into clock cycles
  - Define clock boundaries in leaf behavior C code
  - Create FSMD model from scheduled C code
(Diagram: behaviors B13rcv, B3, and B34snd inside PE2, with clock boundaries inserted at PE2_CLK)
Software Synthesis

- Implement behavior on a processor instruction set
- Code generation
- Compilation
(Diagram: behaviors B1, B2, B13snd, and B34rcv on PE1 are compiled into instruction sequences — MOVE, SHL, INC, PUSH, CALL, POP on registers r0–r4)
Interface Synthesis

- Implement communication on components
  - Hardware bus interface logic
  - Software bus drivers
(Diagram: a software driver (DRV) on PE1 uses IBusMaster/IProtocolMaster through states S0–S2 over PE1Bus/PE1Protocol, while PE2's bus interface implements IProtocolSlave/IBusSlave through states S3–S4 over PE2Bus/PE2Protocol, connected by the wires data[31:0] and ack)
Implementation Model Example

(Diagram: PE1 is a software processor — object code (OBJ) running on an instruction-set simulator (ISS) with ports PORTA–PORTC and interrupt INTA; PE2 is custom hardware with states S0–S4; both are clocked by PE1_CLK/PE2_CLK and connected through data[31:0] and ack)
Implementation Model

- Cycle-accurate system description
- RTL description of hardware
   Behavioral/structural FSMD view
- Object code for processors
   Instruction-set co-simulation
- Clocked bus communication
   Bus interface timing based on PE clock
Support: 1. Modeling Engine

(Flow: Capture → Specification model → Arch. refinement → Architecture model → Comm. refinement → Communication model → Impl. refinement → Implementation model; each model is compiled and simulated for validation)
Support: 2. Refinement Engine

(Same flow, annotated with the design decisions at each step: algorithm selection, browsing, and spec optimization at capture; allocation, behavior partitioning, and scheduling/RTOS at architecture refinement; protocol selection, channel partitioning, and arbitration at communication refinement; cycle scheduling, protocol scheduling, and SW assembly at implementation refinement)
Support: 3. Exploration Engine

(Same flow, extended with analysis feedback: profiling of the specification model with profiling weights produces profiling data; component/IP and protocol models drive estimation of the architecture and communication models, and the estimation results guide the refinement decisions)
Support: 4. Synthesis Engine

(Same flow, extended with synthesis and verification: component/IP and protocol attributes drive automatic "synthesize decisions" for architecture, communication, and implementation synthesis; each refined model can be verified against the previous one, and RTL components are generated at the implementation step)
Tool example: SoC Environment (SCE) from UCI

(Diagram: SCE integrates the refinement and validation flows — profiling, estimation, allocation, behavior partitioning, scheduling/RTOS, protocol selection, channel partitioning, arbitration, cycle/protocol scheduling, and SW assembly — around the specification, architecture, communication, and implementation models)
Section Outline

System LSI design

From requirement to specification/functional
descriptions

From functional description to implementation

Verification problems in the design
methodology

Verification techniques for high level designs
Formal verification in SpecC based
design methodology

- C based design description from specification (functional) level down to RTL
- Incremental refinement on the SpecC/C descriptions
- Equivalence checking between refinements
  - Between sequential and parallel descriptions
  - Between two descriptions with the same control structure
- Property checking on each refinement

(Figure 1. C-based system design flow: a specification in C is refined by removal of pointers and recursive calls, then by introduction of concurrency (SpecC or SystemC may be used here) into a refined description with concurrency, and further into a refined description for the hardware parts, down to the RTL design; the software parts remain the same. At each refinement step, equivalence checking is applied between the two descriptions and model checking on each description.)
Execution semantics in SpecC

- Sequentiality rule:
  Tstart(B1) <= Tstart(a) < Tend(a) <= Tstart(b) < Tend(b) <= Tstart(c) < Tend(c) <= Tend(B1)
   Sequential execution within a thread
- Example:

  behavior B1
  { void main(void)
    { a.main();
      b.main();
      c.main();
    }
  };

(Timing diagram: within B1, the intervals of a, b, and c follow one another along the time axis)
Execution semantics in SpecC (cont.)

- Sequentiality rule (per thread):
  Tstart(B) <= Tstart(a) < Tend(a) <= Tstart(b) < Tend(b) <= Tstart(c) < Tend(c) <= Tend(B)
  Tstart(B) <= Tstart(d) < Tend(d) <= Tstart(e) < Tend(e) <= Tstart(f) < Tend(f) <= Tend(B)
- Example: concurrency

  behavior B
  { void main(void)
    { par { b1.main();
            b2.main(); }
  }};

  behavior B1
  { void main(void)
    { a.main();
      b.main();
      c.main(); } };

  behavior B2
  { void main(void)
    { d.main();
      e.main();
      f.main(); } };

(Timing diagram — one possible schedule of B: d, e, a, f, b, c interleaved)
Execution semantics in SpecC (cont.)

- Waitfor rule:
  0 <= Tstart(a) < Tend(a) < 1
  0 <= Tstart(w) < Tend(w) = 10
  10 <= Tstart(b) < Tend(b) < 11
- Time-interval formalism
  - Simulation time
  - Only waitfor increases simulation time
  - Other statements execute in zero simulation time
- Example:

  behavior B
  { void main(void)
    { a.main();
      waitfor 10;
      b.main();
    }
  };

(Timing diagram: a executes between t = 0 and t = 1, the waitfor w spans up to t = 10, and b executes between t = 10 and t = 11)
Synchronization Semantics

- wait/notify rule in the time-interval formalism:
  - wait cannot proceed until it receives a notified event
  - a notified event is valid until a wait or waitfor statement is reached
- Example:
  behavior A: a.main(); wait e1; b.main();
  behavior B: c.main(); notify e1; d.main();
- The notify-wait rule derives: Tend(w) >= Tend(n)
   wait cannot proceed until it receives the notified event
(Timing diagram: A runs a, then 'wait e1' starts; B runs c, then performs 'notify e1'; 'wait e1' succeeds only after the notify is done, so b follows n — this order is guaranteed!)
Incremental Refinement Steps

- Starting with a functional model, make each part more detailed
  - A1, B1, and C1 are refinements of A, B, and C respectively

  Before:
  void A() { A }
  void B() { B }
  void C() { C }
  void main() { A(); B(); C(); }

  After:
  void A1() { A1 }
  void B1() { B1 }
  void C1() { C1 }
  void main() { A1(); B1(); C1(); }

  (A, B, C and A1, B1, C1 stand for the behavior bodies)
Incremental Refinement Steps

- Change of control structures
  - IF-THEN-ELSE
  - Sequential to parallel

  Sequential version:
  void A1() { A1 }
  void B1() { B1 }
  void C1() { C1 }
  void main() { A1(); B1(); C1(); }

  Parallel version:
  void A1() { A1 }
  void B1() { B1 }
  void C1() { C1 }
  void main() {
    par {
      A1.main();
      B1.main();
    }
    C1.main();
  }

  (notify/wait synchronization is inserted between the parallel behaviors)
Incremental Refinement Steps

- Again make each part more detailed
   A2, B2, and C2 are refinements of A1, B1, and C1 respectively

  Before:
  void A1() { A1 }
  void B1() { B1 }
  void C1() { C1 }
  void main() { par { A1.main(); B1.main(); } C1.main(); }

  After:
  void A1() { A2 }
  void B1() { B2 }
  void C1() { C2 }
  void main() { par { A1.main(); B1.main(); } C1.main(); }

  (the notify/wait synchronization is preserved across the refinement)
Equivalence Checking Comparison of
two Descriptions with Same Control

- If A and A1, B and B1, C and C1 are equivalent, then the entire descriptions are equivalent
- Comparison of two "similar" descriptions

  Description 1:
  void A() { A }
  void B() { B }
  void C() { C }
  void main() { par { A.main(); B.main(); } C.main(); }

  Description 2:
  void A1() { A1 }
  void B1() { B1 }
  void C1() { C1 }
  void main() { par { A1.main(); B1.main(); } C1.main(); }

  (both use the same notify/wait synchronization)
Equivalence Checking between
Sequential and Parallel Versions

- Based on synchronization verification, prove equivalence
- Extraction of synchronization and dependency analysis

  Sequential version:
  void A() { A }
  void B() { B }
  void C() { C }
  void main() { A(); B(); C(); }

  Parallel version (notify in A, wait in B):
  void A() { A; notify }
  void B() { wait; B }
  void C() { C }
  void main() { par { A.main(); B.main(); } C.main(); }

  (the notify-wait pair are supposed to communicate!)
Various SpecC Description Refinements

- Starting from pure functional descriptions, refine them into more HW/SW oriented ones

  Before:
  void A() { A }
  void B() { B }
  void C() { C }
  void main() { A(); B(); C(); }

  After:
  void A1() { A1 }
  void B1() { B1 }
  void C1() { C1 }
  void main() { A1(); B1(); C1(); }
SpecC Description Refinements (cont.)

- From sequential descriptions to parallel ones

  Sequential version:
  void A1() { A1 }
  void B1() { B1 }
  void C1() { C1 }
  void main() { A1(); B1(); C1(); }

  Parallel version:
  void A1() { A1 }
  void B1() { B1 }
  void C1() { C1 }
  void main() { par { A1.main(); B1.main(); } C1.main(); }
SpecC Description Refinements (cont.)

- More refinement on each block of the descriptions

  Before:
  void A1() { A1 }
  void B1() { B1 }
  void C1() { C1 }
  void main() { par { A1.main(); B1.main(); } C1.main(); }

  After:
  void A1() { A2 }
  void B1() { B2 }
  void C1() { C2 }
  void main() { par { A1.main(); B1.main(); } C1.main(); }
Equivalence Checking between
Same Control Structures

- The two descriptions below are equivalent if A and A1, B and B1, C and C1 are equivalent
- Equivalence checking between two very similar descriptions

  Description 1:
  void A() { A }
  void B() { B }
  void C() { C }
  void main() { par { A.main(); B.main(); } C.main(); }

  Description 2:
  void A1() { A1 }
  void B1() { B1 }
  void C1() { C1 }
  void main() { par { A1.main(); B1.main(); } C1.main(); }
Equivalence Checking between
Sequential and Parallel Structures

- For example, if A and B are totally independent computations, the two versions are equivalent
- Extract the control structure and check that the final computation is equal

  Sequential version:
  void A() { A }
  void B() { B }
  void C() { C }
  void main() { A(); B(); C(); }

  Parallel version:
  void A() { A }
  void B() { B }
  void C() { C }
  void main() { par { A.main(); B.main(); } C.main(); }
Verification of Synchronization

Synchronization Verification:
Par Statement in SpecC

  C language:
  void A() { A }
  void B() { B }
  void C() { C }
  void main() { A(); B(); C(); }

  SpecC language:
  void A() { A }
  void B() { B }
  void C() { C }
  void main() { par { A.main(); B.main(); } C.main(); }
Synchronization in SpecC

- Add notify/wait of event e for synchronization
  - 'wait' stops the process until the event is notified

  void A() {
    A
    notify e1;
    wait e2;
    B
  }
  void B() {
    wait e1;
    notify e2;
  }
  void main() {
    par {
      A.main();
      B.main();
    }
  }

(Timing diagram: process B stops at 'wait e1' until A executes 'notify e1'; process A stops at 'wait e2' until B executes 'notify e2'; then A resumes)
Synchronization in SpecC (cont.)

- Synchronization by notify/wait

  main() {
    par { a.main();
          b.main(); }}

  behavior a {
    main() { x = 10;     /*st1*/
             y = x + 10; /*st2*/
             notify e;   /*New*/ }}

  behavior b {
    main() { wait e;     /*New*/
             x = 20;     /*st3*/ }}

- Without the notify/wait pair, the result on y is ambiguous, caused by the race between x = 10 /*st1*/ and x = 20 /*st3*/
- With the notify/wait pair: y = 20 (always)
(Timing diagram: intervals T1, T2 of a.main() and T3 of b.main(), with Tas = Tbs; the wait forces T3 to start only after T2 ends)
Synchronization in SpecC (cont.)

- Synchronization by notify/wait

  main() {
    par { a.main();
          b.main(); }}

  behavior a {
    main() { x = 10;     /*st1*/
             y = x + 10; /*st2*/
             notify e;   /*New*/ }}

  behavior b {
    main() { wait e;     /*New*/
             x = 20;     /*st3*/ }}

- Tas = Tbs, Tae = Tbe
 Tas <= T1s < T1e <= T2s < T2e <= Tae
 Tbs <= T3s < T3e <= Tbe
 T2e <= T3s
 y = 20 (always)
Section Outline

System LSI design

From requirement to specification/functional
descriptions

From functional description to implementation

Verification problems in the design
methodology

Verification techniques for high level designs
Basic Method: Symbolic Simulation
(* Further discussed in a later section)

- Example of checking behavioral consistency based on symbolic simulation
- Equivalent variables are collected into equivalence classes (EqvClass)
(Example setup: Description 1 and Description 2 both contain a = v1; b = v2; — we are going to check their equivalence)
Symbolic Simulation Example

- Equivalence checking based on symbolic simulation; equivalent variables are collected into EqvClass
(Step 1: Description 1 (a = v1; b = v2;) is simulated, producing classes E1 = (a, v1) and E2 = (b, v2))
Symbolic Simulation Example

(Step 2: Description 2 is simulated, adding classes E3 and E4 alongside E1 = (a, v1) and E2 = (b, v2))
Symbolic Simulation Example

(Step 3: due to the equivalences recorded in E1 and E2, the classes from the two descriptions can be related)
Symbolic Simulation Example

(Step 4: E3 and E4 are merged into E3'; the descriptions are proved equivalent when the corresponding outputs end up in the same class)
Program Slicing for C/SpecC

- The code to be symbolically simulated is extracted by program slicing
  - Only the extracted code is simulated for verification
- Program slicing extracts the code that can affect (or be affected by) a variable
- Two kinds of slicing: backward slicing and forward slicing
- Based on a C program slicer, a program slicer for SpecC was developed, following the VHDL program slicer of [Shankar]
  - If the two descriptions have the same control structures, a small extension is enough
Backward Slicing

- Backward slicing for a variable v extracts all code that affects the variable v

  a = 2;
  b = 3;
  c = 5;
  a = a + 10;
  b = a * c; /* start */
  c = c + a;
  a = a * b;

(Backward slicing from the start statement keeps the statements that affect it: a = 2; c = 5; a = a + 10; and b = a * c itself)
Forward Slicing

- Forward slicing for a variable v extracts all code that is affected by the variable v

  a = 2;
  b = 3;
  c = 5;
  a = a + 10;
  b = a * c; /* start */
  c = c + a;
  a = a * b;

(Forward slicing from the start statement keeps it and the later statements it affects, such as a = a * b)
Efficient Equivalence Checking

- Comparison of two similar design descriptions
  - Extract textual differences
  - Dependency analysis by program-slicing-type techniques
  - Symbolically simulate the real difference and analyze the results
(Diagram: Description 1 and Description 2, each ending in "return a;", with their textual differences highlighted)
Verification Flow (1)

(Flow: Description 1 and Description 2 → pre-processes → identification of textual differences & ordering them → output the set of textual differences (d1, d2, d3, ...))
Identification of Textual Differences

- First, textual differences are identified by "diff"
- Then, they are sorted in the order of execution

  Description 1:
  int v1, v2, out, opcode;
  v1 = 3;
  v2 = 5;
  if (opcode == 1) {
    out = v1 + v2;
  }

  Description 2:
  int v1, v2, out, opcode;
  int reg1, reg2, alu;      /* d1 */
  v1 = 3;
  v2 = 5;
  reg1 = v1;                /* d2 */
  reg2 = v2;                /* d2 */
  if (opcode == 1) {
    alu = reg1 + reg2;      /* d3 */
    out = alu;              /* d3 */
  }
Verification Flow (2)

(Flow: for the set of differences (d1, d2, d3, ...): if no differences are left, verification terminates successfully. Otherwise: decide the target variables → backward slicing → symbolic simulation; if consistency is proved, continue with the next difference. If not, apply forward slicing → symbolic simulation; if consistency is proved, continue; if not, an erroneous trace is reported.)
Decision of Target Variables

- A variable v in a difference d is a target variable when v is defined in both descriptions and is assigned in the difference d

(Same example as before: differences d1, d2, d3 between Description 1 and Description 2; for instance, out qualifies for d3 because it is assigned there and defined in both descriptions)
Case Split

(Same verification flow as before; when consistency cannot be proved directly, the check is split into cases along the flow — backward slicing with symbolic simulation first, then forward slicing, and finally an erroneous trace is reported)
Symbolic Simulation / Bounded
Model Checking

- Reduced to Boolean SAT problems (OBDD or SAT)
(Diagram: the sequential circuit with inputs x0...xn, state variables y0...ym, and memory is expanded k times into Cycle 1, Cycle 2, ..., Cycle k; a property such as
  (reset && X(reset) && XX(reset) && ...) ⇒ (XX(out) = in)
is converted into a checker circuit. The question becomes: is there an input pattern generating "one" at the checker's output?)
Approaches for Verifying High-level Designs

- Propositional logic ⇒ first-order logic, ...
  - e.g. abstraction of arithmetic operators by symbols (<, +, *, ...)
- Approaches
  - SAT for the logic of uninterpreted functions with equality
     Equivalence checking without considering the meaning of functions
  - Hybrid SAT
     SAT for Boolean formulas with linear constraints
- Tool
   SVC (Stanford Validity Checker)
    - Boolean logic, linear programming, uninterpreted functions, arrays, ...
Logic of Uninterpreted Functions with
Equality

- Arithmetic operations are handled as symbols
   Y := if c then x else f(x, y)
(Diagram: a multiplexer selects between x and f(x, y) under control c to produce Y)
- Equivalence checking: "Design 1 = Design 2?"
  - There is a decision procedure
  - Can be checked by the SVC tool
Abstraction
(* Further discussed in a later section)

- We would like to handle very large descriptions
   Reducing the number of states is a must
- Merging multiple states into a single state
   Abstracted designs contain more behaviors than the originals
   False-negative results are possible, but never false-positive
(Diagram: the set of behaviors of the original design is contained in the set of behaviors of the abstracted design)
Abstraction

- If some property holds on the abstracted design, then the original design is correct
- Otherwise, either the design is incorrect, or the abstraction is too strong
   Refinement is necessary to prove the property
(Diagram: the behaviors allowed by a property; from the verification result for the abstracted design alone, "the property does not hold" cannot distinguish an incorrect design from an abstraction that is too strong)
Model Checking of C programs

- Basic techniques
  - abstraction, model checking, and error analysis + refinement
(Diagram — Microsoft's SLAM tool: a C program and properties are abstracted into a Boolean program; model checking either reports "correct" or produces a counterexample; error analysis on the counterexample either confirms a bug or generates new constraints that refine the abstraction and modify the program)
Tutorial Outline
- Introduction
- High level design flow and verification issues
- Theorem proving and Decision procedures
- Formal verification techniques for FSM models
- Semi-formal verification techniques
- Simulation based verification techniques
- Conclusions and future directions
Theorem Proving & Decision
Procedures
MASAHIRO FUJITA
University of Tokyo
Tokyo, Japan
fujita@ee.t.u-tokyo.ac.jp
Symbolic Logic
- Symbolic logic deals with the structure of reasoning
- It defines ways to deal with relationships between concepts
- Provides ways to compose proofs of statements
- Provides ways to express statements in a precise, compact and symbolic manner
Symbolic Logic (cont.)
- More precisely, a formal symbolic logic consists of
  - A notation (symbols and syntax)
  - A set of axioms (facts)
  - A set of inference rules (deduction scheme)
Types of Symbolic Logic
- Propositional Logic: commonly known as Boolean algebra
  - Variables in this algebra ∈ {0, 1}
- First Order Logic
  - Existential (∃) and universal (∀) quantifiers over variables
- Higher Order Logic
  - Existential and universal quantification over sets and functions
Key Features of a Theory
- Expressiveness
  - what kind of statements can be written in the theory
- Completeness
  - all valid formulas are provable
- Decidability
  - there exists an algorithm to deduce the truth of any formula in the theory
Comparison of Types of Logic
- Propositional
  - Least expressive, decidable, complete
- First Order Logic
  - More expressive, complete, but not decidable
- Higher Order Logic
  - Most expressive, not decidable, not complete
First-Order Logic
- First-Order Logic
  - a theory in symbolic logic
  - formalizes quantified statements
    - E.g. "there exists an object such that…" or "for all objects, it is the case that…"
- Statements in first-order logic are built using:
  - Variables: x, y
  - Constants: 0, C
  - Functions: f(x), x + y
  - Predicates: p(x), x > y, x = y
  - Boolean connectives: ¬, ∧, ∨, →
  - Quantifiers: ∀, ∃
First-Order Theories
- A first-order theory is
  - a set of first-order statements about a related set of constants, functions, and predicates.
- Example:
  - A theory of arithmetic might include the following statements about 0 and +:
    ∀x. (x + 0 = x)
    ∀x, y. (x + y = y + x)
Validity of an Expression in a Theory
- Given a theory and an expression
- Using the building blocks of that theory
  - if the expression evaluates to true for every evaluation, then it is valid in the theory
- Example:
  - X >= 0 is a valid expression in positive real arithmetic
Validity Checking & Undecidability
- Given a theory T in first-order logic and an expression φ, we wish to prove that φ is valid in T
  - Validity of φ in T is written as T ⊨ φ
- Undecidability of FOL
  - A well-known result states that, in general, it is undecidable whether a FOL formula is valid in a given first-order theory
- However, with appropriate restrictions on T and φ, the theory can be made decidable, and practical programs can be built around it
Why is Validity Checking Interesting?
- Many practical problems can be posed as formulas in a certain theory or a combination of theories
- A proof of validity of such a formula results in a solution to the problem
- Example
  - Scheduling of complex operations in a manufacturing plant
Proof System
- A formal proof in a theory is
  - a set of steps of statements
  - a statement is derived from a previous statement using a set of defined inference rules
- Rewrite rules
  - deal with mechanical massaging of statements, irrespective of the meaning of the statements
- Key requirement of a proof system
  - soundness: all provable formulas are logically true!
- A proof system need not be complete or decidable in order to be useful!
Theorem Prover
- Given a theory, a proof system, and a formula whose validity must be proved,
  - a system that allows a user to carry out the proof is broadly classified as a theorem prover
- Expressive, high abstraction, powerful reasoning
- Low automation, expert knowledge required
Automated vs. Interactive Theorem Proving
- John Rushby, SRI, asserts "There are no fully automatic theorem provers".
- For best performance most theorem provers require user guidance
  - Variable ordering, weighting of literals, function symbols, strategy selection, orientation of equations, invention of ordering lemmas, induction hints
- Automation exists in certain domains
  - Propositional logic using SAT solvers, BDDs
  - Linear arithmetic: integer/linear programming
  - Temporal logic: model checking
  - Induction (Boyer-Moore heuristics)
Well-known Theorem Provers
- Higher order: ALF, Alfa, Coq, HOL, PVS
- Logical frameworks: Isabelle, LF, Twelf
- Inductive: ACL2, Inka
- Automated: Gandalf, TPS, Otter, Setheo, SPASS, EQP, Maude, SVC
- Provers used in verification: HOL, PVS, ACL2, SVC
PVS
• Owre et al., 1992 at SRI
[Figure: the system model and the formulae to be proved are both translated into a PVS file, from which proofs are carried out.]
• The system model is translated either automatically or manually
• The formulae to be proved are also translated from their native forms
• Proofs are usually carried out interactively
A PVS Methodology
- Automate everything that is decidable
  - Propositional calculus, linear arithmetic, finite-state model checking
- Heuristic automation for obvious cases of other problems
  - Hashing conditional rewriter integrated with decision procedures
  - Instantiation, induction
- Human control for things difficult to automate
  - Sequent calculus presentation
  - Tactic language for defining higher-level proof strategies
Stanford Validity Checker (SVC)
- Started with applications to processor verification
  - [Burch and Dill '94]
  - [Jones et al. '95]
- Applications since release
  - Symbolic simulation [Su et al. '98]
  - Software specification checking [Park et al. '98]
  - Infinite-state model checking [Das and Dill '01]
  - Theorem prover proof assistance [Heilmann '99]
  - Integration into programming languages [Day et al. '99]
Overall Flow of SVC
- Given an expression whose validity has to be checked
  - Choose an atomic formula f in the expression
  - Case split on the atomic formula
    - Create two subformulas, for f = 0 and f = 1
    - Simplify the two subformulas
    - Iteratively check the validity of the two subformulas
- Example: (x < y) ∨ (y < x) ∨ (x = y)
  - Choose f = (x < y)
  - Case split on f to get two subformulas:
    (true) ∨ (y < x) ∨ (x = y), which simplifies to true
    (false) ∨ (y < x) ∨ (x = y), which simplifies to (y < x) ∨ (x = y)
Example (cont.)
(y < x) ∨ (x = y)
- Choose f = (y < x)
- Case splitting gives two subformulas:
  true ∨ (x = y), which simplifies to true
  false ∨ (x = y), which simplifies to (x = y)
- Case splitting on f = (x = y) gives two subformulas: true and false
- The only false leaf corresponds to the formula:
  ¬(x < y) ∧ ¬(y < x) ∧ ¬(x = y)
  If this formula can be proved unsatisfiable by the theory, then the original formula is a tautology
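The case-splitting loop above can be mimicked in miniature (a toy sketch, not SVC's implementation; the atom encoding is an assumption): atoms are split propositionally, and the theory is consulted only to discard inconsistent leaves, here the trichotomy of < and = over the reals.

```python
from itertools import product

def valid_modulo_theory(formula, consistent, n):
    """Case-split on the n atomic formulas: the formula is a tautology iff
    it holds under every truth assignment the theory allows."""
    return all(formula(*bits)
               for bits in product([False, True], repeat=n)
               if consistent(*bits))

# Atoms: a = (x < y), b = (y < x), c = (x = y) over the reals.
# Trichotomy: exactly one of the three atoms holds for any real x, y.
consistent = lambda a, b, c: (a + b + c) == 1
formula = lambda a, b, c: a or b or c    # (x < y) or (y < x) or (x = y)

print(valid_modulo_theory(formula, consistent, 3))  # True
```

The one leaf falsifying the formula (all three atoms false) is exactly the assignment the theory rules out.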
How is Validity Checking done?
- The validity checker is built on top of a core decision procedure for satisfiability, in a theory T, of a set of literals in the theory
  - A literal is an atomic formula or its negation
  - The core decision procedure depends on the theory
    - e.g. a satisfiability solver can be used for propositional logic
- SVC proposes
  - the use of several decision procedures, each targeted to a theory
  - this gives the strongest decision procedure
Need for Multiple Theories
- Why are multiple theories needed?
  - Most real-life problems, when cast into a problem of proving validity of a formula, span a set of theories
    - Propositional logic
    - Bit-vectors and arrays
    - Real, linear arithmetic
    - Uninterpreted functions & predicates
- A decision procedure for a combination of theories must be able to combine decision procedures that reason in each of these theories!
How are Multiple Decision Procedures Combined?
- Shostak's method
  - 1984 [Shostak '84]
  - Later corrections & clarifications
    - [Cyrluk et al. '96]
    - [Ruess and Shankar '01]
- Used in several automated deduction systems
  - PVS, STeP, SVC
Shostak's Method - Canonizer
- Two main components
  - Canonizer
  - Solver
- The canonizer rewrites terms into a unique form
  - Γ ⊨ a = b ⇔ canon(a) = canon(b)
- Examples of a canonizer for linear arithmetic
  - canon(a + a) = 2a
  - Unique ordering of variables
    - canon(b + a) = a + b
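A canonizer for linear arithmetic follows directly from the two examples above (a toy sketch; the pair-list term representation is an assumption): combine like terms, drop zero coefficients, and impose a fixed variable order.

```python
from collections import defaultdict

def canon(term):
    """Canonize a linear term given as (coefficient, variable) pairs:
    combine like variables, drop zero coefficients, sort variables."""
    coeffs = defaultdict(int)
    for c, v in term:
        coeffs[v] += c
    return tuple(sorted((v, c) for v, c in coeffs.items() if c != 0))

print(canon([(1, "a"), (1, "a")]))  # canon(a + a) = 2a: (('a', 2),)
print(canon([(1, "b"), (1, "a")]))  # canon(b + a) = a + b: (('a', 1), ('b', 1))
```

Because the output is a sorted tuple, two terms are equal in the theory exactly when their canonized forms compare equal.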
Shostak's Method – Solved Form
- Solved form: a set of equations Σ is said to be in solved form if the variable on the LHS of each equation appears in the set only once
  - In solved form: a = x + c, b = 2x + y
  - Not in solved form: a = b + c, x = a + z
- R(Σ) denotes the replacement of each LHS variable occurring in any RHS with its equation
  - e.g. with Σ = {a = b + c, x = b + c + z}: R(a + x) = 2b + 2c + z
Shostak's Method - Solver
- The solver transforms an equation into a corresponding set of equations in solved form
  - If Γ ⊨ a ≠ b, then solve(a = b) = { false }
  - Otherwise:
    - solve(a = b) = a set of equations Σ in solved form
    - Γ ⊨ (a = b ⇔ ∃x. Σ), where x is a set of fresh variables appearing in Σ, but not in a or b.
- Example: a solver for real linear arithmetic
  - solve(x - y - z = 0) = { x = y + z }
  - solve(x + 1 = x - 1) = { false }
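A solver in this style for real linear arithmetic fits in a few lines (a sketch; the encoding of an equation as a coefficient map plus constant, with the RHS already moved to the left, is an assumption):

```python
def solve(coeffs, const):
    """Solve sum(c*v) + const = 0 for one variable. Returns the solved form
    {v: (coeffs_of_rest, constant)}, 'false' if unsatisfiable, or {} if
    trivially true."""
    lhs = {v: c for v, c in coeffs.items() if c != 0}
    if not lhs:
        return {} if const == 0 else "false"
    v, c = sorted(lhs.items())[0]                  # isolate first variable
    rest = {u: -d / c for u, d in lhs.items() if u != v}
    return {v: (rest, -const / c)}

# solve(x - y - z = 0) = { x = y + z }
print(solve({"x": 1, "y": -1, "z": -1}, 0))
# solve(x + 1 = x - 1) reduces to 0*x + 2 = 0 = { false }
print(solve({"x": 0}, 2))  # false
```

The returned set trivially satisfies the solved-form condition, since it contains a single equation whose LHS variable does not appear on the right.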
The Simplified Shostak Algorithm
- Simplified algorithm due to Clark Barrett, NYU
- Given a set of equations E and disequations D
  - Step 1: use the solver to convert E into an equisatisfiable set of equations E' in solved form
  - Step 2: use this set of equations together with the canonizer to check whether any disequality is violated
    - For each a ≠ b ∈ D, check if canon(E'(a)) = canon(E'(b))
Combining Theories in Shostak
- Two Shostak theories T1 and T2 can be combined under certain restrictions to form a new Shostak theory T = T1 ∪ T2
- Solvers from the two theories can be combined by
  - Treating terms from the other theory as variables
  - Repeatedly applying the solvers from each theory until the resulting set of equations is in solved form
  - The issue of applying appropriate constraints to the variables from other theories is tricky
Summing Up
- A very powerful proof methodology
  - can describe and prove properties of the widest array of systems and abstractions
- The proof is not fully automatic
  - requires manual intervention
  - requires expertise and understanding of modeling concepts
  - some proofs may not terminate
- Industrial use
  - not yet very popular
  - some dedicated specialized use at Intel
  - the procedure needs to be more automated for practical use
Tutorial Outline
- Introduction
- Design flow and verification issues
- Theorem proving and Decision procedures
- Formal verification techniques for FSM models
  - model checking and formal engines
  - model checking in practice
  - equivalence checking
- Semi-formal verification techniques
- Simulation based verification techniques
- Conclusions and future directions
Model Checking and Formal
Engines
Fujitsu Labs. of America
Sunnyvale, California
Section Outline
- Introduction to Model Checking
- Engines for model checking
  - SAT Solvers
  - Binary Decision Diagrams (BDDs)
- Symbolic Model Checking with BDDs
- Model Checking using SAT
Sequential Logic Circuit
[Figure: a combinational circuit built from logic gates (AND, OR, NOT, XOR) computing an output y from inputs a, b, c; and a sequential circuit, in which combinational logic takes primary inputs together with present states fed back through latches from the next states, and produces primary outputs.]
Temporal Logic Model Checking
Verify reactive systems:
- Construct a state machine representation of the reactive system
  - Nondeterminism expresses the range of possible behaviors
  - "Product" of component state machines
- Express the desired behavior as a formula in some temporal logic (TL), e.g. CTL or LTL
  - e.g., for a traffic light controller design: "It is never possible to have a green light for both N-S and E-W."
- Determine whether or not the property holds: the model checker answers True, or False with a counterexample
MoC: Finite State Machine (FSM)
Mealy FSM: (I, S, δ, S0, O, λ)
- I: input alphabet
- S: finite, non-empty set of states
- δ: S × I → S, next-state function
- S0 ⊆ S: set of initial (reset) states
- O: output alphabet
- λ: S × I → O, output function
[Figure: a sequential circuit (combinational logic plus latches) and a state transition graph over S1, S2, S3 with edges labeled input/output: 0/0, 1/1, 1/0, …]
State transition table:
      x=0    x=1
S1    S1,0   S2,1
S2    S1,0   S2,0
S3    S3,0   S1,1
Classification of Properties
- Safety properties
  - "undesirable things never happen"
  - "desirable things always happen"
  - Examples:
    - A bus arbiter never grants the requests of two masters at once
    - The elevator does not reach a floor unless it is requested
    - The message received is the message sent
- Liveness (progress) properties
  - "a desirable state is repeatedly reached"
  - "a desirable state is eventually reached"
  - Examples:
    - Every bus request is eventually granted
    - A car at a traffic light is eventually allowed to pass
Fairness Constraints
- In general, not every run in a state machine is a valid behavior
- But all runs are included in transition systems
- Fairness constraints rule out certain runs
  - Properties are only checked for fair paths!!
  - A fair path must satisfy each fairness formula infinitely often
- Fairness constraints can be expressed as CTL formulas
  - Path quantifiers are restricted to only the fair paths
- Fairness examples
  - An arbiter arbitrates each requestor infinitely often
  - In a communication channel, a continuously transmitted message is eventually received
Temporal Logics
- Precise specification
- Amenable to symbolic manipulation
- Two kinds of temporal logics (TL)
  - Linear Temporal Logic (LTL): time is a linear sequence of events
  - Branching-time temporal logic (CTL, CTL*): time is a tree of events
- CTL
  - Formulae describe properties of computation trees
  - Computation trees are obtained by unwinding the transition system model
  - The branching structure is due to nondeterminism
  - CTL* is more powerful, and includes both CTL and LTL
Syntax of CTL
- Every atomic proposition is a CTL formula
- If f and g are formulae then so are:
  - ¬f, (f ∧ g), (f ∨ g), (f → g), (f ↔ g)
  - AG f: in all paths, in all states, f (in all futures, f)
  - EG f: in some path, in all states, f
  - AF f: in all paths, in some state, f (in every future, f)
  - EF f: in some future, f
  - A(f U g): in all paths, f holds until g
  - E(f U g): in some path, f holds until g
  - AX f: in every next state, f holds
  - EX f: in some next state, f holds
Examples
- AG ¬(EW_light_go ∧ NS_light_go)
- AG (car_at_intersection → AF light_green)
- AG (EF Restart)
CTL model checking
(Clarke and Emerson, Quielle and Sifakis)
M ╞ F
- M: transition system, F: CTL formula
- M defines a tree (unwind the transition system)
- F specifies the existence of one or all paths satisfying some conditions
- Verification checks whether these conditions hold for the tree defined by M
- Specifically:
  - Compute the set of states of M satisfying F
  - M ╞ F iff the initial states of M are in this set
CTL Model Checking Example 1
Compute EF p as a fixpoint: EF p = p ∨ EX(p) ∨ EX(EX(p)) ∨ …
[Figure: step 1 starts from the states labeled p; steps 2 and 3 add predecessor states until a fixed point is reached. EF p holds in all marked (green) states.]
- The computation terminates
- The computation involves calculation of strongly connected subgraphs (cycles)
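The EF fixpoint can be computed explicitly on a small graph (an illustrative sketch; the example graph is an assumption, not the slide's figure):

```python
def ef(states, edges, p_states):
    """EF p as a least fixpoint: start from the p-states and repeatedly add
    EX-predecessors until nothing changes."""
    z = set(p_states)
    while True:
        # Predecessors: states with at least one successor already in z.
        pre = {s for s in states if any(t in z for t in edges.get(s, ()))}
        if pre <= z:
            return z                    # fixed point reached
        z |= pre

states = {0, 1, 2, 3}
edges = {0: [1], 1: [2], 2: [2], 3: [0]}
print(sorted(ef(states, edges, {2})))  # [0, 1, 2, 3]: every state can reach 2
```

Symbolic model checkers run the same iteration, but with sets represented as BDDs rather than Python sets.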
CTL Model Checking Example 2
Compute EG p as a fixpoint: EG p = p ∧ EX p ∧ EX(EX p) ∧ …
[Figure: the initial step keeps all states labeled p; each step removes p-states with no successor still in the set, until termination.]
Checking Safety Conditions
- Start from S0 (resp. Z) = the set of initial (resp. bad) states
- Perform forward (resp. backward) reachability analysis to compute a reached state set R (resp. B)
- Determine if R (resp. B) intersects Z (resp. S0)
[Figure: forward reachability grows R from S0 toward the bad states Z; backward reachability grows B from Z toward S0.]
Model Checking Approaches
- Explicit state model checking
  - Requires explicit enumeration of states
  - Impractical for circuits with large state spaces
  - Useful tools exist: Murphi, SPIN, …
- Symbolic model checking
  - Representation & manipulation in terms of ROBDDs or SAT solvers
    - Represent the state machine (transition relation) and sets of states
    - Forward/backward reachability analysis
    - Based on computing the image/pre-image of a set of states
    - Fixed-point detection
Section Outline
- Introduction to Model Checking
- Engines for model checking
  - SAT Solvers
  - Binary Decision Diagrams (BDDs)
- Symbolic Model Checking with BDDs
- Model Checking using SAT
SAT Problem Definition
Given a CNF formula f, built from:
- A set of variables, V (a, b, c)
- A conjunction of clauses (C1, C2, C3)
- Each clause: a disjunction of literals over V
Does there exist an assignment of Boolean values to the variables V which sets at least one literal in each clause to '1'?
Example: (a ∨ b ∨ c)(¬a ∨ c)(¬a ∨ ¬b ∨ c) is satisfied by a = b = c = 1.
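Checking a candidate assignment against a CNF is one line per clause (a sketch; the clause polarities here are assumptions, since the overbars in the slide's formula did not survive extraction):

```python
def satisfies(assignment, cnf):
    """Each clause is a list of literals; '-x' denotes the negation of x.
    The CNF is satisfied iff every clause has at least one true literal."""
    def lit(l):
        return not assignment[l[1:]] if l.startswith("-") else assignment[l]
    return all(any(lit(l) for l in clause) for clause in cnf)

cnf = [["a", "b", "c"], ["-a", "c"], ["-a", "-b", "c"]]   # C1, C2, C3
print(satisfies({"a": True, "b": True, "c": True}, cnf))   # True
print(satisfies({"a": True, "b": True, "c": False}, cnf))  # False
```

The SAT problem asks whether any such assignment exists; checking one is easy, while finding one is the NP-complete part.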
DPLL Algorithm for SAT
[Davis, Putnam, Logemann, Loveland 1960, 62]
Given: a CNF formula f(v1, v2, …, vk) and an ordering function Next_Variable
Example: (a ∨ b)(¬a ∨ c)(a ∨ ¬b), clauses C1, C2, C3
[Figure: the search tree branches on a. The a = 0 branch propagates b = 1 from C1 and reaches a CONFLICT on C3; the a = 1 branch propagates c = 1 from C2 and reaches SAT.]
DPLL Algorithm: Unit Clause Rule
Rule: assign to true any single-literal clause.
Example: (a ∨ b ∨ c) with a = 0 and b = 0 forces c = 1.
Apply iteratively: Boolean Constraint Propagation (BCP)
  a ⟶ (¬a ∨ c)(b ∨ c)(a ∨ b ∨ c)(¬c ∨ e)(d ∨ e)(c ∨ d ∨ e)
  c ⟶ (b ∨ c)(¬c ∨ e)(d ∨ e)(c ∨ d ∨ e)
  e ⟶ (d ∨ e)
Each implied assignment (a, then c, then e) shrinks the remaining clause set.
Anatomy of a Modern SAT Solver
A modern SAT solver couples the DPLL algorithm with:
- Efficient BCP
- Conflict-driven learning: retain some/all recorded clauses
- Search restarts: restart the algorithm "periodically" to undo bad choices in variable ordering
- Clause database management: efficient garbage collection of useless clauses (e.g. inactive or large clauses)
Conflict Driven Search Pruning (GRASP)
Silva & Sakallah '95
- Non-chronological backtracking
- Conflict-clause recording
[Figure: a decision tree over x1 … xk; after a conflict at xk, the solver backtracks non-chronologically past irrelevant decisions to xj.]
Conflict Analysis: An Example
Current assignment: {x9 = 0@1, x10 = 0@3, x11 = 0@3, x12 = 1@2, x13 = 1@2}
Decision assignment: {x1 = 1@6}
ω1 = (¬x1 ∨ x2)
ω2 = (¬x1 ∨ x3 ∨ x9)
ω3 = (¬x2 ∨ ¬x3 ∨ x4)
ω4 = (¬x4 ∨ x5 ∨ x10)
ω5 = (¬x4 ∨ x6 ∨ x11)
ω6 = (¬x5 ∨ ¬x6)
ω7 = (x1 ∨ x7 ∨ ¬x12)
ω8 = (x1 ∨ x8)
ω9 = (¬x7 ∨ ¬x8 ∨ ¬x13)
Example Contd: Implication Graph
[Figure: the decision x1 = 1@6 implies x2 = 1@6 (ω1) and, with x9 = 0@1, x3 = 1@6 (ω2); x2 and x3 imply x4 = 1@6 (ω3); x4 with x10 = 0@3 implies x5 = 1@6 (ω4), and with x11 = 0@3 implies x6 = 1@6 (ω5); x5 and x6 conflict on ω6.]
Recorded conflict clause: ωc = (¬x1 ∨ x9 ∨ x10 ∨ x11)
Example Contd….
[Figure: after adding ωc, the clause implies x1 = 0@6; ω8 then implies x8 = 1@6, ω7 with x12 = 1@2 implies x7 = 1@6, and ω9 with x13 = 1@2 yields a new conflict, so the solver backtracks non-chronologically to decision level 3.]
Variable Ordering
- Significantly impacts the size of the search tree
- Ordering schemes can be static or dynamic
- Conventional wisdom (pre-Chaff):
  - Satisfy the most clauses, OR maximize BCP
  - e.g. DLIS, MOMs, BOHMs etc.
Variable Ordering: New Ideas
- New wisdom: recorded clauses are key in guiding the search
- Conflict-driven variable ordering:
  - Chaff (DAC'01): pick the variable appearing in the most recent conflict clauses
  - BerkMin (DATE'02): pick the variable involved in the most recent conflicts
- Semi-static in nature, for efficiency
  - Statistics updated on each conflict
  - Side-effect: better cache behavior
Efficient Boolean Constraint Propagation
- Observation: BCP is almost 80% of compute time under clause recording
  - Naive scheme: each clause keeps a counter of literals set to false; an assignment to variable 'x' updates all clauses containing x or ¬x
- New idea: only the event where the number of free literals in a clause goes from 2 to 1 needs monitoring
  - Need to watch only 2 literals per clause: SATO (Zhang '97), Chaff (DAC'01)
[Figure: a clause x1 … xk with two watched literals w1 and w2.]
SAT Solvers Today
- Capacity:
  - Formulas with up to a million variables and 3-4 million clauses can be solved in a few hours
  - Only for structured instances, e.g. derived from real-world circuits & systems
- Tool offerings:
  - Public domain
    - GRASP: Univ. of Michigan
    - SATO: Univ. of Iowa
    - zChaff: Princeton University
  - Commercial
    - PROVER: Prover Technologies
Section Outline
- Introduction to Model Checking
- Engines for model checking
  - SAT Solvers
  - Binary Decision Diagrams (BDDs)
- Symbolic Model Checking with BDDs
- Model Checking using SAT
Decision Structures
Truth table for f(x1, x2, x3) = (x1 ∨ x2) ∧ x3:
x1 x2 x3 | f
 0  0  0 | 0
 0  0  1 | 0
 0  1  0 | 0
 0  1  1 | 1
 1  0  0 | 0
 1  0  1 | 1
 1  1  0 | 0
 1  1  1 | 1
[Figure: the corresponding decision tree, branching on x1, then x2, then x3.]
- A vertex represents a decision
- Follow the green (dashed) line for value 0
- Follow the red (solid) line for value 1
- The function value is determined by the leaf value
Source: Prof. Randal Bryant
Variable Ordering
- Assign an arbitrary total ordering to the variables
  - e.g. x1 < x2 < x3
- Variables must appear in ascending order along all paths
[Figure: OK: x1 above x2 and x3. Not OK: x1 tested below x3, or x1 tested twice on a path.]
- Properties
  - No conflicting variable assignments along a path
  - Simplifies manipulation
Reducing OBDDs: Rule #1
Eliminate duplicate terminals.
[Figure: the graph over x1, x2, x3 initially has one 0/1 leaf per path; after the rule, all paths share a single 0 terminal and a single 1 terminal.]
Reducing OBDDs: Rule #2
Merge isomorphic nodes, i.e. nodes that reference the same variable and point to the same successors.
[Figure: two nodes testing the same variable a with identical successors b and c are merged; in the x1, x2, x3 example, duplicated x3 subgraphs collapse into shared nodes.]
Reducing OBDDs: Rule #3
Eliminate a node if its 0 and 1 edges lead to the same node.
[Figure: a node a whose 0- and 1-edges both reach the same node b is bypassed; in the x1, x2, x3 example, a redundant x2 test disappears.]
Example OBDD
(x1 + x2) · x3
[Figure: the initial graph and the reduced graph, which has a single node for each of x1, x2, and x3.]
- Canonical representation of a Boolean function
  - For a given variable ordering
- Two functions are equivalent if and only if their graphs are isomorphic
  - Can be tested in linear time
- Desirable property: the simplest form is canonical.
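Hash-consing gives the reduction rules almost for free (a minimal sketch of ROBDD construction, not a production package such as CUDD): Rule #3 is the `lo == hi` test, Rule #2 is the unique table, and shared 0/1 terminals cover Rule #1.

```python
def robdd(f, order):
    """Build an ROBDD bottom-up. mk() applies Rule #3 (skip a node whose
    0- and 1-edges agree) and Rule #2 (share isomorphic nodes via the
    unique table); integer terminals 0/1 are shared by construction."""
    unique = {}
    def mk(var, lo, hi):
        if lo == hi:
            return lo
        return unique.setdefault((var, lo, hi), (var, lo, hi))
    def build(env, i):
        if i == len(order):
            return f(env)               # terminal value: 0 or 1
        v = order[i]
        return mk(v, build({**env, v: 0}, i + 1), build({**env, v: 1}, i + 1))
    return build({}, 0), len(unique)

f = lambda e: (e["x1"] | e["x2"]) & e["x3"]       # (x1 + x2) . x3
root, nodes = robdd(f, ["x1", "x2", "x3"])
print(nodes)  # 3: one shared node per variable for this ordering
```

This exhaustive construction is exponential in the number of variables; real packages instead build ROBDDs compositionally with the If-Then-Else operation described later in this section.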
Effect of Variable Ordering
(a1 ∧ b1) ∨ (a2 ∧ b2) ∨ (a3 ∧ b3)
- Good ordering (a1 < b1 < a2 < b2 < a3 < b3): linear growth
- Bad ordering (a1 < a2 < a3 < b1 < b2 < b3): exponential growth
[Figure: the two OBDDs; the bad ordering must remember all of a1, a2, a3 before testing any bi.]
ROBDD Sizes & Variable Ordering
- Finding an optimal variable ordering is NP-hard
- Some functions have exponential BDD size for all orders, e.g. multipliers
- Good news
  - Many functions/tasks have reasonable-size ROBDDs
  - Algorithms remain practical up to 500,000-node OBDDs
  - Heuristic ordering methods are generally satisfactory
- What works in practice
  - Application-specific heuristics, e.g. DFS-based ordering for combinational circuits
  - Dynamic ordering based on variable sifting (R. Rudell)
Variants of Decision Diagrams
- Multiterminal BDDs (MTBDD): pseudo-Boolean functions Bⁿ → N, terminal nodes are integers
- Ordered Kronecker Functional Decision Diagrams (OKFDD): uses XOR in OBDDs
- Binary Moment Diagrams (BMD): good for arithmetic operations and word-level representation
- Zero-suppressed BDDs (ZDD): good for representing sparse sets
- Partitioned OBDDs (POBDD): a highly compact representation which retains most of the features of ROBDDs
- BDD packages
  - CUDD from Univ. of Colorado, Boulder
  - CMU BDD package from Carnegie Mellon Univ.
  - In addition, companies like Intel, Fujitsu, Motorola etc. have their own internal BDD packages
Section Outline
- Introduction to Model Checking
- Engines for model checking
  - SAT Solvers
  - Binary Decision Diagrams (BDDs)
- Symbolic Model Checking with BDDs
- Model Checking using SAT
Symbolic Manipulation with OBDDs
- Strategy
  - Represent data as a set of OBDDs with identical variable orderings
  - Express the solution method as a sequence of symbolic operations
  - Implement each operation by OBDD manipulation
- Key algorithmic properties
  - Arguments: OBDDs with identical variable orders
  - Result: an OBDD with the same ordering
  - Each step has polynomial complexity
If-Then-Else Operation
- Concept: the basic technique for building an OBDD from a logic network or formula
- Arguments I, T, E
  - Functions over variables X, represented as OBDDs
  - I → T, E behaves as a multiplexer: T when I = 1, E when I = 0
- Result: an OBDD representing the composite function (I ∧ T) ∨ (¬I ∧ E)
If-Then-Else Execution Example
[Figure: recursive If-Then-Else calls over argument OBDDs I (nodes A1 … A6), T, and E (nodes B1 … B5); the call (A1, B1) expands into (A2, B2), (A6, B2), (A6, B5), and so on down to terminal pairs.]
Optimizations:
- Dynamic programming
- Early termination rules
- Apply the reduction rules bottom-up while returning from the recursive calls
  - (The recursive calling structure implicitly defines the unreduced BDD)
Derived Algebraic Operations
- Other operations can be expressed in terms of If-Then-Else
  - And(F, G) = If-Then-Else(F, G, 0)
  - Or(F, G) = If-Then-Else(F, 1, G)
Generating OBDD from Network
Represent the output functions of a gate network as OBDDs.
Network: T1 = And(A, B), T2 = And(B, C), Out = Or(T1, T2)
Evaluation:
  A ← new_var("a")
  B ← new_var("b")
  C ← new_var("c")
  T1 ← And(A, B)
  T2 ← And(B, C)
  Out ← Or(T1, T2)
[Figure: the resulting graphs for A, B, C, T1, T2, and Out over the variables a, b, c.]
Characteristic Functions
- Concept
  - A ⊆ {0,1}ⁿ: a set of bit vectors of length n
  - Represent the set A as a Boolean function χA of n variables
  - X ∈ A if and only if χA(X) = 1
- Set operations
  - Union: χA ∨ χB
  - Intersection: χA ∧ χB
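Characteristic functions and their set operations map directly onto Boolean code (an illustrative sketch; the two example sets are assumptions):

```python
from itertools import product

# x is in the set iff its characteristic function returns True.
chi_a = lambda x: x[0] == 1          # A: bit vectors whose first bit is 1
chi_b = lambda x: x[1] == 1          # B: bit vectors whose second bit is 1
union = lambda x: chi_a(x) or chi_b(x)     # characteristic function of A ∪ B
inter = lambda x: chi_a(x) and chi_b(x)    # characteristic function of A ∩ B

vectors = list(product((0, 1), repeat=2))
print([v for v in vectors if inter(v)])  # [(1, 1)]
print([v for v in vectors if union(v)])  # [(0, 1), (1, 0), (1, 1)]
```

In a symbolic model checker the characteristic functions are stored as OBDDs, so union and intersection become OBDD Or and And, never an enumeration like the one above.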
Symbolic FSM Representation
- Represent the set of transitions as a function δ(Old, New)
  - o1, o2 encode the old state; n1, n2 encode the new state
  - δ yields 1 if the machine can have a transition from state Old to state New
- Represent δ as a Boolean function over the variables encoding the states
[Figure: a nondeterministic FSM over the encoded states 00, 01, 10, 11 and its symbolic representation.]
Reachability Analysis
- Compute the set of states reachable from the initial state set Q0
- Represent it as a Boolean function R(s)
- Never enumerate states explicitly
- Given the transition relation δ(old, new), compute R iteratively, starting from R0 = Q0
[Figure: successive frontiers R0 ⊆ R1 ⊆ R2 ⊆ R3 over the encoded states 00, 01, 10, 11.]
- Ri: the set of states that can be reached in at most i transitions
- A fixed point is reached when Rn = Rn+1
  - Guaranteed, since the system is finite state
Iterative Computation
ALGORITHM:
  R0 ← Q0
  do
    R_{i+1}(s) ← R_i(s) ∨ ∃s'. [ R_i(s') ∧ δ(s', s) ]
    i ← i + 1
  until R_i = R_{i-1}
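The same iteration, run explicitly rather than over BDDs (a sketch; the saturating-counter transition relation is an assumed example):

```python
def reachable(q0, delta, states):
    """Iterate R_{i+1}(s) = R_i(s) or (exists s' in R_i with delta(s', s))
    until a fixed point is reached."""
    r = set(q0)
    while True:
        image = {s for s in states if any(delta(sp, s) for sp in r)}
        if image <= r:
            return r                   # R_i = R_{i-1}: fixed point
        r |= image

# Example: a 2-bit counter that saturates at 3.
states = {0, 1, 2, 3}
delta = lambda s, t: t == min(s + 1, 3)
print(sorted(reachable({0}, delta, states)))  # [0, 1, 2, 3]
```

Termination is guaranteed for the same reason as in the symbolic algorithm: r only grows, and the state set is finite.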
BDD-based MC: Current Status
- Symbolic model checkers can analyze sequential circuits with ~200-400 flip-flops
  - For specific circuit types, larger state spaces have been analyzed
- Challenges
  - Memory/runtime bottlenecks
  - Adoption of TLs for property specification
- The frontier is constantly being pushed
  - Abstraction & approximation techniques
  - Symmetry reduction
  - Compositional reasoning
Section Outline
- Introduction to Model Checking
- Engines for model checking
  - SAT Solvers
  - Binary Decision Diagrams (BDDs)
- Symbolic Model Checking with BDDs
- Model Checking using SAT
Solving Circuit Problems as SAT
[Figure: a gate-level circuit with inputs a … e, internal nets f, g, h, and primary output i.]
Is there an input vector assignment that sets primary output i to 1?
SAT Formulas for Simple Gates
- AND gate c = a ∧ b: (¬c ∨ a)(¬c ∨ b)(c ∨ ¬a ∨ ¬b)
- NOT gate b = ¬a: (¬a ∨ ¬b)(a ∨ b)
- OR gate c = a ∨ b: (c ∨ ¬a)(c ∨ ¬b)(¬c ∨ a ∨ b)
Solving Circuit Problems as SAT
- A set of clauses represents the function of each gate
- A unit literal clause asserts the output to '1'
For the example circuit (with f = b ∧ c, g = d ∧ e, h = a ∨ f, i = h ∨ g):
  (b ∨ ¬f)(c ∨ ¬f)(¬b ∨ ¬c ∨ f)
  (d ∨ ¬g)(e ∨ ¬g)(¬d ∨ ¬e ∨ g)
  (¬a ∨ h)(¬f ∨ h)(a ∨ f ∨ ¬h)
  (¬h ∨ i)(¬g ∨ i)(h ∨ g ∨ ¬i)
  (i)
SAT-based Unbounded MC
- SAT-based inductive reasoning
- Exact image computation
  - Use the SAT solver just for fixed-point detection
  - Image computation using a SAT solver
    - McMillan CAV'02, Kang & Park DAC'02
- Approximate image computation
  - Using Craig interpolants (McMillan CAV'03)
  - Abstraction refinement
    - Discussed later in the tutorial
SAT-Based Reasoning on ILAs
- Unroll the sequential circuit (combinational logic CL with present state PS, next state NS, primary inputs PI, primary outputs PO) for k steps into an iterative logic array
- Popularized by the BMC work (Biere et al)
- Current SAT solvers have shown good results
[Figure: k copies CL1 … CLk chained so that NSi feeds PSi+1, each with its own PIi and POi.]
SAT-Based Inductive Reasoning
- Simple induction:
  - Base case: P holds in the initial state I0 (the SAT check of ¬P on the first cycle is UNSAT)
  - Induction step: P in one cycle (CLk) implies P in the next (CLk+1), i.e. the check P ∧ ¬P' is UNSAT
  - If both checks are UNSAT, AG P is TRUE
- Induction with depth 'k' (Sheeran et al, Bjesse et al FMCAD 2000):
  - Base case: P holds in the first k cycles (CL1 … CLk) from I0
  - Induction step: P holding in k consecutive cycles (CL1 … CLk) implies P in cycle k+1
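Simple (depth-1) induction can be prototyped by brute force on a small explicit model (a sketch; the toy counter and properties are assumptions, and a SAT solver replaces the enumeration in practice):

```python
def holds_by_induction(states, init, step, prop):
    """Simple induction: P holds in every initial state (base case), and
    every transition from a P-state lands in a P-state (induction step)."""
    base = all(prop(s) for s in init)
    induct = all(prop(t) for s in states if prop(s) for t in step(s))
    return base and induct

# Toy model: a counter mod 4 starting at 0.
states = range(4)
step = lambda s: [(s + 1) % 4]
print(holds_by_induction(states, [0], step, lambda s: s < 4))    # True
print(holds_by_induction(states, [0], step, lambda s: s != 2))   # False
```

Note that the induction step quantifies over all P-states, reachable or not, which is exactly why induction can fail for properties that are in fact invariant, as the next slide discusses.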
SAT-based Induction
- Can prove or disprove properties
- k-depth induction:
  - Can check more properties for larger k
  - The base case is simply k-length BMC
- But not complete!
  - Example: a property P may not be k-depth inductive for any k, when the paths violating the induction step start in unreachable state-space
SAT-based Inductive Reasoning
- Unique-paths induction:
  - Restrict the unrolled ILA to be loop-free
  - Sheeran et al, Bjesse et al FMCAD'00
  - Unroll up to the recurrence diameter, i.e. the length of the longest loop-free path
  - Complete!
- Summary:
  - Stronger induction methods can be expensive
  - But some properties are very easy to prove by induction and very hard by state-space traversal
  - Simple induction can be cheap: a good first step in a verification flow
SAT-based Image Computation
ALGORITHM:
  i ← 0
  R0(s) ← Init
  do
    R_{i+1}(s') ← toProp( ∃s. Tr(s, s') ∧ R_i(s) )
    i ← i + 1
  until R_i(s) ∧ ¬P(s) is satisfiable (bug found!)
     or R_i(s) ⊆ ⋁_{k<i} R_k(s) (fixed point)
- Issues:
  - Function representation
  - Quantification
- The SAT solver performs both the bug check and the fixed-point detection
Function Representation
- Representation of: Init, Ri(s), P(s), Tr(s, s')
- Converted to CNF for SAT solving
- Candidate representations:
  - Reduced Boolean Circuit (RBC): Abdulla et al, TACAS 2000
  - Boolean Expression Diagram (BED): Williams et al, CAV 2000
[Figure: example RBC and BED graphs over variables a, b, x, y, z.]
Quantification
- Apply the definition of existential quantification:
  ∃x. φ(x) = φ(x ← 0) ∨ φ(x ← 1)
- Mitigate formula blow-up by:
  - Inherent reduction rules of the function representation
  - Special-case quantification rules:
    - Inlining: ∃x. (x ≡ α) ∧ φ(x) = φ(α), where x ∉ Vars(α)
    - Scope reduction: ∃x. φ(x) ∧ ψ = (∃x. φ(x)) ∧ ψ, where x ∉ Vars(ψ)
      ∃x. φ(x) ∨ ψ(x) = (∃x. φ(x)) ∨ (∃x. ψ(x))
- Limitation: suitable only when a small number of variables must be quantified
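The basic quantification rule is one line over function-valued formulas (a sketch; representing formulas as Python predicates over an environment dict is an assumption):

```python
def exists(phi, var):
    """exists x. phi = phi(x<-0) or phi(x<-1), by cofactoring."""
    return lambda env: phi({**env, var: False}) or phi({**env, var: True})

# phi = (x and y) or (not x and not y); quantifying x away leaves 'true',
# since for either value of y some value of x satisfies phi.
phi = lambda e: (e["x"] and e["y"]) or (not e["x"] and not e["y"])
q = exists(phi, "x")
print(q({"y": True}), q({"y": False}))  # True True
```

Each application doubles the formula, which is exactly the blow-up the inlining and scope-reduction rules above try to avoid.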
Exact Image Computation Using SAT
McMillan, CAV 2002
CTL model checking of AG(p) by a backward fixpoint:
  procedure AG(p)
    let Z = Q = ¬p
    while Z ≠ 0
      let Z = (∃W. Tr ∧ Q)⟨pi/si⟩ ∧ ¬Q
      let Q = Q ∨ Z
    return ¬Q
Solving ∃W. f:
- Express f in CNF: fCNF
- Drop all wi literals from fCNF
Core problem: express a given formula f in CNF.
Basic algorithm:
- Create CNF(f), introducing auxiliary variables and an output variable vf
- Solve SAT(CNF(f) ∧ ¬vf)
- Compute a blocking clause for each satisfying assignment
- Collect all blocking clauses: their conjunction is fCNF
Blocking clauses must:
- Contain only the original variables of f
- Be false under some satisfying assignment of CNF(f) ∧ ¬vf
- Be implied by CNF(f) ∧ vf
e.g. f = (a ∧ b) ∨ c, with auxiliary variables vab and vf in CNF(f):
Blocking clauses: (b ∨ c), (a ∨ c); fCNF = (b ∨ c)(a ∨ c)
Some Issues
- Blocking clauses are expressed in terms of the original variables of f
  - Extracted from a separate implication graph
- Quantification of the W variables can be performed on each generated blocking clause: combine both steps into one!!
- Conclusions:
  - The technique is promising where BDDs blow up
  - BDDs remain superior as a general-purpose technique
  - The proposed technique needs more polishing
Tutorial Outline
- Introduction
- Design flow and verification issues
- Theorem proving and Decision procedures
- Formal verification techniques for FSM models
  - model checking and formal engines
  - model checking in practice
  - equivalence checking
- Semi-formal verification techniques
- Simulation based verification techniques
- Conclusions and future directions
Model Checking in Practice
Fujitsu Labs. of America
Sunnyvale, California
Section Outline
- Enhancing Formal Verification Capacity
  - Abstraction-refinement
  - Assume-guarantee/Compositional Reasoning
  - Approximation
  - Symmetry reduction
  - Partial Order Reduction
- Formal Verification: Case Study
- Industrial and Research Tool Offerings
Abstraction
- Limitations of symbolic model checking
  - Suffers from state explosion
    - Concurrency
    - Data domains
- Abstraction of the state space
  - Represent the state space using a smaller model
  - Pay attention to preserving the checked properties
  - Do not affect the flow of control
Example
- Use smaller data objects
- HDL code (X, Y never used again):
  X := f(m);
  Y := g(n);
  if (X*Y > 0)
  then …
  else …
- Assign values {-1, 0, 1} to X and Y, based on the following connection:
  sgn(X) = 1 if X > 0, 0 if X = 0, and -1 if X < 0
  sgn(X) * sgn(Y) = sgn(X*Y)
- Change f and g to produce abstract values for X and Y
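The soundness of this sign abstraction is easy to check exhaustively over sample values (a sketch; sgn is the function defined on the slide, the sample points are assumptions):

```python
def sgn(v):
    """Maps any number to its abstract value in {-1, 0, 1}."""
    return (v > 0) - (v < 0)

# The abstract product agrees with the sign of the concrete product, so the
# abstract program takes the same branch of 'if (X*Y > 0)' as the original.
ok = all(sgn(x) * sgn(y) == sgn(x * y)
         for x in (-7, -1, 0, 3) for y in (-2, 0, 1, 5))
print(ok)  # True
```

The three-valued domain {-1, 0, 1} thus replaces the full integer domain without changing the control flow, which is exactly what this abstraction must preserve.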
Abstraction vs. Simplification
- Not every simplified system is an abstraction
- The key question is:
  - If we prove or disprove a property of the simplified system, what have we learned about the original?
Abstraction (Cont)
- The abstract transition relation is conservative
  - Abstract next states must contain all concrete successors
  - And possibly more states
[Figure: a concrete transition x → y in RC maps, through the abstraction function g, into an abstract transition s → t in RA, which over-approximates RC.]
Abstract Counter-Example
- If model checking fails, an abstract counterexample is produced
- Concrete transitions are present for each pair of consecutive abstract states
- But a concrete counterexample may not be present!
[Figure: a spurious abstract trace whose concrete fragments x → y' and y'' → z do not connect into a real trace.]
Abstraction Refined
- The abstraction is refined by adding more states
  - this prevents the previous spurious counterexample
- Model checking is repeated on the new abstraction
  - iterate
[Figure: after refinement, the abstract trace through g1 … gi corresponds to a real concrete trace x → … → z.]
Abstraction Refinement Algorithm
Build abstract model → model check the abstract model:
- true: the property is true on the concrete model
- abstract counterexample: check the abstract counterexample on the concrete model
  - real counterexample: the property is false on the concrete model
  - spurious counterexample: refine the abstraction and repeat
  - infeasible to check: abort
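The loop above can be made concrete on a tiny explicit-state graph (all names hypothetical): the abstraction is a partition of concrete states into blocks, and a spurious abstract counter-example is eliminated by splitting the block where the concrete simulation breaks down.

```python
# Minimal CEGAR sketch over an explicit-state graph.
from collections import deque

def cegar(edges, init, bad, blocks):
    """Return ("safe", partition) or ("unsafe", abstract_path)."""
    while True:
        block_of = {s: b for b in blocks for s in b}
        a_edges = {(block_of[u], block_of[v]) for (u, v) in edges}
        # Model check the abstract model: BFS for a path to a bad block.
        start = block_of[init]
        parent, q, target = {start: None}, deque([start]), None
        while q:
            b = q.popleft()
            if b & bad:
                target = b
                break
            for (u, v) in a_edges:
                if u == b and v not in parent:
                    parent[v] = b
                    q.append(v)
        if target is None:
            return "safe", blocks           # true on the concrete model
        path, b = [], target                # abstract counterexample
        while b is not None:
            path.append(b)
            b = parent[b]
        path.reverse()
        # Check the abstract counterexample on the concrete model.
        reach, feasible = {init}, True
        for i in range(1, len(path)):
            step = {v for (u, v) in edges if u in reach and v in path[i]}
            if not step:                    # spurious: refine and iterate
                feasible = False
                fail = path[i - 1]
                keep = frozenset(s for s in fail
                                 if any((s, v) in edges for v in path[i]))
                blocks = [blk for blk in blocks if blk != fail]
                blocks += [blk for blk in (keep, fail - keep) if blk]
                break
            reach = step
        if feasible:
            return "unsafe", path           # false on the concrete model

# d is unreachable from a, but merging {b, c} makes it look reachable.
edges = {("a", "b"), ("c", "d")}
blocks = [frozenset("a"), frozenset({"b", "c"}), frozenset("d")]
verdict, _ = cegar(edges, "a", frozenset("d"), blocks)
assert verdict == "safe"                    # one refinement splits {b, c}
```

Each refinement splits the failing block into states that do and do not have a successor in the next block of the spurious trace, so the same spurious counter-example cannot recur.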
Assume-Guarantee Reasoning
To prove that the composition P ∥ Q refines P' ∥ Q', discharge two simpler obligations:
P ∥ Q' refines P'
P' ∥ Q refines Q'
Used for:


Compositional reasoning
In a refinement checking
setting
AG Reasoning: Example
P': true → b' := true
Q': true → a' := true

P: a' → b' := true; ¬a' → b' := false
Q: b' → a' := true; ¬b' → a' := false

Assume all variables are initialized to true
AG Reasoning: Example
Obligation 1: P (a' → b' := true; ¬a' → b' := false) composed with Q' (true → a' := true) must refine P' (true → b' := true).

Obligation 2: Q (b' → a' := true; ¬b' → a' := false) composed with P' (true → b' := true) must refine Q' (true → a' := true).

Assume all variables are initialized to true
Symmetry Reductions
Idea: If the state space is symmetric, explore only a symmetric “quotient” of the state space.

A permutation function f : S → S is an automorphism if:
(x, z) ∈ R ⇔ (f(x), f(z)) ∈ R

e.g. f: f(0,0) = (1,1), f(1,1) = (0,0), f(0,1) = (0,1), f(1,0) = (1,0)

The set of all automorphisms forms a group!
Automorphism groups induce equivalence classes (orbits) over the state space,
e.g. [ (0,0), (1,1) ] and [ (0,1), (1,0) ]
Quotient Machine
(Figure: the four states 0,0 / 0,1 / 1,0 / 1,1 collapse to their orbit representatives [ (0,0), (1,1) ] and [ (0,1), (1,0) ].)

Map each state to its representative in the orbit: the quotient machine.
Do model checking analysis on the quotient machine.
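Orbit computation for the 2-bit example can be sketched directly (the generator functions are assumptions for illustration): `f` is the permutation from the slide, and the coordinate swap is taken as a second automorphism so that the two orbits of the example emerge. Each state is then mapped to a canonical representative for the quotient machine.

```python
# Compute orbits of states under a set of automorphism generators.
from itertools import product

def orbit(state, automorphisms):
    """Close `state` under the group generated by the given permutations."""
    seen, frontier = {state}, [state]
    while frontier:
        s = frontier.pop()
        for f in automorphisms:
            t = f(s)
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return frozenset(seen)

# The permutation from the slide, plus the coordinate swap (assumed to be
# an automorphism of this example's transition relation).
f = lambda s: {(0, 0): (1, 1), (1, 1): (0, 0),
               (0, 1): (0, 1), (1, 0): (1, 0)}[s]
swap = lambda s: (s[1], s[0])

states = list(product((0, 1), repeat=2))
orbits = {orbit(s, [f, swap]) for s in states}
rep = {s: min(orbit(s, [f, swap])) for s in states}   # canonical element

assert orbits == {frozenset({(0, 0), (1, 1)}), frozenset({(0, 1), (1, 0)})}
assert rep[(1, 1)] == (0, 0) and rep[(1, 0)] == (0, 1)
```

Model checking then runs on the two representatives instead of the four states.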
Symmetry Reductions

Difficulty
- Identifying symmetry groups
- Computing the function mapping states to representatives

Solutions
- Ask the user to tell you where the symmetries are
- Be satisfied with multiple representatives for each orbit

Tools: Murφ, Nitpick, RuleBase
Over-Approximate Reachability

Exact reachability captures the exact reachable state-space
- Problem: BDD blow-up in many real designs
- Solution: compute an over-approximation R+ of the reachable states R

(Figure: from the initial states I0, each approximate frontier R1+, R2+ contains the exact frontier R1, R2.)

- An error found could be a false negative
- Often sufficient to prove properties correct
- Trade off runtime with accuracy of approximation
Over-approx. Reachability Techniques

- Split the FSM into sub-machines by partitioning the state variables
- Perform (exact) reachability on each sub-machine
- The cross-product of the reached state spaces gives an over-approximation for the original FSM

(Figure: the latches and combinational logic CL, with PI/PS inputs and PO/NS outputs, are cut into independent sub-machines.)
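A tiny sketch of the partitioning idea, on a hypothetical 2-bit FSM (x' = y, y' = x AND input): each state bit is analyzed as its own sub-machine with the other bit treated as a free input, and the cross product of the two reached sets over-approximates the exact reachable set.

```python
def reach(init, step):
    """Exact least-fixpoint reachability for one (sub-)machine."""
    states, frontier = {init}, [init]
    while frontier:
        s = frontier.pop()
        for n in step(s):
            if n not in states:
                states.add(n)
                frontier.append(n)
    return states

# Full machine: state (x, y), input i; x' = y, y' = x & i.
def full_step(s):
    x, y = s
    return {(y, x & i) for i in (0, 1)}

# Sub-machine for x: y is treated as a free input (abstracted away).
rx = reach(0, lambda x: {y for y in (0, 1)})
# Sub-machine for y: x is treated as a free input.
ry = reach(0, lambda y: {x & i for x in (0, 1) for i in (0, 1)})

exact = reach((0, 0), full_step)
over = {(x, y) for x in rx for y in ry}   # cross product of projections

assert exact <= over                      # over-approx. contains exact set
assert exact == {(0, 0)} and len(over) == 4
```

Here the approximation is loose (4 states vs. 1): communicating constraints between sub-FSMs, as the next slide discusses, tightens it.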
Over-approx: More Issues

Communicate between sub-FSMs to
tighten approximation



After each time-frame
After least fixpoint computation for a sub-FSM
Computing partitions



Heuristic and/or manual
Disjoint partitions (Cho et al)
Overlapping partitions (Govindaraju et al)
 Can capture limited interaction among sub FSMs
 Tighter approximation
 May not increase BDD sizes much
Partial Order Reduction

Factor out independent state transitions!
(typical for asynchronous communication
as in Software- and Protocol-Checking)
 May result in exponential reduction
in the number of states
Process
A
transition
Process
B
transition
One Relevant Trace of States
Synchronization
Case Study: VGI
VGI = “Video-Graphics-Image”
 Designed by Infopad group at Berkeley
 Purpose: web-based image processing
 Designed using



VHDL (control)
Schematics (Data path)
Source: T. Henzinger, X. Liu, S. Qadeer and S. Rajamani,
“Formal Specification and Verification of a Dataflow Processor Array”
VGI Architecture

16 clusters with 6 processors in each - 4
compute, 1 memory, 1 I/O
 ~30K logic gates per processor
 Pipelined compute processors
 Low latency data transfer between
processors - complex control
VGI Architecture
(Figure: the ISA-level processors communicate through FIFO buffers; their pipelined implementations are connected by a complex handshake.)
Verification

Different time scales
 Implementation




two-phase clock
level-sensitive latches
activity on both HI and LO phases of clk
Specification

no clk signal
Sample Operator
I' = Sample I at φ
Runs of I' = runs of I sampled at the instances where φ holds
Sample Operator
(Figure: the pipeline implementations, sampled at clk, are compared against the ISA specifications.)
Difficulty - Verification

Size of the VGI chip
~800 latches in each compute
processor
 64 compute processors


Need “divide and conquer”
assume guarantee reasoning
 compositional reasoning
 refinement checking

Compositional Reasoning
(Figure: to show P ∥ Q refines P' ∥ Q', each implementation component is checked against its specification in the context of an abstraction of the other: P ∥ TQ against P' and TP ∥ Q against Q', where TP and TQ abstract P and Q.)
Mocha
Reactive Modules - language for
describing designs
 Assume-guarantee refinement
checking
“Mocha: modularity in model checking,” Alur, Henzinger, Mang, et al., CAV ’98
Network of Processors to Single Processor
(Figure: the network of ISA specifications and pipeline implementations, all clocked by clk, is reduced pairwise until a single ISA/pipeline pair remains.)
Single Processor
(Figure: a single pipeline, with abstraction TP and clock clk, checked against its ISA.)
Single processor still has ~800 latches
 Need “divide-and-conquer” again
Refinement Map
(Figure: refinement map. Implementation: input from upstream processor → FIFO buffer → REGFILE, OP GEN, PIPE, gate-level ALU, Comm Stage, clocked by clk. Spec: input from upstream processor → REGFILE → ALU, at the ISA level.)
Decompose Proof
(Figure: the proof is decomposed by inserting an abstract ALU output, AbsAluOut, between the gate-level pipeline — REGFILE, OP GEN, PIPE, ALU, FIFO buffer, Comm Stage, clocked by clk — and the ISA spec with REGFILE and ALU.)
Abstraction and Compositional
Reasoning
AbsAluOut:
stall → aluOutImpl' := aluOutImpl
¬stall → aluOutImpl' := aluOutSpec

AbsAluOut is much smaller than the gate-level ALU.

(Figure: AbsAluOut, driven by stall and aluOutSpec, replaces the gate-level ALU beside REGFILE and the pipe.)
Property: Communication Control
(Figure: the communication-control property is checked on the FIFO buffer and Comm Stage, with AbsAluOut and TP, clocked by clk.)
Refinement Checking
(Figure: refinement check of the gate-level pipeline — REGFILE, PIPE, ALU, FIFO buffer, Comm Stage, clocked by clk — against the ISA spec, using the abstractions AbsOps and AbsAluOut.)
Results

Circular assume-guarantee



AbsOps used to verify AbsAluOut
AbsAluOut used to verify AbsOps
All lemmas (except one) checked by
Mocha in a few minutes
 3 bugs in communication control found
and fixed
 Abstraction modules crucial - designer
insight needed
Model Checkers - Research Tools
 VIS (UC Berkeley)
 Mocha (UC Berkeley, compositional
reasoning)
 Verisoft (AT&T, C-model checker)
 Bandera (Java model checker)

Industrial Model Checkers





COSPAN (from Bell Labs) under the hood
Support for arbitrary precision arithmetic
Automatic bus-contention, multi-driver
violation, tri-state support for high-impedance
check
Powerful design reduction techniques
 Removal of portions of design that do not affect the
property being proven
 Constant & constraint propagation
 Iterative reduction guarantees that queries proven on
the reduced design hold on the entire design
EDA Vendor Tool Offerings

 IBM RuleBase
 @HDL - @Verifier





Real Intent Verix
- works on Verilog RTL
- static assertion checks
- automatic property extraction
- clock domain synchronization errors
- similar to @Verifier

Averant Solidify
Jasper Design Automation JasperGold
Tutorial Outline

Introduction

High level design flow and verification issues

Theorem proving and Decision procedures

Formal verification techniques for FSM models

model checking and formal engines

model checking in practice

equivalence checking

Semi-formal verification techniques

Simulation based verification techniques

Conclusions and future directions
Equivalence Checking
Fujitsu Labs. of America
Sunnyvale, California
Formal Equivalence Checking
Given two designs, prove that for all possible input
stimuli their corresponding outputs are equivalent
(Figure: Design A and Design B are driven by the same input; the product machine compares their outputs: =? Yes/No.)
Formal Equivalence Checking

Equivalence checking can be applied at or
across various levels
System Level — RTL — Gate Level — Device Level
Combinational Equivalence Checking (CEC)






Currently most practical and pervasive
equivalence checking technology
Nearly full automation possible
Designs of up to several million gates
verified in a few hours or minutes
Hierarchical verification deployed
Full chip verification possible
Key methodology: Convert sequential
equivalence checking to a CEC problem!

Match Latches & extract comb. portions for EC
CEC in Today’s ASIC Design Flow
RTL Design → Synthesis & optimization → DFT insertion → IO insertion → Placement → Clock tree synthesis → Routing → ECO, with a CEC run after each step.
Solving Seq. EC as Comb. EC

(Figure: Circuit 1 and Circuit 2 each consist of PI/PS feeding combinational logic that produces PO/NS, with latches in between. After matching the latches, combinational equivalence checking compares the two combinational blocks over the shared PI and PS: PO =? PO and NS =? NS.)
Methods for Latch Mapping

Incomplete Methods


Regular expression-based using latch names
Using simulation (Cho & Pixley ‘97):
 Group latches with identical simulation signatures



Group based on structural considerations e.g.
cone of influence
Incomplete run of complete method below
(Anastasakis et al DAC ‘02)
Complete Methods

Functional fixed-point iteration based on Van
Eijk’s algorithm (van Eijk ’95)
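The simulation-signature heuristic can be sketched as follows. The netlist and `simulate` function are hypothetical stand-ins for a real logic simulator: latches whose values agree on every applied vector land in the same candidate group.

```python
# Group latches by their simulation signatures (Cho & Pixley style).
from collections import defaultdict

def signature_groups(latches, simulate, vectors):
    """`simulate(vector)` returns a dict latch-name -> 0/1; latches with
    identical value sequences across all vectors share a signature."""
    runs = [simulate(v) for v in vectors]
    sig = defaultdict(list)
    for latch in latches:
        key = tuple(run[latch] for run in runs)   # the latch's signature
        sig[key].append(latch)
    return list(sig.values())

# Hypothetical example: a1/b1 compute the same function in the two
# netlists, a2/b2 likewise, and c matches nothing.
def simulate(v):
    x, y = v
    return {"a1": x & y, "b1": x & y, "a2": x | y, "b2": x | y, "c": x ^ y}

vectors = [(0, 0), (0, 1), (1, 0), (1, 1)]
groups = signature_groups(["a1", "b1", "a2", "b2", "c"], simulate, vectors)
assert sorted(map(sorted, groups)) == [["a1", "b1"], ["a2", "b2"], ["c"]]
```

The method is incomplete: a shared signature only suggests a mapping, which still has to be confirmed (e.g. by van Eijk's fixed-point check).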
Van Eijk’s Method for Latch Mapping
Van Eijk's fixed-point iteration:
1. Start with an initial latch-mapping approximation.
2. Apply the latch-mapping assumptions to the common PI/PS inputs of Comb. Logic 1 and Comb. Logic 2.
3. Verify the latch-mapping assumptions by checking NS =? NS.
4. Fixed point reached? If no, iterate with the refined mapping; if yes, done!!
CEC in Practice
Key Observation: The circuits being verified usually
have a number of internal equivalent functions
To prove f(i1, i2, …, in) = f'(i1, i2, …, in):

check the internal equivalences
x1(i1, i2, …, in) = x1'(i1, i2, …, in)
x2(i1, i2, …, in) = x2'(i1, i2, …, in)
…
xk(i1, i2, …, in) = xk'(i1, i2, …, in)
and then
f(x1, x2, …, xk) = f'(x1', x2', …, xk')
Internal Equiv. Based CEC Algorithm
(Setting: the combinational miter after latch mapping — PI/PS driving Comb. Logic 1 and Comb. Logic 2, whose PO/NS outputs are compared.)

1. Random simulation; gather PENs (potentially equivalent nodes, i.e. nodes with identical simulation signatures).
2. Is there an unjustified PEN pair x, y? If no: done!
3. If yes, check whether x = y. If the check fails, refine the PEN sets using the counter-example and go to 2. If it succeeds, structurally merge x and y and go to 2.
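The two-step use of an internal equivalence can be illustrated on a hypothetical pair of single-output cones (exhaustive truth-table evaluation stands in for the BDD/SAT/ATPG engines): first the internal pair is proved equal over the primary inputs, then the outputs are compared over the merged node's small remaining cone.

```python
# Internal-equivalence-based CEC sketch on a toy pair of cones.
from itertools import product

def equivalent(f, g, n):
    """Exhaustive check over n inputs — a stand-in for BDD/SAT engines."""
    return all(f(*v) == g(*v) for v in product((0, 1), repeat=n))

# Hypothetical netlists A and B, each exposing one internal node.
def x_a(i1, i2, i3): return i1 & i2          # internal node x in A
def out_a(x, i3):    return x | (i3 & x)     # A's output cone above x

def x_b(i1, i2, i3): return i2 & i1          # candidate equivalent node x'
def out_b(x, i3):    return x                # B's output cone above x'

# Step 1: prove the internal pair equivalent over the primary inputs.
assert equivalent(x_a, x_b, 3)
# Step 2: merge x and x'; the residual check ranges only over the shared
# node and the few remaining inputs, not the full input cones.
assert all(out_a(x, i3) == out_b(x, i3) for x in (0, 1) for i3 in (0, 1))
```

The payoff is exactly the slide's point: after merging, the final check is over a much smaller cone than proving f = f' from scratch.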
Anatomy of Typical CEC Tools
Circuit A and Circuit B → Quick Synthesis → Latch Mapper → core engines (structural methods, learning, BDD, SAT, ATPG) → Error Diagnosis Engine → Counter-example viewer
Major Industrial Offerings of CEC




Formality (Synopsys)
FormalPro (Mentor Graphics)
Typical capabilities of these tools:





Can handle circuits of up to several million gates flat
in up to a few hours of runtime
Comprehensive debug tool to pinpoint error-sources
Counter-example display & cross-link of RTL and gate-level netlists for easier debugging
Ability to checkpoint verification process and restart
from same point later
What if capability (unique to FormalPro)
Reachability-based Equivalence Checking
(Figure: M1, with reset state S0, and M2, with reset state S1, share the same inputs; their outputs are compared in the product machine.)
Build product machine of M1 and M2
Traverse state-space of product
machine starting from reset states
S0, S1
Test equivalence of outputs in each
state
Can use any state-space traversal
technique
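A minimal sketch of the product-machine traversal (the machines are hypothetical Mealy-style triples of reset state, next-state function, and output function): paired states are explored from the joint reset state, and outputs are compared under every input in every reachable product state.

```python
# Reachability-based sequential equivalence check on the product machine.
def equivalent(m1, m2, inputs):
    """Each machine is (reset, next_state_fn, output_fn)."""
    (r1, n1, o1), (r2, n2, o2) = m1, m2
    seen, frontier = {(r1, r2)}, [(r1, r2)]
    while frontier:
        a, b = frontier.pop()
        for i in inputs:
            if o1(a, i) != o2(b, i):
                return False        # outputs differ in a reachable state
            nxt = (n1(a, i), n2(b, i))
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True

# Two encodings of a toggle flip-flop: states {0, 1} vs. states {0, 3}.
m1 = (0, lambda s, i: s ^ i, lambda s, i: s)
m2 = (0, lambda s, i: (s + 3 * i) % 6, lambda s, i: 1 if s else 0)
assert equivalent(m1, m2, (0, 1))

# A deliberately buggy variant with an inverted output is caught.
m3 = (0, lambda s, i: s ^ i, lambda s, i: 1 - s)
assert not equivalent(m1, m3, (0, 1))
```

Because only reachable product states are compared, the two machines may use entirely different state encodings, which is exactly where latch-mapping CEC breaks down.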
Move to System-Level Design
System-level design
Manual effort
Architecture exploration
System-level design
Methodology!
RTL
Current design
iterations
Gate-level design
System-Level Verification
System-level design
Property & Model
Checking
Manual effort
RTL
Gate-level design
Equivalence
Checking
System-level Synthesis
System-level design
Manual effort
RTL
Gate-level design
Automatic
Synthesis
Scope of Today’s EC Technology
(Chart: today's combinational EC covers designs whose FFs match, with identical or bit-accurate data types. Beyond its scope lie sequential differences — re-encoding of state, scheduling, pipelining, serial vs. parallel interfaces — and data-representation differences — composite data types, precision/rounding differences.)
Upcoming: Transaction-Based EC

The states in RTL that correspond to states in the system-level model (SLM) are referred to as synchronizing states
Complete
Verification
SLM
Refinement mapping
or
Sequential
counterexample
RTL
Transient states
Transactions : State View
SL
transaction
SLM
Refinement mapping
RTL transaction


RTL
Encapsulates one or more units of
computation for the design being verified
Self-contained since it brings the machine
back to a synchronizing state
Equivalence Checking: Conclusions

Currently equivalence checking based on
latch mapping+CEC




Methodology premised on structural similarity
of designs under comparison
Efficient & scalable
May be applicable to system-level problems
More general sequential EC somewhat
less explored
 Might be needed for system level
equivalence checking
Tutorial Outline

Introduction

High level design flow and verification issues

Theorem proving and Decision procedures

Formal verification techniques for FSM models

Semi-formal verification techniques

Simulation based verification techniques

Conclusions and future directions
Semi-Formal Verification
Techniques
Fujitsu Labs. of America
Sunnyvale, California
The need for Semi-formal Verification
(Chart: coverage vs. design size, 2K to 2M gates — formal verification at the small end, FV-augmented simulation (falsification) in the middle, manual test with coverage and random simulation at the large end.)

FV
+ Complete coverage
- Limited capacity

FV Augmented Simulation
+ Good coverage
+ Good capacity

Simulation
+ Unlimited capacity
- Poor coverage
Section Outline
Symbolic Simulation
 Symbolic Trajectory evaluation (STE)
 SAT-based Bounded Model Checking
 Sequential ATPG-based model
checking
 Guided Search
 Smart Simulation

Symbolic Simulation

Conventional Simulation:

Input & outputs are constants (0,1,X,…)
(Example: constant inputs a = 1, b = 0, c = 1 propagate through the gates to a constant output of a·b + b·c.)

Problem: too many constant input combinations to simulate!!
Symbolic simulation:



Symbolic expressions used for inputs
Expressions propagated to compute outputs
Equivalent to multiple constant simulations !!
Ref: Prof. David Dill, CAV ’99
Symbolic Simulation (sequential case)
(Example: the circuit unrolled over two time frames. Time frame 1, with inputs a0, b0, c, computes a0·b0 + b0·c; time frame 2, with inputs a1, b1 and the previous result, computes a1·b1 + b1·(a0·b0 + b0·c).)
Symbolic Simulation
- Inputs can be constants
- Input expressions can be related
- Simulate a certain set of patterns
- Model signal correlations
- Can result in simpler output expressions

(Example: feeding constants and related expressions such as b and b+c yields the simpler output a·b + b·c.)
Symbolic Simulation

Use BDDs as the symbolic representation
 Work at gate and MOS transistor level
 Can exploit abstraction capabilities of ’X’
value




Can be used to model unknown/don’t care
values
Common use in representing uninitialized state
variables
Boolean functions extended to work with {0,1,X}
Two BDD (binary) variables used to represent
each symbolic variable
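The {0, 1, X} algebra mentioned above can be sketched directly. Real symbolic simulators encode each three-valued signal with two binary BDD variables; as a simplification this sketch just uses a sentinel value, so the dominance rules (0 dominates AND, 1 dominates OR) are easy to see.

```python
# Three-valued {0, 1, X} gate evaluation, with X as an unknown.
X = "X"

def t_and(a, b):
    if a == 0 or b == 0:
        return 0            # a known 0 forces the output regardless of X
    if a == 1 and b == 1:
        return 1
    return X                # otherwise the unknown propagates

def t_or(a, b):
    if a == 1 or b == 1:
        return 1            # a known 1 forces the output
    if a == 0 and b == 0:
        return 0
    return X

def t_not(a):
    return X if a == X else 1 - a

assert t_and(0, X) == 0     # X abstracted away by a controlling value
assert t_or(X, 1) == 1
assert t_and(1, X) == X     # X survives when no input controls the gate
assert t_not(X) == X
```

This is what makes X useful for uninitialized state: whole cones of logic collapse to definite values whenever a controlling input is known.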
Symbolic Simulation




Can handle larger designs than model checking
Can use a large variety of circuit models
Possibly more natural for non-formalists.
Amenable to partial verification.


Not good with state machines (possibly better
with data paths).
Does not support temporal logic
 Requires ingenuity to prove properties.
Practical Deployment

Systems:




COSMOS [Bryant et al], Voss [Seger et al, Intel]
Magellan [Synopsys]
Innologic
Exploiting hierarchy

Symbolically encode circuit structure
 Based on hierarchy in circuit description

Simulator operates directly on encoded circuit
 Use symbolic variables to encode both data values &
circuit structure


Implemented by Innologic, Synopsys (DAC ‘02)
Greatest success in memory verification
(Innologic)
High-level Symbolic Simulation





Data Types: Boolean, bitvectors, int, reals, arrays
Operations: logical, arithmetic, equality,
uninterpreted functions
Final expression contains variables and
operators
Coupled with Decision procedures to check
correctness of final expression
Final expressions can also be manually checked
for unexpected terms/variables, flagging errors
e.g. in JEM1 verification [Greve ‘98]
High-level Symbolic Simulation

Manipulation of symbolic expressions done
with



Rewrite systems like in PVS
Boolean and algebraic simplifiers along with
theories of linear inequalities, equalities and
uninterpreted functions
Extensively used along with decision
procedures in microprocessor verification




Pipelined processors: DLX
Superscalar processors: Torch (Stanford)
Retirement logic of Pentium Pro
Processors with out of order executions
Section Outline
Symbolic Simulation
 Symbolic Trajectory evaluation (STE)
 SAT-based Bounded Model Checking
 Sequential ATPG-based model
checking
 Guided Search
 Smart Simulation

Symbolic Trajectory Evaluation (STE)

Trajectory : Sequence of values of system
variables


Example: c = AND (a, b) and delay is 1
A possible trajectory : (a,b,c) = (0, 1, X), (1, 1, 0),
(1, 0, 1), (X, X, 0), (X, X, X),…

Express behavior of system model as a set
of trajectories I and desired property as a
set of trajectories S
 Determine if I is inconsistent with S


Inconsistent: I says 0 but S says 1 for a signal
at time t
Consistent: I says 0 but S says X or 0 for a
signal at time t
STE: An Example
4-Bit Shift Register
Specification: Assert (Din = a) ⟹ NNNN (Dout = a)
If we apply input a, then 4 cycles later we will get output a.

(Figure: the antecedent sets Din = a at T = 0 while all other stage values are X; the value a shifts one stage per cycle, and the consequent Dout = a is checked at T = 4.)

Ref: Prof. Randal Bryant, 2002
STE: Pros, Cons & Practical Use

Advantage: far greater capacity than symbolic
model checking
 Disadvantage: Properties checkable not as
expressive as CTL
 Practical success of STE




Verification of arrays (memories, TLBs etc.) in
Power PC architecture
x86 Instruction length decoder for Intel processor
Microprocessor verification
Section Outline
Symbolic Simulation
 Symbolic Trajectory evaluation (STE)
 SAT-based Bounded Model Checking
 Sequential ATPG-based model
checking
 Guided Search
 Smart Simulation

SAT-Based Bounded Model Checking
Property F
FSM M
PI
PO
CL
NS
PS
Example F: AG P
P
?
LATCHES
Question: Does M have a counter-example to F
within k transitions ?
Biere et al (TACAS’99, DAC’99)
SAT-Based Bounded Model Checking
Unroll M for k time-frames — I0 → CL1 → CL2 → … → CLk — asserting that some copy of ¬P holds; translate to CNF and hand it to a SAT solver.
SAT: counter-example to AG P.
UNSAT: no counter-example within k steps.
SAT-based BMC

Primarily used for finding bugs !
Verification complete if k > sequential
depth of M

Open problem!! Sequential depth computation or tight
estimation intractable in practice
Can express both safety and liveness
properties (bounded semantics)


All LTL formulas
Unbounded fairness by detecting loops
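The unrolling can be sketched end to end on a toy FSM. As a simplification, exhaustive search over input sequences stands in for the CNF translation and SAT solver; the structure — check AG P at every depth up to k, return the first violating input trace — is the same.

```python
# Brute-force bounded model checking of AG prop on an explicit FSM.
from itertools import product

def bmc(init, step, prop, inputs, k):
    """Return a counter-example input sequence of length <= k, or None."""
    for depth in range(k + 1):
        for seq in product(inputs, repeat=depth):
            s = init
            for i in seq:           # "unroll" the machine along seq
                s = step(s, i)
            if not prop(s):
                return list(seq)    # counter-example trace of inputs
    return None                     # no counter-example within k steps

# Toy FSM: a mod-4 counter that increments when the input is 1.
step = lambda s, i: (s + i) % 4
bad = 3
cex = bmc(0, step, lambda s: s != bad, inputs=(0, 1), k=5)
assert cex == [1, 1, 1]             # shortest trace driving the counter to 3
```

As the slide notes, a None result only means no bug within k steps; it proves AG P outright only when k exceeds the machine's sequential depth.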
SAT-based BMC: Pros & Cons



SAT-BMC good at producing counter-examples
up to medium depths (up to 50-60)
SAT solvers less sensitive to size of design:
SAT-BMC can handle larger designs
SAT solvers require less parameter tuning:
Productivity gain
Drawbacks



Incomplete method: Cannot verify properties
Ineffective at larger depths: Slow, CNF blow-up
Cannot describe all CTL properties
Enhancements to basic SAT-BMC

CNF generation for BMC




Decision variable ordering



Bounded cone-of-influence: Biere et al CAV’99
Circuit-graph compression: Ganai et al VLSID’02
BMC-specific static ordering: Strichman CAV’00
Tuning VSIDS for BMC: Shacham et al MTV’02



Sharing clauses across BMC runs: Strichman CHARME’01
Learning clauses from BDDs: Gupta et al DAC’03
Using BDD reachability over-approx: Cabodi et al DATE’03
Section Outline
Symbolic Simulation
 Symbolic Trajectory evaluation (STE)
 SAT-based Bounded Model Checking
 Sequential ATPG-based model
checking
 Guided Search
 Smart Simulation

Sequential ATPG for MC

Proposed by Boppana, Rajan, Takayama & Fujita, CAV ‘99
e.g. AG p: an automaton derived from the property is composed with the circuit into a test network; checking EF ¬p reduces to detecting a stuck-at fault (s-a-0) on the property signal p.
vs BDDs
- No explicit storage of states
- No need to build transition relation
- Good balance of DFS and BFS traversal of state-space

vs SAT-BMC
- Can model real-world primitives, e.g. tri-state buses, high-impedance value
- No explicit unrolling of time-frames
- Uses circuit-based heuristics to guide or speed up search
Sequential ATPG for BMC
Recent work by Abraham et al (ITC’02, VLSID’03): a property monitor composed with the original circuit forms the test network, and the target is a stuck-at fault at the monitor output.

Can model both safety & liveness properties (bounded semantics). Example: EG p — the monitor checks that p holds along the states 0, 1, 2, …, n-1, n, starting from start.
ATPG-MC: Current status




Attempts to use simulation-based ATPG for
falsification (Hsiao et al HLDVT’01, DAC’02)
No research characterizing fragment of
language (e.g. CTL, LTL) checkable through
ATPG-based MC
Some works report dramatic improvements of
seq. ATPG-BMC vs. SAT-BMC
Current efforts towards combining SAT
solvers and sequential ATPG (SATORI)
Section Outline






Symbolic Simulation
Symbolic Trajectory evaluation (STE)
SAT-based Bounded Model Checking
Sequential ATPG-based model checking
Smart Simulation (mixed engine)



Coverage-driven test generation: Magellan
(industrial)
Test amplification: 0-in search (industrial)
Guided Search

State-space search optimized for bugsearching (Yang & Dill, DAC 98)
 Use prioritized search

Cost function to pick next state to search
 Hamming distance to error states
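The prioritized search can be sketched with a heap keyed on Hamming distance to the error state (the 3-bit FSM and its successor function are hypothetical): the frontier state closest to the error, in bits, is always expanded first.

```python
# Guided (prioritized) state search with a Hamming-distance cost function.
import heapq

def hamming(s, t):
    return sum(a != b for a, b in zip(s, t))

def guided_search(init, succ, error):
    """Expand the frontier state with the smallest Hamming distance to
    the error state; return a path to it, or None."""
    frontier = [(hamming(init, error), init, [init])]
    visited = {init}
    while frontier:
        _, s, path = heapq.heappop(frontier)
        if s == error:
            return path
        for n in succ(s):
            if n not in visited:
                visited.add(n)
                heapq.heappush(frontier, (hamming(n, error), n, path + [n]))
    return None

# Toy FSM over 3-bit states: successors flip one of the two low bits.
def succ(s):
    return [s[:i] + (1 - s[i],) + s[i + 1:] for i in (1, 2)]

path = guided_search((0, 0, 0), succ, (0, 1, 1))
assert path[0] == (0, 0, 0) and path[-1] == (0, 1, 1) and len(path) == 3
```

The cost function is only a heuristic: as the next slides note, it can be misled, which motivates target enlargement and better distance estimates.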
Prioritized Search
Error State
Error State
Improving Guided Search

Target enlargement (also Yuan et al CAV 97)

Improved cost function
- Tracks: compute a series of approximate preimages; use the approximate pre-image to estimate the distance to the enlarged target
- Guideposts: designer-provided hints/necessary pre-conditions for assertion violation; bias the search to explore with those pre-conditions

(Figure: without target enlargement the search from the reset state must hit the error states exactly; with it, the enlarged error states are easier to reach.)
Section Outline






Symbolic Simulation
Symbolic Trajectory evaluation (STE)
SAT-based Bounded Model Checking
Sequential ATPG-based model checking
Smart Simulation (mixed engine)



Coverage-driven test generation: Magellan
(industrial)
Test amplification: 0-in search (industrial)
Smart Simulation I : SIVA
[Ganai, Aziz, Kuehlmann DAC ‘99]


Combines random simulation with BDDs and
combinational ATPG
Identify targets


Indicator variables, coverage goals, fail states etc.
Lighthouses:
 Several per target
 “Sub-targets” to guide SIVA to reach a target


At each state identify targets with constant
simulation signature
Advance simulation by visiting states which
cause such targets to toggle value

Search for such next state by ATPG/BDDs
SIVA - a run
(Figure: a SIVA run — random simulation advances through states R1, R2, … R5000 from the initial state; BDD and ATPG engines then reach the harder states D1, D2 that toggle stuck targets.)
Key: Determination of appropriate targets,
lighthouses etc. to guide the search
Smart Simulation II: Magellan
[Ho et al, ICCAD 2000 (Synopsys)]

Approach: Coverage-driven test generation



Test generation


Identify few interesting signals (typically <64):
coverage signals
Goal: Maximize state-coverage on coverage
signals
Interleave random simulation with symbolic
simulation & SAT-BMC
Unreachability analysis (under-approx)

Identify unreachable coverage states to reduce
coverage target
Magellan Methodology
Initial
state
Symbolic
simulation /
SAT-BMC

Random
simulation
Coverage
state
Random simulation: Deep narrow search
 SAT-BMC: Short-range exhaustive (wide) search (<10 steps)
 Symbolic simulation: Middle range exhaustive (wide) search
(10-50 steps)
Smart Simulation III: 0-in Search

Approach: Test amplification
 Identify “seed states” reached through
simulation i.e. directed or random simulation
 Explore exhaustively behavior around seed
states using formal methods

SAT-BMC, BDDs, symbolic simulation
Simulation + Formal = 0-In Search
Methodology: Assertion-based Verification

Use checkers (written in HDL) to:

Specify targets for simulation/formal methods
 Capturing buggy conditions

Specify environment of a module
 Can be synthesized into constraints


Specify interfaces between components
Inserting checkers:



From 0-in CheckerWare library (~60 checkers)
Specified by the user in Verilog
Created automatically from RTL directives
 By 0-in Check functional tool

Suggested by 0-in Checklist while analyzing RTL
 A kind of sophisticated lint checking
0-in Search: Tool Flow
Instrumented
Verilog RTL
Verilog Tests
and Testbench
Verilog Files with
0-in Checkers
Standard Verilog
Simulator
Source: 0-in Design
Automation Verification
White-paper
No firings in
Verilog output
Simulation
Trace (Seed)
Checker
Control File
0-in Search
0-in Confirm
Verilog simulator
(Firing Replay)
Firings Log
0-in View GUI
Standard
Waveform Tool
Tutorial Outline

Introduction

High level design flow and verification issues

Theorem proving and Decision procedures

Formal verification techniques for FSM

Semi-formal verification techniques

Simulation based verification techniques

Conclusions and future directions
Simulation Based Verification
Fujitsu Labs. of America
Sunnyvale, California
Section Outline

Introduction



Making simulation effective




why simulation
types of simulation
Coverage analysis
Test bench automation
RTL ATPG
SOC verification challenges
 Case study
Verification Using Simulation
Test Cases
/ Features
Test Sequences
Implementation
to be Verified
Spec
Expected
Responses
Output Responses
=?
Simulation and Abstraction Levels

Specification level simulation
- particular behavior or properties can be captured in UML-like languages and simulated, e.g. protocols

Behavioral simulation
- possible to simulate just the C/C++/SpecC/SystemC program
- checking with spec requires cutting out relevant portions of behavior

RTL simulation
- most widely used in industry
- many commercial tools
- checking with behavior requires time checkpointing

Logic simulation
- fairly straightforward and easy to check with RTL
- detailed timing simulation comes into picture





Pros:
- simple concept
- scales well with design size compared to formal or semi-formal methods
- most popular strategy in industry

Cons:
- simulation semantics may not mimic hardware
- cannot be exhaustive; cannot prove correctness
- writing extensive test benches is tedious
- do not know when to stop
- still can be resource intensive and slow
Static RTL checks : Linting

Design rule based


Fix non-synthesizeable code


force uniform and correct style among all designers
Verilog syntax can compile but create problems in
synthesis and simulation




no point simulating stuff that cannot be synthesized
Synthesis results vary based on coding styles


coding, style, documentation, naming, custom
comparing different length variables if(a[2:0] == b[1:0] )
using blocking/non-blocking assignments at wrong places
latch interfaces etc.
Standard set of rules are based on Reuse
Methodology Manual (RMM) (reference page 2)

LEDA from Synopsys

nLint from Novas Software

integrated with the Debussy debugging system

HDLLint from Veritools

VN-check from TransEDA

Surelint from Verisity

Spyglass from Atrenta
Tackling Simulation Drawbacks


Simulation semantics may not mimic hardware
Cannot be exhaustive




cannot prove correctness
Writing extensive test benches is tedious
Do not know when to stop
Still can be resource intensive and slow
RTL Simulation Semantics

Ddelay model





4 valued logic is usually used (0, 1, X, Z)


more complex logic possible e.g. std_logic_vector in VHDL
Blocking and Non-blocking assignments using syntax



each operation takes negligible time
operations are scheduled in terms of clock boundaries
explicitly specified delay may be used
functional simulation may be different from later stage timing
simulation
blocking assignments are sequentially executed
non blocking assignments are executed in parallel at end of a
clock boundary
Value Change Dump (VCD) file captures changes in
values of variables with time
Event-driven Simulation

Event: change in logic value at a node, at a certain instant of time: (V, T)
 Event-driven: only considers active nodes


Performs both timing and functional
verification



Efficient
All nodes are visible
Glitches are detected
Most heavily used and well-suited for all
types of designs
Event-driven Simulation
(Example: a gate c with inputs a = 1 and b, delay D = 2. The input event b(1) = 1 at first produces no output event; when the gate is evaluated, it produces the output event c(3) = 0.)

Event: change in logic value, at a certain instant of time: (V, T)
Event-driven Simulation

Uses a timewheel to manage the
relationship between components

Timewheel = list of all events not
processed yet, sorted in time (complete
ordering)

When event is generated, it is put in the
appropriate point in the timewheel to
ensure causality
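A minimal event-driven kernel matching this description can be sketched as follows (the gate functions, delays, and node names are hypothetical): the timewheel is a priority queue of (time, node, value) events, only value changes are processed, and each change reschedules the gates the node drives.

```python
# Event-driven simulation with a priority-queue timewheel.
import heapq
from itertools import count

def simulate(gates, initial, events, t_end):
    """gates: output node -> (function, input nodes, delay)."""
    values = dict(initial)
    tick = count()                       # tie-breaker for equal times
    wheel = [(t, next(tick), n, v) for (t, n, v) in events]
    heapq.heapify(wheel)                 # the "timewheel", ordered by time
    while wheel:
        t, _, node, value = heapq.heappop(wheel)
        if t > t_end or values.get(node) == value:
            continue                     # not a change: no event
        values[node] = value
        # An event schedules re-evaluation of every gate the node drives.
        for out, (fn, ins, delay) in gates.items():
            if node in ins:
                new = fn(*(values[i] for i in ins))
                if new != values.get(out):
                    heapq.heappush(wheel,
                                   (t + delay, next(tick), out, new))
    return values

# One gate with delay 2: c changes to 0 at t = 3 after the event b(1) = 1.
gates = {"c": (lambda a, b: 1 - (a & b), ("a", "b"), 2)}
final = simulate(gates, {"a": 1, "b": 0, "c": 1}, [(1, "b", 1)], t_end=10)
assert final == {"a": 1, "b": 1, "c": 0}
```

Pushing each generated event at its future time `t + delay` is what preserves causality: the heap always delivers the earliest pending event first.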
Event-driven Simulation
(Example: gate c — inputs a, b, delay D = 2 — feeds gate e — inputs c, d, delay D = 1. The input events b(1) = 1 and d(5) = 1 generate c(3) = 0, e(4) = 0 and e(6) = 1, all kept ordered on the timewheel.)
Cycle-based Simulation

Take advantage of the fact that most digital
designs are largely synchronous
(Figure: latches on either side of the combinational logic; boundary nodes sit at the latch interfaces, internal nodes inside the logic.)
Synchronous circuit: state elements change value on
active edge of clock
 Only boundary nodes are evaluated
Cycle-based Simulation

The response of the circuit is computed
- at each clock cycle
- at each boundary node

(Figure: only the boundary nodes between the latch banks are evaluated; internal nodes are not.)
Cycle-based vs Event-driven

Cycle-based:
- only boundary nodes
- no delay information

Event-driven:
- each internal node
- needs scheduling; functions may be evaluated multiple times

At RTL, if timing information is unknown, cycle-based is more appropriate.
Cycle-based is 10x-100x faster than event-driven (and uses less memory).
Cycle-based does not detect glitches and setup/hold time violations, while event-driven does.

Event-driven simulators

Synopsys VCS, Sirocco
 VCS is a fast Verilog simulator
 Sirocco is a VHDL simulator



Model Tech. Modelsim (VHDL, Verilog, mixed)
Cycle-based simulators



Quickturn SpeedSim (VHDL)
Synopsys PureSpeed (Verilog)
Debugging Tools

VCD dump viewer

(Figure: waveform display of clk, a, b and bus[0:3] — values xxxx, 5H, 3H — over time 0 to 50.)
Integrated debugging environment




more sophisticated tool
cross references code, VCD waveform viewer, coverage
analysis etc.
integrates software debug features and hardware debug
features
e.g. Novas Software Debussy
Tackling Simulation Drawbacks


Simulation semantics may not mimic hardware
Cannot be exhaustive




cannot prove correctness
Writing extensive test benches is tedious
Do not know when to stop
Still can be resource intensive and slow
Coverage Analysis




Quantitative measure of simulation effectiveness
Determines if the simulation test bench is exercising
the design effectively
Concepts borrowed from software testing
Various metrics available
(Flow: the coverage analysis tool instruments the HDL design; RTL simulation with the test bench gathers coverage data, which is displayed in a GUI.)
Common Coverage Metrics
* Statement Coverage
- see if all statements are covered by testbench
Example:
if ( a > 0)
e = c * d;
else
f = c + d;
Statement Coverage: 66.7%

* Branch Coverage
- see if both sides are taken in all possible branches
Example:
if (b > 0)
e = e + 1;
if (b <= 0) did not occur: Branch Coverage: 50%

* Toggle Coverage
- see if each variable bit value has toggled

Verification engineers should aim for 100% statement, branch and toggle coverage
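Statement-coverage bookkeeping can be sketched directly on the example above (the function and statement ids are hypothetical): each executable statement registers itself in a hit set, and coverage is hits over total statements.

```python
# Statement coverage via manual instrumentation of the slide's example.
def dut(a, c, d, hits):
    hits.add(1)          # if (a > 0)
    if a > 0:
        hits.add(2)      # e = c * d
        return c * d
    hits.add(3)          # f = c + d
    return c + d

TOTAL = 3
hits = set()
dut(1, 2, 3, hits)                    # one test taking the "then" branch
assert round(100 * len(hits) / TOTAL, 1) == 66.7
dut(-1, 2, 3, hits)                   # an "else" test closes the gap
assert len(hits) == TOTAL             # 100% statement coverage
```

Branch coverage works the same way, with one id per branch direction instead of per statement.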
Condition Coverage
* See if all possible conditions in an expression are taken
Example:
if ((a < 0) && (b > 3) && (c = 5))
{ .....
}
* All eight possibilities are to be present for 100% condition coverage
- e.g. {(a>=0), (b<=3), (c != 5)} ; {(a >=0), (b<=3), (c=5)} etc.
* All don’t care possibilities may not be feasible
- use only care combinations
Focussed Expression Coverage
* Only try possibilities that make each clause important
Thus try {(true, true, true); (false, true, true); (true, false, true); (true, true, false)}
If some combinations are still not possible then the expression is redundant!
Verification engineers should aim for > 95% FEC
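For an AND of n clauses, the focused set can be generated mechanically: the all-true vector plus, for each clause, the vector where only that clause is false — each such pair shows that the clause alone controls the outcome (a sketch; MC/DC-style generation for mixed and/or expressions is more involved).

```python
# Focused-expression-coverage targets for an n-clause AND expression.
def fec_vectors(n):
    """All-true plus one vector per clause with only that clause false."""
    vectors = [tuple([True] * n)]
    for i in range(n):
        v = [True] * n
        v[i] = False
        vectors.append(tuple(v))
    return vectors

targets = fec_vectors(3)
assert targets == [(True, True, True), (False, True, True),
                   (True, False, True), (True, True, False)]
# 4 focused vectors instead of all 2**3 = 8 condition-coverage cases.
```

The n + 1 focused vectors grow linearly in the number of clauses, where full condition coverage grows exponentially.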
Path Coverage
(Figure: an FSM fragment — wait in state S until Done == '1', then branch on Isb == '1' to State = Q or State = R — replicated to enumerate its distinct paths.)

* Path Coverage: What % of paths are covered?
- exponential number of paths
- difficult to do the bookkeeping
- impossible to get good coverage
- false paths may hinder coverage
- supported to a limited extent in most tools
Mostly ignored by verification engineers
FSM Coverage

If there are FSM descriptions in the machine



see that all states of all FSMs are visited
all possible state transitions in every FSM is taken
FSMs need to be described in particular manner



automatic extraction fails mostly
if other coverage metrics are good, this is already taken
care of
else this should at least be 100%
A
C
B
D
Cover
- 4 states
- 7 transitions


Work in conjunction with RTL simulators

Synopsys Covermeter
- Verilog coverage integrated with VCS simulator

Verisity Surecov
- Verilog coverage analysis tool integrated with Specman Elite tool

TransEDA Verification Navigator
- supports both VHDL and Verilog
- works with all major simulators

Cadence
- works with their NCSim RTL simulators (VHDL and Verilog)
Coverage Analysis - Pros & Cons
* Pros:
  - provides some quantitative analysis of verification effort
  - increases confidence level in simulation based verification
  - points out untested functionality in the design
  - gives some type of closure to verification effort
* Cons:
  - cannot model many types of errors
  - high confidence level due to good coverage numbers can give a false
    sense of security
  - observability of error is ignored
  - still covers the implementation, not the specification
Importance of Observability
* Example:
    input a, b, c, d;
    f = 1;
    e = 0;
    if (a > 0)
        e = c * d;    // error here
    else
        f = c + d;    // error here
    if (b > 0)
        print(e);
    else
        print(f);
* Simulate with:
  - a = 1, b = -1;
  - a = 0, b = 1;
* 100% statement and branch coverage, but no errors detected
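The blind spot can be reproduced directly; in this sketch the "golden" intent (e = c + d, f = c * d) is an assumption made purely so the bug has something to differ from:

```python
# Two vectors achieve 100% statement/branch coverage yet never propagate
# the buggy values of e and f to the observed output.

def buggy(a, b, c, d):
    f, e = 1, 0
    if a > 0:
        e = c * d          # error: assumed spec wanted e = c + d
    else:
        f = c + d          # error: assumed spec wanted f = c * d
    return e if b > 0 else f   # the single observed output

def golden(a, b, c, d):
    f, e = 1, 0
    if a > 0:
        e = c + d
    else:
        f = c * d
    return e if b > 0 else f

# a=1,b=-1 activates the buggy e-branch but observes f; a=0,b=1 vice versa.
for a, b in [(1, -1), (0, 1)]:
    assert buggy(a, b, 2, 3) == golden(a, b, 2, 3)   # both bugs invisible
# A vector that both activates AND observes the error exposes it:
print(buggy(1, 1, 2, 3), golden(1, 1, 2, 3))  # 6 5
```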
Observability Enhanced Code Coverage Analysis
* Does the vector set propagate an erroneous value on variable a to an
  observable output?
    a = 1;            // erroneous value
    c = 4 - a;
    printf("%d", c);
* This metric, known as OCCOM, is based on Fallah et al.'s work at DAC 98.
* The possibility of an error is represented by tagging the variable on the
  left-hand side of an assignment or an expression with +D or -D.
* A tag at a location/variable represents the possibility that an incorrect
  value is present at that location/variable.
* Goal: see if a tag injected at a location is propagated to an output by
  the test set
Tag Propagation
* Positive, Negative and Unsigned tags
* Single error assumption
* Example:
    A <= 1;               // tag +D injected on A
    C <= 4 - A;           // C picks up tag -D
    printf("C = %d", C);  // the -D tag reaches an observable output
Error Model
* Errors in design modeled as errors in assignment statements and
  expression clauses
  - only positive and negative errors tackled to reduce complexity
  - sometimes unknown tags (D') are generated
    * if +D and -D collide, a D' tag results; it is usually killed after
      some time, but the error can still be observable
* Method confirms that a design error is detected if the vector set
  activates and propagates the required tag on the location of the error
  - tag propagation is conservative
    * the error may propagate even if the tag does not
* Error model is sufficient but not necessary for error detection
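A toy version of signed-tag propagation through the `C = 4 - A` example above can be sketched as follows (the encoding of tags as +1/-1/0/"?" is my own, not OCCOM's):

```python
# Each variable carries (value, tag); tag is +1 (+D), -1 (-D), 0 (untagged)
# or "?" (the unknown tag D').

def neg_tag(t):
    return t if t == "?" else -t

def add_tags(t1, t2):
    if "?" in (t1, t2):
        return "?"
    if t1 and t2 and t1 != t2:
        return "?"              # +D meets -D: unknown tag D'
    return t1 or t2

def sub(x, y):
    """c = x - y: a tag on y flips sign when it reaches c."""
    return (x[0] - y[0], add_tags(x[1], neg_tag(y[1])))

# a = 1 tagged +D;  c = 4 - a  ==>  c carries -D, visible at the printf.
a = (1, +1)
c = sub((4, 0), a)
print(c)  # (3, -1)
```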
Research Results
* How good is the tag coverage metric for tracking bugs?
  - 30 typical bugs were introduced into a Fujitsu RTL circuit
  - functional simulation was done using intelligent vectors that target
    tag coverage, and using random vectors
[Charts: bug coverage vs. tag coverage, for intelligent vectors and for
random vectors.]
* Conclusion: Tag coverage closely tracks bug coverage but is slightly
  pessimistic.
Specification Coverage using CWL
* CWL (Component Wrapper Language)
  - jointly developed by Hitachi and Fujitsu
  - hierarchical description aimed at abstraction level conversion
  - support for split transactions
  - support for interface specific hardware
Hierarchical Description: Base
[Waveform over signals clk, rst, en, a, wait, dt[7:0], showing the phase
sequence I, N, Q(a), W, W, S(d), N.]
    signalset all =
      N    : { R, 1, 1, x, 1, x};
      I    : { R, 0, x, x, 1, x};
      Q(a) : { R, 1, 0, a, 1, x};
      W    : { R, 1, 1, x, 0, x};
      S(d) : { R, 1, 1, x, 1, d};
    endsignalset
Hierarchical Description: Transaction
[Waveform: the phases I, N, Q(a), W, W, S(d), N grouped into the
transactions reset, nop, read(a,d), nop.]
    word;
      nop       : N ;
      reset     : I ;
      read(a,d) : Q(a) W* S(d) ;
    endword
    sentence;
      reset [ nop | read ]+ ;
    endsentence
Specification Coverage using Transitions
* Once the specification sentence of the protocol is generated
  - create the complete regular expression
  - create an FSM that recognizes the regular expression
  - convert the FSM to a Verilog monitor
  - during simulation, see that all transitions in the FSM are covered
* Then the protocol specification is completely verified
  - ensures coverage of all possible legal signal combinations
  - not just the interesting scenarios
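The flow above can be sketched for the CWL sentence `reset [ nop | read ]+`: a hand-built recognizer acts as the monitor, and coverage is the fraction of its transitions that fire across all simulation runs. (The state names S0/S1/S2 are my own labels for the hand-derived FSM.)

```python
# Recognizer for "reset [ nop | read ]+" used as a coverage monitor.
TRANSITIONS = {
    ("S0", "reset"): "S1",   # a sentence must start with reset
    ("S1", "nop"):   "S2",
    ("S1", "read"):  "S2",
    ("S2", "nop"):   "S2",
    ("S2", "read"):  "S2",
}

def coverage(runs):
    """Fraction of monitor transitions exercised over several runs."""
    fired = set()
    for words in runs:
        state = "S0"
        for w in words:
            key = (state, w)
            assert key in TRANSITIONS, f"illegal word {w!r} in state {state}"
            fired.add(key)
            state = TRANSITIONS[key]
        assert state == "S2", "incomplete sentence"
    return len(fired) / len(TRANSITIONS)

print(coverage([["reset", "nop", "read"]]))   # 0.6 - two transitions missed
print(coverage([["reset", "nop", "read"],
                ["reset", "read", "nop"]]))   # 1.0 - all transitions covered
```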
Tackling Simulation Drawbacks
* Simulation semantics may not mimic hardware
* Cannot be exhaustive
  - cannot prove correctness
* Writing extensive test benches is tedious
* Do not know when to stop
* Still can be resource intensive and slow
Test Bench Automation
* Today's ASICs and SOCs
  - multi million gates, extremely complex
  - 1000s of inputs and outputs
* Writing meaningful input patterns in 1s and 0s is impossible
* Impossible to verify output patterns in 1s and 0s
* Need a specialized language and framework
  - to write good, complex and extensive test benches
* Example: Ethernet frame (72 bytes to 10000 bytes)
  - write a function Create_frame(parameters)
  - it creates a correct frame in terms of patterns of 1s and 0s
Key Features
* Constrained input sequence generation
  - write input sequences in high level constructs
    * may use structures, fields etc.
  - put constraints on input fields
  - tool converts these to signal level test sequences
  - tool can also solve for constraints on some inputs
  - tool can iterate on several parameters to generate numerous test
    sequences automatically
* Automatic output checker generation
  - create a set of rules to check expected output
  - again, high level structures can be used
  - e.g. if packets are sent to port A from port B earlier than from
    port C, they are output in sequence at port A [check_packet( )]
* Concise property specification languages
* Support for constrained random test generation
Bus Functional Models
* These HDL models are used to convert higher level transaction behavior
  to signal level inputs
  - the test bench can be written and analyzed at a high level
  - once done, prevents writing detailed signal level stimulus
  - more robust and less error prone, hides details
* Example: memory
[Diagram: a write() call enters the Bus Functional Model, which drives
data[7:0], rw, ale and valid according to a pre-defined protocol with
timing.]
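The memory example can be sketched in a few lines: a transaction-level `write()` is expanded into cycle-by-cycle signal values. The three-cycle protocol shown is an assumption for illustration; only the signal names come from the diagram.

```python
# Sketch of a bus-functional model: one high-level write() transaction is
# expanded into per-cycle signal assignments (protocol timing is assumed).

def bfm_write(addr, value):
    return [
        {"ale": 1, "rw": 0, "data": addr},    # cycle 0: present the address
        {"ale": 0, "rw": 0, "data": value},   # cycle 1: drive the write data
        {"ale": 0, "rw": 0, "data": None},    # cycle 2: release the bus
    ]

def testbench():
    """The test is written at transaction level; the BFM does the rest."""
    waves = []
    for addr, val in [(0x10, 0xAB), (0x11, 0xCD)]:
        waves.extend(bfm_write(addr, val))
    return waves

print(len(testbench()))   # 6 signal-level cycles from 2 transactions
```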
Property Specification Languages
* Needed for unambiguous modeling of the Spec
* Should be formal yet easy to read/understand
  - support for real life properties - Boolean, temporal etc.
* PSL Sugar from IBM, standardized by Accellera
  - it has 4 layers of formalism
  - Boolean layer
    * describes states of the design
  - Temporal layer
    * describes behavior of the design over time
  - Modeling layer
    * models auxiliary state variables, state machines and the input
      environment
  - Verification layer
    * directives to the verification tool: assume/assert/cover etc.
Writing Properties in Sugar
* Example
  - {true[*];req;ack} |=> {start;busy[*];end}
[Timing diagram: whenever the "if" part (true*, req, ack) matches, the
"then" part (start, busy*, end) must follow.]
Temporal Operators
* always (req -> next ack)
  - when request is asserted, ack must be asserted in the next cycle
* Using the forall operator
  - always (req && data_in == 'b000 -> next data_out == 'b000)
  - always (req && data_in == 'b001 -> next data_out == 'b001)
  - ..
  - can be written as
      forall i in 0..7
        always (req && data_in == i -> next data_out == i)
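On a finite simulation trace, both properties reduce to simple checks over consecutive cycle pairs; a sketch (a `req` in the final cycle is treated vacuously here, which is one of several possible end-of-trace conventions):

```python
# Trace-level checks of "always (req -> next ack)" and the forall form
# "always (req && data_in == i -> next data_out == i)".

def always_next_ack(trace):
    """trace: list of per-cycle dicts of signal values."""
    return all(not c["req"] or n["ack"] for c, n in zip(trace, trace[1:]))

def forall_data(trace):
    return all(not c["req"] or n["data_out"] == c["data_in"]
               for c, n in zip(trace, trace[1:]))

good = [{"req": 1, "ack": 0, "data_in": 3, "data_out": 0},
        {"req": 0, "ack": 1, "data_in": 0, "data_out": 3}]
bad  = [{"req": 1, "ack": 0, "data_in": 3, "data_out": 0},
        {"req": 0, "ack": 0, "data_in": 0, "data_out": 5}]

print(always_next_ack(good), forall_data(good))  # True True
print(always_next_ack(bad), forall_data(bad))    # False False
```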
Assertion Checking
* Example
  - assert that if the request signal is high for 5 cycles without the
    acknowledge signal going high, then the system should assert the
    busy flag
* Sugar property
    vunit check_busy_flag {
      assert
        { [*], {request & !acknowledge} [*5] } |-> {busy_flag}
    }
* If this checker is written in Verilog
  - 50 lines of HDL code
* Automatic translators exist now
  - e.g. FOCS from IBM
  - verification efficiency is increased a lot
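The intent of the Sugar property above can be mimicked on a trace with a running counter; this is a behavioral sketch of the overlapping-implication semantics (busy checked on the 5th consecutive request-without-acknowledge cycle and on every later one), not FOCS output:

```python
def check_busy_flag(trace):
    """Whenever request has been high without acknowledge for 5
    consecutive cycles, busy_flag must be asserted on that cycle."""
    run = 0
    for c in trace:
        run = run + 1 if (c["request"] and not c["acknowledge"]) else 0
        if run >= 5 and not c["busy_flag"]:
            return False
    return True

ok  = [{"request": 1, "acknowledge": 0, "busy_flag": i >= 4}
       for i in range(8)]                       # busy raised on cycle 5
bad = [{"request": 1, "acknowledge": 0, "busy_flag": 0}
       for _ in range(5)]                       # busy never raised

print(check_busy_flag(ok), check_busy_flag(bad))  # True False
```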
CAD Tools for Test Bench Automation & Assertion Checking
* Verisity Specman Elite
  - uses the proprietary E language (free licenses available)
* Synopsys Vera
  - uses its own OVL (Open Vera Language) (now open)
* Avery VCK
  - uses C++ classes to automate tests
  - uses VLE (Verilog language enhancements)
* As Accellera Sugar has become the industry standard, all tools are now
  supporting it
Automatic Test Pattern Generation at RTL
* Generating good RTL simulation test benches is hard and tedious
  - still a manual process
  - test bench automation helps somewhat, but tests still need to be
    written in a high-level test language like Sugar etc.
  - may not exercise the design completely (bugs may remain)
* Alternative: attempt to generate RTL test benches automatically by
  analyzing the RTL design (ATPG)
  - only the HDL design is needed
  - approach is bottom up as opposed to top down
    * may not be suitable in all cases
* Requires RTL error models
  - currently the stuck-at fault model and OCCOM are used
  - experimentally determined that they correlate well with bugs
Application Scenarios
* SCENARIO 1
[Diagram: a test generation algorithm/tool derives a test bench with near
100% coverage from an executable specification; the test responses of the
specification and of the RTL design under test are compared (=?).]
* SCENARIO 2
[Diagram: the test bench is derived from a golden RTL model with near
100% coverage; the test responses of the golden model and of an optimized
RTL model are compared (=?).]
  - used when formal RTL equivalence checking fails because the circuits
    are too dissimilar
Data Structure Used
* Represent the RTL netlist as an Assignment Decision Diagram (ADD)
  - previously proposed for high-level synthesis by Chaiyakul et al.
    (DAC 93)
* Example:
    case state is
      when st3 =>
        if (a < 7)
          r = p + q;
        else
          r = p - q;
[ADD for the example: the assignment condition (state = st3 combined with
a < 7) selects between the assignment values p + q and p - q, which feed
the assignment target r.]
  - represents a structural view suitable for ATPG
Algorithm Overview
* Convert the HDL file into a series of ADDs
  - each process converted to an ADD
* Infer RTL structural components from the ADDs
* Each inferred component is fed its logic-level stuck-at test
  - 2-to-1 mux: 4 vectors
  - register / latch: 4 / 2 vectors
  - logic gate: well known test set
  - memories: checker board test
  - arithmetic module: precomputed test
  - random logic / test set unavailable: a universal test set preferable
* Excite the HDL code from the primary inputs (PI) and observe the effect
  at a system primary output (PO)
Modeling RTL Values
* Controllability and observability constants:
  - Cg  : control an n-bit bus to any of 2^n values
  - C0  : control a variable to the 0 value (00..00)
  - C1  : control a variable to the 1 value (00..01)
  - Ca1 : control a variable to the all-ones vector (11..11)
    * Ca1 of a 1-bit variable is C1
  - Cq  : control a variable to any constant (5, 10, etc.)
  - Cs  : control a state variable to a particular state value (St1)
  - Cz  : control to the high-impedance state (zz..zz)
  - Cp  : control to a particular range of integers (16-32)
  - O   : observe any fault on a multi-bit variable, or a 1/0 fault on a
          single-bit variable
  - O'  : observe a 0/1 fault only, for a single-bit variable
* These constants are sufficient to propagate test vectors across all RTL
  modules
Algorithm
[Example: to test a "<" comparator, the required Cg/C1 values at its inputs
are justified by backtracking through the upstream +, * modules to the
primary inputs, while the observation requirement O is propagated through
downstream modules to the primary outputs.]
Results & Ongoing Research
* This is in the technology transfer phase
  - has been applied to industrial examples
* Prototype shows huge speed up over logic ATPG
  - 2 to 3 orders of magnitude reduction in CPU time
* Test sets automatically give close to 100% line, branch and condition
  coverage
* Can be used as an initial filter before manual test benches or formal
  verification
  - application scenarios limited
  - still a block level tool, about 20,000-50,000 lines of flat HDL
    * search is NP-complete and uses branch and bound
* Ongoing research
  - speed up the engine using circuit SAT techniques
  - modify the engine for RTL property checking
Tackling Simulation Drawbacks
* Simulation semantics may not mimic hardware
* Cannot be exhaustive
  - cannot prove correctness
* Writing extensive test benches is tedious
* Do not know when to stop
* Still can be resource intensive and slow
Stopping Criteria
* Coverage based
  - e.g. >99.5% line coverage, >99% branch coverage, >90% condition
    coverage
* Bug rate based
  - when the rate of new bugs detected has fallen below a particular
    level, e.g. 2 per week
* Random simulation based
  - e.g. random simulation ran for 3 days without problem
* Some weighted average of all the above
* Use case emulation
  - e.g. video processor
    * watch the output of the emulator in real time on a TV screen
* Need to tape out the chip, hence stop simulation !!
Tackling Simulation Drawbacks
* Simulation semantics may not mimic hardware
* Cannot be exhaustive
  - cannot prove correctness
* Writing extensive test benches is tedious
* Do not know when to stop
* Still can be resource intensive and slow
Tackling the Time Complexity
* Still huge amounts of time and resources required to simulate
  multi-million gate designs
  - but only simulation scales to large designs
* Alternatives
  - Hardware emulation/acceleration
  - Core based design of SOC
    * use preverified modules and just verify the interfaces
    * some peculiar problems arise
  - Correct by construction design
Emulation
* PLDs
  - directly simulate the test bench on the hardware using a computer
    interface
  - capture output responses in a memory buffer and display them using
    the computer
* 10X to 100X faster than software simulation
* Simulation semantics problems are lower
* However:
  - systems are expensive
  - cumbersome to set up
  - debugging support weaker than software debuggers
Tackling the Time Complexity
* Still huge amounts of time and resources required to simulate
  multi-million gate designs
  - but only simulation scales to large designs
* Alternatives
  - Hardware emulation/acceleration
  - Core based design of SOC
    * use preverified modules and just verify the interfaces
    * some peculiar problems arise
  - Correct by construction design
SOC Verification Issues
* What is new in SOC verification? Why is it more challenging?
  - SOCs - large designs, small teams
    * closer to computer systems than ASICs
    * but built rapidly with small design teams using cores
  - the verification team cannot get a deep understanding of the target
    design
  - not enough resources to develop verification tools specific to the
    design
  - verification teams must rely on existing tools and technology
    * generic knowledge of the domain with small adaptations
More Challenges
* Most SOCs contain a processor
  - big, complex and programmable
  - processors can be used to stimulate the rest of the system but..
* How to treat system software
  - ignoring it means not testing the entire system
  - leaving it in makes it more difficult to use the processor to test
    other components
* HW/SW co-simulation a major issue
  - simulating the processor will slow down RTL simulation
  - hardware and software operate at different rates
  - modeling solutions are required
    * e.g. Instruction Set Architecture models
* Interface issues
  - how to check that the cores are talking correctly with each other
  - e.g. protocol verification
Solution - Raise Abstraction Level
* Focus at the level where design complexity lies
  - improved productivity due to reasoning at the right level
  - early start of verification effort
    * start on high-level models of the design
    * but use the same methodology for lower levels
* New building blocks
  - signals, packets, complex transactions
  - cores instead of registers, FSMs etc.
* Transaction based verification and coverage
* System level test generators
* New applications of formal verification
  - protocol verification
    * e.g. product by Jasper Design Automation
High Level Checking Techniques
* Behavioral rules can be checked to test aspects of behavior
  - e.g.: transaction ordering, coherence
* Need means to describe rules and check them
  - e.g.: when a write transaction begins, all previous read transactions
    must have finished
    * look for all read transactions and check their status
    * check order with respect to the write transaction
* Data flow is a good source of behavioral rules
  - record the history of each transaction
  - analyze the behavior of the system according to the flow of
    transactions and their interactions
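The write-after-reads rule above can be sketched as a checker over a time-ordered transaction log (the event format is invented for illustration):

```python
# Behavioral-rule checker: when a write transaction begins, all earlier
# read transactions must already have finished.

def check_write_after_reads(log):
    """log: time-ordered events (event, kind, transaction id)."""
    open_reads = set()
    for ev, kind, tid in log:
        if kind == "read":
            if ev == "begin":
                open_reads.add(tid)
            else:
                open_reads.discard(tid)
        elif kind == "write" and ev == "begin" and open_reads:
            return False, sorted(open_reads)   # rule violated
    return True, []

ok_log  = [("begin", "read", 1), ("end", "read", 1), ("begin", "write", 2)]
bad_log = [("begin", "read", 1), ("begin", "write", 2), ("end", "read", 1)]
print(check_write_after_reads(ok_log))   # (True, [])
print(check_write_after_reads(bad_log))  # (False, [1])
```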
Using System Level Test Generators
* Goal: systematic verification of IP (core) interaction to verify global
  functional behavior
  - assumes each IP has been individually verified
* Generates tests that cover the interaction among IPs
* Tool: Esterel Studio
  - based on hierarchical concurrent finite state machines (HFSMs)
* Four step process
  - model the system as an HFSM (needs a lot of abstraction)
  - create symbolic tests
  - transform tests to concrete tests
  - simulate the concrete tests
Creating Symbolic Tests
* Use a BDD based traversal engine to compute all the possible paths to
  the test_completed state in the global test scheduling FSM
  - each path is associated with an input sequence
* Transform input sequences to output sequences using the Esterel Studio
  simulator
* The resulting sequences are commands to and by the IPs that create the
  requested scenario
  - e.g. config_data_channel_1, req_dst_wav, req_src, write_frame
Tackling the Time Complexity
* Still huge amounts of time and resources required to simulate
  multi-million gate designs
  - but only simulation scales to large designs
* Alternatives
  - Hardware emulation
  - Core based design of SOC
    * use preverified modules and just verify the interfaces
    * some peculiar problems arise
  - Correct by construction design
Correct By Construction
* Translation from higher to lower level description
  - automatic and correct
  - no need to verify the lower level
* Easier said than done
  - CAD tools will contain bugs (they are software programs)
  - requires years of use before some degree of confidence
* Example: Synopsys Design Compiler
  - RTL to gate level generator
  - after 12 years of use and numerous bug fixes, most designers trust
    its output in terms of functionality
    * timing still not completely robust
* Platform based design
  - another design flow with this philosophy
Case Study: Fujitsu 10 Gb Ethernet Switch
* Fujitsu 10 Gb Ethernet Switch Chip
  - Layer 2 Switch
    * 12 I/O XAUI ports (IEEE 802.3ae)
    * MAC frame relay and spanning tree (802.1D)
    * Virtual LAN (802.1Q)
  - Design Complexity
    * 6.3 million gates & 900 KBytes SRAM
    * approximately 70 million transistors
    * 0.11 um process
* Our mission is to verify the Switch Chip
  - Processor (BPDU, GMRP, GVRP, Management)
  - Switch Chip (Filtering, Relay)
Verification Framework
[Block diagram: C++ test programs drive a Frame Generator (.cc) and an
Interrupt Handler / Device Driver (.cc) through APIs and the TestBuilder
Verilog-HDL driver (.cc); XAUI BFMs, XGMII drivers, a Processor BFM and
an EEPROM BFM (.v) stimulate the Switch Chip (DUV), with a Reference
Model (.cc) checking the responses.]
Black Box Verification
* Just see if the design conforms to the Spec.
  - Spec: IEEE 802.3ae, 802.1D, 802.1Q
  - over 1000 pages of English text
* IEEE Specifications conformance test
  - micro-architecture independent
  - PICS (Protocol Implementation Conformance Statement)
    * somebody painfully went through the text to extract about 200
      points to test (this is well known)
    * extracted 39 test cases (one test case can cover multiple tests)
* Example: frame address is learnt by the switch
  - send a frame with source address A from port 2
  - send a frame with destination address A from port 3
  - check that the second frame comes out at port 2
White Box Verification
* Test the special features of the chip not mentioned in the IEEE Spec
  - implementation dependent
  - the verification engineer needs to understand the design
    functionality
* Micro-architecture dependent test cases
  - E.g. 1: memory corruption test
    * memory protected by error correcting code
    * intentionally corrupt memory during frame transmission
    * see that error correction is occurring correctly
  - E.g. 2: mode registers: extracted 88 test cases
* Total dedicated test cases - 200
  - simulation cycles - 25.9 million
IP Verification
* Design used one 3rd party soft IP
  - IP came with its own testbench
  - manually translated the IP test bench to a system test bench for
    regression tests
  - verified line coverage of the IP > 99.5% even within the system
* 5 bugs were found in the IP
Random Simulation
* Random test
  - arbitrary random frame generation
  - reference model for automatic verification
    * this is tricky as bugs may be in the reference model
    * may be tedious to write
* Best used to hit assertions put in by the designer
  - the random seed needs to be stored for each test case so that a
    failed test may be reproduced
* Random test
  - #cycles: 1B (2M frames)
Coverage Analysis
* At this point
  - line coverage > 95%
  - branch coverage > 90%
* If not, rethink the verification strategy
* Otherwise
  - find uncovered code
  - ask the designer what he intended to do in that piece of code
  - write a test case to test the functionality
  - careful not to repeat the same bug
* Final Coverage
  - line - 99.3%
  - branch - 99.5%
  - condition - 94.5%
Code Size
* 10 Gb Switch Chip (Hierarchical RTL Verilog): 75k lines
* Bus Functional Models (Verilog): 6k lines
* Transaction Verification Model (C++) with random tests: 10k lines
* Test Programs (C++): 48k lines
* Total RTL Verilog - 75K lines
* Total Verification Testbench - 64K lines (almost the same size as the
  design)
Bug Rate
[Charts: bugs found per week at block level and system level, plus the
cumulative totals - 271 bugs in total, 157 at block level, 114 at system
level. Milestones marked along the weeks: first code review, start of
unit level verification, start of system level verification (early
functional), start of system level verification (full functional), and
logic freeze.]
Use Case Simulation
* Final step
  - get a few typical applications running on 2 to 3 servers
    * these applications should need to exchange data between them
      frequently over the LAN
  - obtain the data traffic trace over the LAN, assuming they are
    connected by this switch
  - simulate/emulate with this traffic and the applications
  - check that the applications are running correctly
* This step is more involved than software simulation
  - requires testing board, emulator, memory dump, accelerator etc.
Lessons Learnt
* Verification needs to start early in the design cycle
  - even with partial designs
* Designers are responsible for block-level verification
* Dedicated verification team for system-level verification
* Equivalence check after scan insertion
* Coverage analysis necessary but not sufficient
* Hardware/Software co-simulation necessary
* Functional verification alone is not enough
  - timing verification & complete DRC check equally important
Tutorial Outline
* Introduction
* High level design flow and verification issues
* Theorem proving and Decision procedures
* Formal verification techniques for FSM models
* Semi-formal verification techniques
* Simulation based verification techniques
* Conclusions and future directions
Recap & Conclusion
Fujitsu Labs. of America
Sunnyvale, California
Verification Importance
* It is obvious that we need to verify a design before fabrication
* Many famous bugs
  - Pentium floating point bug
  - Ariane 5 rocket blow up
  - Mars Polar Lander crash landing
* Design productivity gap
  - technology available to put 1 billion transistors on a chip
  - inability of designers to design that kind of chip
* Verification bottleneck
  - 70% of chip design effort is spent on verification
* Robust and scalable solutions are required so that the verification
  effort is reduced
Various Types of Verification
* Higher order theorem proving and decision procedures
  - mostly manual effort and cumbersome
  - requires deep understanding of the algorithms
  - still mostly in the research phase with limited industrial
    application
* Formal Verification
  - has become industrial practice in the last few years
  - combinational equivalence checking very successful
  - sequential equivalence checking needs work
  - model checking automatic but does not scale well
    * still block level (10,000 gates, 1,000 FFs)
  - some techniques to reduce state space are practical
    * may increase the frontier by 4-5 times
Various Types of Verification
* Semiformal Verification
  - recently SAT solvers have become extremely efficient
  - bounded model checking is being used
    * handles large designs but can go only to about 1,000 cycles deep
* Simulation Based
  - most popular in industry (mature technology)
  - scalable but requires huge amounts of CPU and memory
  - full chip simulation may not be practical in a few years
  - coverage is an issue, and it cannot prove correctness
Verification Directions
* Driven by design flow
  - higher levels of abstraction
    * formal specification languages
    * mapping verification problems to lower level engines
  - design reuse
    * interface verification
    * system level verification
* Tackling larger designs in formal verification
  - efficient formal engines
  - sequential equivalence checkers
  - filter based techniques
    * using multiple engines
  - hybrid engines
    * e.g. BDD/SAT/ATPG combination
  - higher level reasoning
    * need to automate
Current Industry Practices
* Most designs still start at RTL
  - RTL simulation well understood and used
  - getting more and more difficult to do full chip simulation
* Model checking tools gaining popularity at block level
  - writing a meaningful set of properties is difficult
    * designers need to change mindset
  - property specification languages becoming popular
  - automatic property extraction cannot go very far
  - many new CAD tool offerings from EDA vendors
    * functionality still evolving
* Equivalence checking mainstream for certain cases
* Emulation used for real-time systems
* Functional verification not the end of the story
  - timing verification and DRC check very important
* Model Checking tools
  - @HDL, Real Intent, Verplex
* Semi-formal verification tools
  - 0-in
  - Synopsys Magellan
* Hardware acceleration
  - Tharas Systems
* Interface/Protocol Verification tools
  - Jasper Design Automation
* Coverage analysis and debugging tools
  - Novas Debussy
  - Sugar based tools from TransEDA, Verisity
Research Issues
* Design and verification at high levels of abstraction
  - new modeling issues
    * e.g.: what if the Spec is in UML
    * how to generate the problem, map the problem
  - methodology issues
* Faster engines
  - more efficient BDDs
    * efficient and automatic abstraction-refinement
  - faster SAT solvers
    * better learning and pruning techniques
  - combining engines
    * ATPG/SAT combination
    * BDD/SAT combination
  - use of higher level information to guide search
* Interface specification and verification
  - automatic static checks
Concluding Remarks
* Verification is a fundamental problem for everyone
  - not only hardware but software, mechanical, electrical - everywhere
* It is becoming a design bottleneck in hardware
  - still a nascent technology
  - path breaking inventions are required
  - serious industrial and academic efforts are required
* Remains an exciting field of research
* Verification will evolve as design methodology evolves
  - newer products will emerge in the formal domain
  - it will not become irrelevant until correct-by-construction from
    English is possible (looks impossible so far)
* No single approach will suffice
Thank You
Bibliography
Books
* B. Berard et al., "Systems and Software Verification: Model Checking Techniques and Tools", Springer Verlag, 2001.
* J. Bergeron, "Writing Testbenches: Functional Verification of HDL Models", Kluwer.
* D. D. Gajski et al., "SpecC: Specification Language and Methodology", Kluwer.
* J. Bhasker, "A SystemC Primer", Star Galaxy Press, Allentown, 2002.
* M. Stuart and D. Dempster, "Verification Methodology Manual for Code Coverage in HDL Designs", Teamwork International, Hampshire, UK, 2000.
* K. L. McMillan, "Symbolic Model Checking", Kluwer Academic Publishers, 1993.
* Z. Manna and A. Pnueli, "Temporal Specification and Verification of Reactive Systems", Vol. I and II, Springer, 1995.
* Ching-Tsun Chou, "The Mathematical Foundation of Symbolic Trajectory Evaluation", Springer-Verlag, 1999.
* Thomas Kropf, "Introduction to Formal Hardware Verification", Springer Verlag, ISBN 3540654453, 299 pages, January 2000.
* E. M. Clarke, O. Grumberg and D. Peled, "Model Checking", MIT Press, ISBN 0262032708, 330 pages, January 2000.
* G. J. Holzmann, "Design and Validation of Computer Protocols", Prentice Hall, 1991.
* M. P. Fourman, "Formal System Design", Formal Methods for VLSI Design, IFIP, North-Holland, 1990.
* C. Meinel and T. Theobald, "Algorithms and Data Structures in VLSI Design", Springer-Verlag, 1998.
* Hassan Gomaa, "Designing Concurrent, Distributed and Real-Time Applications with UML".
* L. Bening and H. Foster, "Principles of Verifiable RTL Design: Functional Coding Style", Kluwer.
* R. P. Kurshan, "Computer Aided Verification of Coordinating Processes", Princeton University Press, 1994.
* Robert B. Jones, "Symbolic Simulation Methods for Industrial Formal Verification", Kluwer.
* M. Yoeli, "Formal Verification of Hardware Design", IEEE Computer Society Press, 1991. (Collection of papers)
* M. Kaufmann, P. Manolios and J S. Moore, "Computer-Aided Reasoning: An Approach", Kluwer Academic Publishers, June 2000, ISBN 0792377443.
* M. Keating and P. Bricaud, "Reuse Methodology Manual for System-On-A-Chip Designs", Kluwer.
Papers
High Level Design and Verification Modeling
* D. D. Gajski, "IP-Based Design Methodology", Proc. of the 36th Design Automation Conference, p. 43, New Orleans, June 1999.
* D. Kroening and E. Clarke, "Behavioral consistency of C and Verilog programs using bounded model checking", in Proc. DAC, June 2003.
* T. Sakunkonchak and M. Fujita, "Verification of synchronization in SpecC description with the use of difference decision diagrams", IEICE Trans. Fundamentals, Vol. E85-A, Jan 2002.
* T. Matsumoto and M. Fujita, "Equivalence checking of C-based hardware descriptions by using symbolic simulation and program slicer", in Proc. IWLS, May 2003.
* T. Ball and S. Rajamani, "Checking Temporal Properties of Software with Boolean Programs", in Proc. Computer Aided Verification 2000. [Microsoft SLAM project]
* R. Bryant et al., "Processor Verification Using Efficient Reductions of the Logic of Uninterpreted Functions to Propositional Logic", ACM Trans. on Computational Logic, Vol. 2, 2001.
* S. Singh, "Design and Verification of CoreConnect IP using Esterel", in Proc. CHARME 2003.
* S. Sutherland, "SystemVerilog 3.1: It's what the DAVEs in your company asked for", in Proc. DVCON 2003.
* S. Abdi, D. Shin and D. Gajski, "Automatic Communication Refinement for System Level Design", Proc. Design Automation Conference, Anaheim, CA, June 2003.
* W. Mueller, R. Dömer and A. Gerstlauer, "The Formal Execution Semantics of SpecC", in Proc. International Symposium on System Synthesis, October 2002.
Papers
Theorem Proving & Decision Procedures
* S. Owre, S. Rajan, J. M. Rushby, N. Shankar and M. K. Srivas, "PVS: Combining Specification, Proof Checking, and Model Checking", pp. 411-414, CAV96.
* D. Cyrluk, S. Rajan, N. Shankar and M. K. Srivas, "Effective Theorem Proving for Hardware Verification", pp. 287-305, TPCD94.
* S. J. Garland and J. V. Guttag, "An Overview of LP: the Larch Prover", Proc. Third International Conference on Rewriting Techniques and Applications, Springer-Verlag, 1989.
* J. Staunstrup and M. Greenstreet, "Synchronized Transitions", Formal Methods for VLSI Design, IFIP, North-Holland, 1990.
* R. Vemuri, "How to Prove the Completeness of a Set of Register Level Design Transformations", Proc. 27th ACM/IEEE Design Automation Conference, 1990, pp. 207-212.
* C. M. Angelo, "Formal Hardware Verification in a Silicon Compilation Environment by means of Theorem Proving", Ph.D. Dissertation, Katholieke Universiteit Leuven, February 1994.
Papers









R. Vemuri, “How to Prove the Completeness of a Set of Register Level Design
Transformations”, Proceedings of the 27th ACM/IEEE Design Automation Conference,
1990, 207—212.
Jeffrey J. Joyce and Carl-Johan H. Seger, “Linking Bdd Based Symbolic Evaluation to
Interactive Theorem Proving”, Proceedings of the 30th Design Automation
Conference, 1993.
S. P. Rajan, N. Shankar and M. Srivas, “An Integration of Model-Checking with
Automated Proof Checking”, 7th Conference on Computer-Aided Verification, July,
1995.
M.D. Aagaard, R.B. Jones and C.-J.H. Seger, “Combining Theorem Proving and
Trajectory Evaluation in an Industrial Environment”, Proceedings of DAC 1998, pp.
538-541
C.W. Barrett, D.L. Dill and J.R. Levitt, “A Decision Procedure for Bit-Vector
Arithmetic”, Proceedings of DAC 1998
J.R. Burch and D.L. Dill, “Automatic Verification of Pipelined Microprocessor
Control”, Proceedings of CAV 1994
J.X. Su, D.L. Dill and C.W. Barrett, “Automatic Generation of Invariants in Processor
Verification”, Proceedings of FMCAD 1996
M. Aagaard, M. E. Leeser, and P. J. Windley, “Toward a Super Duper Hardware Tactic”,
in Higher Order Logic Theorem Proving and its Applications: 6th International
Workshop, HUG'93, Vancouver, B.C., August 11-13 1993: Proceedings, edited by J. J.
Joyce and C.-J. H. Seger, Lecture Notes in Computer Science, vol. 780 (Springer-Verlag, 1994), pp. 399-412.
F. Andersen, K. D. Petersen, and J. S. Pettersson, “A Graphical Tool for Proving UNITY
Progress”, in Higher Order Logic Theorem Proving and Its Applications: 7th
International Workshop, Valletta, Malta, September 1994: Proceedings, edited by T. F.
Melham and J. Camilleri, Lecture Notes in Computer Science, Volume 859 (Springer-Verlag, 1994), pp. 17-32.
S. Agerholm, “Domain Theory in HOL”, in Higher Order Logic Theorem Proving and its
Applications: 6th International Workshop, HUG'93, Vancouver, B.C., August 11-13
1993: Proceedings, edited by J. J. Joyce and C.-J. H. Seger, Lecture Notes in
Computer Science, vol. 780 (Springer-Verlag, 1994), pp. 295-310.
S. Agerholm, “Formalising a Model of the lambda-calculus in HOL-ST”, Technical
Report Number 354, University of Cambridge Computer Laboratory (1994).
S. Agerholm, “A HOL Basis for Reasoning about Functional Programs”, Ph.D.
Dissertation, BRICS Technical Report RS-94-44, Department of Computer Science,
University of Aarhus (December 1994).
S. Agerholm, “LCF Examples in HOL”, The Computer Journal, vol. 38, no. 2 (July
1995), pp. 121-130.
S. Agerholm, “Mechanizing Program Verification in HOL”, in Proceedings of the 1991
International Workshop on the HOL Theorem Proving System and its Applications,
Davis, August 1991, edited by M. Archer, J. J. Joyce, K. N. Levitt, and P. J. Windley
(IEEE Computer Society Press, 1992), pp. 208-222.
S. Agerholm, “Mechanizing Program Verification in HOL”, M.Sc. Dissertation, DAIMI
Report Number IR-111, Department of Computer Science, University of Aarhus (April
1992).
S. Agerholm, “Non-Primitive Recursion Function Definition”, in Higher Order Logic
Theorem Proving and Its Applications: 8th International Workshop, Aspen Grove,
Utah, September 1995: Proceedings, edited by E. T. Schubert, P. J. Windley, and J.
Alves-Foss, Lecture Notes in Computer Science, Volume 971 (Springer-Verlag, 1995),
pp. 17-31.
F. Andersen and K. D. Petersen, “Recursive Boolean Functions in HOL”, in
Proceedings of the 1991 International Workshop on the HOL Theorem Proving
System and its Applications, Davis, August 1991, edited by M. Archer, J. J. Joyce, K.
N. Levitt, and P. J. Windley (IEEE Computer Society Press, 1992), pp. 367-377.
C. M. Angelo, L. Claesen, and H. De Man, “Degrees of Formality in Shallow Embedding
Hardware Description Languages in HOL”, in Higher Order Logic Theorem Proving and
its Applications: 6th International Workshop, HUG'93, Vancouver, B.C., August 11-13
1993: Proceedings, edited by J. J. Joyce and C.-J. H. Seger, Lecture Notes in Computer
Science, vol. 780 (Springer-Verlag, 1994), pp. 89-100.
C. M. Angelo, L. Claesen, and H. De Man, “The Formal Semantics Definition of a Multi-Rate DSP Specification Language in HOL”, in Higher Order Logic Theorem Proving and
its Applications: Proceedings of the IFIP TC10/WG10.2 International Workshop, Leuven,
September 1992, edited by L. J. M. Claesen and M. J. C. Gordon, IFIP Transactions A-20
(North-Holland, 1993), pp. 375-394.
C. M. Angelo, L. Claesen, and H. De Man, “Modeling Multi-rate DSP Specification
Languages in HOL”, Formal Methods in System Design, vol. 5, nos. 1/2 (July 1994), pp. 61-94.
R. J. R. Back and J. von Wright, “Predicate Transformers and Higher Order Logic”, in
Semantics: Foundations and Applications: REX Workshop, Beekbergen, June 1992,
edited by J. W. de Bakker, W.-P. de Roever, and G. Rozenberg, Lecture Notes in
Computer Science, Volume 666 (Springer-Verlag, 1993), pp. 1-20.
R. J. R. Back and J. von Wright, “Refinement Concepts Formalised in Higher Order
Logic”, Formal Aspects of Computing, Vol. 2, No. 3 (July-September 1990), pp. 247-272.
S. Bainbridge, A. Camilleri, and R. Fleming, “Industrial Application of Theorem Proving
to System Level Design”, in Proceedings of the 1991 International Workshop on the
HOL Theorem Proving System and its Applications, Davis, August 1991, edited by M.
Archer, J. J. Joyce, K. N. Levitt, and P. J. Windley (IEEE Computer Society Press, 1992),
pp. 130-142.
F. Andersen, K. D. Petersen, and J. S. Pettersson, “Program Verification using
HOL-UNITY”, in Higher Order Logic Theorem Proving and its Applications: 6th International
Workshop, HUG'93, Vancouver, B.C., August 11-13 1993: Proceedings, edited by J. J.
Joyce and C.-J. H. Seger, Lecture Notes in Computer Science, vol. 780 (Springer-Verlag, 1994), pp. 1-15.
R. H. Beers and P. J. Windley, “Abstracting Signals: The Waveform Library”, in
Supplementary Proceedings of the 9th International Conference on Theorem Proving
in Higher Order Logics: TPHOLs ’96, edited by J. von Wright, J. Grundy, and J.
Harrison, TUCS General Publication No 1, Turku Centre for Computer Science
(August, 1996), pp. 1-13.
G. Birtwistle and B. Graham, “Verifying SECD in HOL”, in Formal Methods for VLSI
Design: IFIP WG 10.2 Lecture Notes, edited by J. Staunstrup (North-Holland, 1990),
pp. 129-177.
G. Birtwistle, B. Graham, and S.-K. Chin, “New theory HOL: An Introduction to
Hardware Verification in Higher Order Logic”, (August 1994). [Published
electronically.]
R. Boulton, “Boyer-Moore Automation for the HOL System”, in Higher Order Logic
Theorem Proving and its Applications: Proceedings of the IFIP TC10/WG10.2
International Workshop, Leuven, September 1992, edited by L. J. M. Claesen and M. J.
C. Gordon, IFIP Transactions A-20 (North-Holland, 1993), pp. 133-142.
R. Boulton, “Combining Decision Procedures in the HOL System”, in Higher Order
Logic Theorem Proving and Its Applications: 8th International Workshop, Aspen
Grove, Utah, September 1995: Proceedings, edited by E. T. Schubert, P. J. Windley,
and J. Alves-Foss, Lecture Notes in Computer Science, Volume 971 (Springer-Verlag,
1995), pp. 75-89.
R. J. Boulton, “Efficiency in a Fully-Expansive Theorem Prover”, Ph.D. Dissertation,
Technical Report Number 337, University of Cambridge Computer Laboratory (May
1994).
Formal Verification: SAT Solvers
J. Marques-Silva and K. Sakallah, “GRASP: A Search Algorithm for Propositional
Satisfiability”, IEEE Transactions on Computers, 48(5):506-521, May 1999
M. Sheeran and G. Stalmarck, “A tutorial on Stalmarck’s proof procedure for
propositional logic”, Formal Methods in System Design, 16(1):23-58, January 2000
M.W. Moskewicz et al., “Chaff: engineering an efficient SAT solver,” in Proc. Design
Automation Conf., June 2001.
E. Goldberg and Y. Novikov, “BerkMin: a Fast and Robust Sat-Solver”, Proceedings of
Design Automation and Test in Europe, pp.142-149, March 2002.
Formal Verification: BDDs
R. E. Bryant, “Graph Based Algorithms for Boolean Function Manipulation”, IEEE
Transactions on Computers, Vol. C-35-8, pp. 677-691, August 1986
R. Rudell, “Dynamic Variable Ordering for Ordered Binary Decision Diagrams”,
Proceedings of ICCAD 1993, pp. 42-45
R. E. Bryant, “On the Complexity of VLSI Implementations and Graph Representations
of Boolean Functions with Application to Integer Multiplication”, IEEE Transactions on
Computers, Vol. 40, No. 2, pp. 205-213, February 1991
B. Bollig and I. Wegener, “Improving the variable ordering for OBDDs is NP-complete”,
IEEE Transactions on Computers, Vol. 45, No. 9, pp. 993-1002, September 1996
M. Fujita, H. Fujisawa and Y. Matsunaga, “Variable Ordering Algorithms for Ordered
Binary Decision Diagrams and their Evaluation”, IEEE Transactions on Computer-Aided
Design of Integrated Circuits and Systems, Vol. 12, No. 1, pp. 6-12, January 1993
R. Bryant, “Symbolic Boolean Manipulation with Ordered Binary Decision Diagrams”,
slides from a presentation by Prof. Bryant in August 1999
Formal Verification: Symbolic Model Checking (BDDs)
D. L. Dill, “The Murφ Verification System”, pp. 390-393, CAV96.
R. Bryant, E. Clarke, K. McMillan and A. Emerson, “Binary Decision Diagrams and
Symbolic Model Checking”, slides from a presentation at Symposium on Algorithms
in the Real World, May, 2000
E. M. Clarke, E. A. Emerson and A. P. Sistla, “Automatic Verification of Finite-State
Concurrent Systems Using Temporal Logic Specifications”, ACM Trans. on
Programming Languages and Systems, Vol. 8, No. 2, pp. 244-263, April 1986.
J. Burch, E. Clarke, D. Long, K. McMillan, D. Dill, “Symbolic Model Checking for
Sequential Circuit Verification”, IEEE Trans. Computer-Aided Design, 13, 1994, pp. 401-424.
C. Kern and M. Greenstreet, "Formal Verification in Hardware Design: A Survey", ACM
Transactions on Design Automation of Electronic Systems, Vol. 4, April 1999, pp. 123-193.
H. Iwashita and T. Nakata, “Forward Model Checking Techniques Oriented to Buggy
Designs”, Proceedings of ICCAD, pp. 400-404, 1997.
K. Takayama, T. Satoh, T. Nakata, and F. Hirose, “An Approach to Verify a Large Scale
System-on-a-chip Using Symbolic Model Checking”, Proceedings of ICCD, 1998.
A. J. Hu, “Formal Hardware Verification with BDDs: An Introduction”, ACM
Transactions on Programming Languages and Systems, 1997.
M. Kaufmann, A. Martin, and C. Pixley, "Design Constraints in Symbolic Model
Checking", Proc. CAV-98, pp. 477-487, 1998.
K. L. McMillan, "Fitting formal methods into the design cycle", Proceedings of the 31st
Design Automation Conference, pp. 314-319, 1994
Formal Verification with SAT solvers
P. Abdulla, P. Bjesse and N. Een, “Symbolic Reachability Analysis Based on SAT
Solvers”, Proceedings of Tools and Algorithms for the Construction and Analysis of
Systems (TACAS), pp. 411-425, March 2000
P. Williams et al., “Combining Decision Diagrams and SAT Procedures for Efficient
Symbolic Model Checking”, Proc. of the 12th CAV, pp. 124-138, July 2000
M. Sheeran, S. Singh and G. Stalmarck, “Checking Safety Properties Using Induction
and a SAT Solver”, Proc. of the 3rd Intl. Conference on Formal Methods in CAD
P. Bjesse and K. Claessen, “SAT-based Verification without State Space Traversal”,
Proc. of the 3rd FMCAD, pp. 372-389, Nov. 2000
K.L. McMillan, “Applying SAT Methods in Unbounded Model Checking”, Proc. of the
14th CAV, pp. 250-264, July 2002
K.L. McMillan, “Interpolation and SAT-based Model Checking”, Proc. of the 15th CAV,
July 2003
I. Park and H. Kang, “SAT-based unbounded symbolic model checking”,
Proceedings of DAC, pp. 840-843, June 2003
Equivalence Checking
D. Brand, “Verification of Large Synthesized Designs”, Proceedings of ICCAD,
November 1993, pp. 534-537
A. Kuehlmann and F. Krohm, “Equivalence Checking using Cuts and Heaps”,
Proceedings of DAC 1997, pp. 263-268
Y. Matsunaga, “An Efficient Equivalence Checker for Combinational Circuits”,
Proceedings of DAC 1996, pp. 629-634
R. Mukherjee, J. Jain, K. Takayama, M. Fujita, J. Abraham and D. Fussell, “An
Efficient Filter Based Approach for Combinational Verification”, IEEE Transactions on
Computer-Aided Design
H. Cho and C. Pixley, “Apparatus and Method for deriving correspondences between
storage elements of a first circuit model and storage elements of a second circuit
model”, U.S. Patent 5,638,381, June 1997
K. Ng, M. Prasad, R. Mukherjee and J. Jain, “Solving the Latch Mapping Problem in
an Industrial Setting”, Proceedings of DAC 2003, pp. 442-447
C.A.J. van Eijk, “Sequential Equivalence Checking Based on Structural Similarities”,
IEEE Transactions on CAD, 19(7):814-819, July 2000
S.-Y. Huang, K.-T. Cheng, K.-C. Chen and F. Brewer, “AQUILA: An Equivalence
Checking System for Large Sequential Circuits”, IEEE Transactions on Computers,
49(5), May 2000
Semi-Formal Verification
R. E. Bryant, “Symbolic Simulation -- Techniques and Applications”, Proceedings of
DAC 1990
R.B. Jones, “Applications of Symbolic Simulation to the Formal Verification of
Microprocessors”, Ph.D. Thesis, Computer Systems Laboratory, Stanford
University, August 1999
R.E. Bryant and C.J.H. Seger, “Formal hardware verification by symbolic trajectory
evaluation,” in Proc. Design Automation Conf., June 1991.
C.-J.H. Seger and R.E. Bryant, “Formal Verification by Symbolic Evaluation of
Partially Ordered Trajectories”, Formal Methods in System Design, Vol. 6, No. 2, pp.
147-190, 1995
E. Clarke, A Biere, R. Raimi and Y. Zhu, “Bounded Model Checking Using
Satisfiability Solving”, Formal Methods in System Design, 19(1):7-34, July 2001
O. Shtrichman, “Tuning SAT Checkers for bounded model checking,” in Proc. Int.
Conf. on Computer-Aided Verification, July 2000.
F. Fallah, “Binary Time-Frame Expansion”, Proc. of ICCAD, pp. 458-464, Nov. 2002
A. Gupta et al., “Learning from BDDs in SAT-based bounded model checking,” in
Proc. Design Automation Conf., June 2003.
V. Boppana, S. Rajan, K. Takayama and M. Fujita, “Model Checking Based on
Sequential ATPG”, Proceedings of CAV, pp. 418-430, July 1999.
J. Abraham, V. Vedula and D. Saab, “Verifying Properties Using Sequential ATPG”,
Proceedings of International Test Conference, October 2002
D. Dill and S. Tasiran, "Simulation meets Formal Verification?", slides from a presentation.
D. Dill, "Alternative Approaches to Formal Verification (Symbolic Simulation)", slides
from a presentation at CAV 1999.
C. H. Yang and D. L. Dill, “Validation with Guided Search of the State Space”,
Proceedings of DAC 1998
M. Ganai et al., “SIVA: A System for Coverage Directed State Space Search”, in
Journal of Electronic Testing: Theory and Applications (JETTA), February 2001
P.-H. Ho et al., “Smart Simulation Using Collaborative Formal and Simulation
Engines”, Proceedings of ICCAD, pp. 120-126, November 2000
Y. Hoskote, T. Kam, P. Ho and X. Zhao, “Coverage Estimation for Symbolic Model
Checking”, Proc. of the 36th Design Automation Conference, New Orleans, June
1999.
S. Katz, O. Grumberg and D. Geist, “Have I written enough properties? - A method of
comparison between specification and implementation”, Technical Report, IBM Haifa
Research Laboratory, Haifa, Israel, 1999.
Model Checking in Practice

S. Berezin, S. Campos and E. M. Clarke, “Compositional Reasoning in Model
Checking”, Technical Report CMU-CS-98-106, School of Computer Science,
Carnegie Mellon University, February, 1998.

K. L. McMillan, "A compositional rule for hardware design refinement", Computer
Aided Verification (CAV97), O. Grumberg (Ed.), Haifa, Israel, pp. 24-35, 1997.

T. A. Henzinger, S. Qadeer, and S. K. Rajamani, "You Assume, We Guarantee:
Methodology and Case Studies", CAV98: Computer Aided Verification, Lecture Notes in
Computer Science, Springer-Verlag, pp. 440-451, 1998.

G. Mosensoson, “Practical Approaches to SoC Verification”, DATE 2000.

J. L. Nielsen, H. R. Andersen, G. Behrmann, H. Hulgaard, K. Kristoffersen and K. G.
Larsen, “Verification of Large State/Event Systems using Compositionality and
Dependency Analysis”, Proceedings of TACAS 1998, LNCS 1384, April 1998.

D. Geist, G. Biran, T. Arons, M. Slavkin, Y. Nustov, M. Farkas, and K. Holtz, “A
Methodology for the verification of a System on Chip”, Proc. of DAC 1999, pp. 574-579.

A. Evans, A. Silburt, G. Vrckovnik, T. Brown, M. Dufresne, G. Hall, T. Ho and Y. Liu,
“Functional Verification of Large ASICs”, Proc. of DAC, 1998.

S. Taylor, M. Quinn, D. Brown, N. Dohm, S. Hildebrandt, J. Higgins and C. Ramey,
“Functional Verification of a Multiple-issue, Out-of-Order, Superscalar Alpha Processor
– the DEC Alpha 21264 Microprocessor”, Proc. of DAC, 1998.

T. Henzinger, X. Liu, S. Qadeer and S. Rajamani, “Formal Specification and Verification
of a Dataflow Processor Array”, Proceedings of ICCAD 1999

D. Wang, “SAT based Abstraction Refinement for Hardware Verification”, Ph.D. thesis,
ECE Dept., Carnegie Mellon University, May 2003

Shankar G. Govindaraju, David L. Dill, Alan J. Hu, and Mark A. Horowitz. "Approximate
Reachability with BDDs using Overlapping Projections,” in Proc. DAC 1998

Gaurishankar Govindaraju, "Approximate Symbolic Model Checking using
Overlapping Projections", Ph.D. thesis, Stanford University, August 2000.
Simulation Based Verification
I. Ghosh and M. Fujita, “Automatic test pattern generation for functional register-transfer
level circuits using assignment decision diagrams”, IEEE Trans. on Computer-Aided
Design, Vol. 20, No. 3, pp. 402-415, March 2001.
F. Fallah, S. Devadas, and K. Keutzer, “OCCOM: Efficient computation of observability-based
code coverage metrics for functional verification”, IEEE Trans. on Computer-Aided
Design, Vol. 20, No. 8, pp. 1003-1015, Aug. 2001.
F. Fallah, S. Devadas, and K. Keutzer, “Functional vector generation for HDL models
using linear programming and 3-satisfiability”, IEEE Trans. on Computer-Aided
Design, Vol. 20, No. 8, pp. 994-1002, Aug. 2001.
S. Ravi, G. Lakshminarayana, and N.K. Jha, “TAO: Regular expression based high-level
testability analysis and optimization”, in Int. Test Conf., pp. 331-340, Oct. 1998.
I. Ghosh and S. Ravi, “On automatic generation of RTL validation test benches using
circuit testing techniques,” in Proc. Great Lakes Symposium on VLSI, April 2003.
L. Zhang, I. Ghosh and M. Hsiao, “Efficient sequential ATPG for functional RTL circuits,”
in Proc. International Test Conference, Oct. 2003.
L. Zhang, I. Ghosh and M. Hsiao, “Automatic design validation framework for HDL
descriptions via RTL ATPG,” in Proc. Asian Test Symposium, Nov. 2003
F. Fallah, I. Ghosh, and M. Fujita, “Event driven observability enhanced coverage analysis
of C programs for functional validation,” in Proc. Asia and South Pacific Design
Automation Conference, Jan. 2003.
K. Ara and K. Suzuki, “A proposal for transaction level verification with component
wrapper language,” Proc. Design Automation and Test in Europe, March 2003.
A. U. Shankar, "An Introduction to Assertional Reasoning for Concurrent Systems", ACM
Computing Surveys, Sept. 1993, Vol 25, No. 3, pp. 225-262
S. Ramesh and P. Bhaduri, “Validation of Pipelined Processor Designs using Esterel
Tools: A Case Study”, Proc. of CAV '99, LNCS Vol. 1633, 1999.

C. Eisner et al., “The Temporal Logic Sugar”, in Proc. Int. Conf. on Computer-Aided
Verification 2001.
Miscellaneous
J. Jain, W. Adams and M. Fujita, “Sampling Schemes for Computing OBDD Variable
Orderings”, Proceedings of ICCAD 1998, pp. 331-338
J. Jain, R. Mukherjee and M. Fujita, “Advanced Verification Technique Based on
Learning”, Proceedings of DAC 1995, pp. 420-426
A. Jain, “Formal Hardware Verification by Symbolic Trajectory Evaluation”, Ph.D.
Thesis, Dept. of Electrical and Computer Engineering, Carnegie Mellon University,
August 1997
Important web-sites
http://www.comlab.ox.ac.uk/archive/formal-methods.html
http://www.csl.sri.com
http://godel.ece.utexas.edu/texas97-benchmarks/
http://citeseer.nj.nec.com/
http://www.rational.com/uml (Unified Modeling Language HOME-PAGE)
http://www-sop.inria.fr/meije/verification/esterel
http://www.accellera.org
http://www.systemC.org
http://www.specC.org
Conference Proceedings

Computer Aided Verification (CAV)

Formal Methods in Computer Aided Design (FMCAD)

International Conference on Computer-Aided Design (ICCAD)

International Conference on Computer Design (ICCD)

Design Automation Conference (DAC)

Asia South Pacific Design Automation Conference (ASPDAC)

International Conference on VLSI Design (VLSI)

Advanced Research Working Conference on Correct Hardware
Design and Verification Methods (CHARME)
Journals/Magazines

IEEE Design and Test of Computers

IEEE Transactions on Computer-Aided Design of Integrated
Circuits and Systems

IEEE Transactions on Computers

IEEE Transactions on VLSI Systems

ACM Transactions on Design Automation of Electronic
Systems

Formal Methods in System Design

Formal Aspects of Computing