CSE 497A
Spring 2002
Functional Verification
Lecture 2/3
Vijaykrishnan Narayanan
CSE497A Lecture 2.1
© Vijay, PSU, 2000
Course Administration
 Instructor
Vijay Narayanan
([email protected])
229 Pond Lab
Office Hours: T 10:00-11:00; W 1:00-2:15
 Tool Support: Jooheung Lee ([email protected])
 TA
TBA
 Laboratory
101 Pond Lab
 Materials
www.cse.psu.edu/~vijay/verify/
 Texts
» J. Bergeron. Writing Testbenches: Functional Verification of HDL
Models. Kluwer Academic Publishers.
» Class notes - on the web
Grading
 Grade breakdown
» Midterm Exam:
20%
» Final Exam:
25%
» Verification Projects (~4):
40%
» Homework (~3)
15%
 No late homework/project reports will be accepted
 Grades will be posted on course home page
» Written/email request for changes to grades
» April 25 deadline to correct scores
Secret of Verification
(Verification Mindset)
The Art of Verification
Two simple questions:
1. Am I driving all possible input scenarios?
2. How will I know when it fails?
Three Simulation Commandments
1. Thou shalt stress thy logic harder than it will ever be stressed again
2. Thou shalt place checking upon all things
3. Thou shalt not move onto a higher platform until the bug rate has dropped off
General Simulation Environment
[Diagram] Testcases (written in C/C++, HDL testbenches, Specman e, or Synopsys VERA) pass through a testcase driver/compiler (not always required) into the environment, which handles initialization and run-time requirements. The design source (VHDL, Verilog) is compiled into a model by an event simulation compiler, cycle simulation compiler, emulator compiler, etc. The simulator (event simulator, cycle simulator, or emulator) runs the model in the environment and produces the output data: the testcase results.
[Diagram] Verification flow and roles:
» Verification engineer: configure the environment, run foreground/background simulation, specify and monitor batch simulation, view traces, debug fails, create/answer/redirect defects, transfer testcases
» Logic designer: debug fails, verify defect fixes
» Environment developer: debug and release the environment
» Model builder: release models, regress fails
» Project manager: define project goals, report project status
Some lingo
Facilities: a general term for named wires (or
signals) and latches. Facilities feed gates
(and/or/nand/nor/invert, etc) which feed other
facilities.
 EDA: Electronic Design Automation -- tool vendors.
More lingo
Behavioral: Code written to perform the function of
logic on the interface of the design-under-test
 Macro: 1. A behavioral model 2. A piece of logic
 Driver: Code written to manipulate the inputs of the
design-under-test. The driver understands the
interface protocols.
 Checker: Code written to verify the outputs of the
design-under-test. A checker may have some
knowledge of what the driver has done. A checker
must also verify interface protocol compliance.
Still more lingo
Snoop/Monitor: Code that watches interfaces or
internal signals to help the checkers perform
correctly. Also used to help drivers be more devious.
 Architecture: Design criteria as seen by the
customer. The design's architecture is specified in
documents (e.g. POPS, Book 4, Infiniband, etc), and
the design must be compliant with this specification.
 Microarchitecture: The design's implementation.
Microarchitecture refers to the constructs that are
used in the design, such as pipelines, caches, etc.
 Escape: An error that appears on the test floor, having
escaped verification
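The driver, checker, and scoreboard roles above can be sketched concretely. The following Python sketch is illustrative only (all class names and the toy byte-inverting DUT are invented, not taken from any real environment): the driver manipulates the DUT's inputs and posts predictions to a scoreboard, and the checker verifies the outputs against those predictions.

```python
# Illustrative sketch of driver / checker / scoreboard roles.
# The "DUT" here is a toy design that inverts each byte it receives.

class Scoreboard:
    """Holds expected results posted by the driver for the checker."""
    def __init__(self):
        self.expected = []

    def post(self, value):
        self.expected.append(value)

class Driver:
    """Manipulates DUT inputs; knows the interface protocol (here: one
    byte per call) and posts predicted results to the scoreboard."""
    def __init__(self, dut, scoreboard):
        self.dut, self.scoreboard = dut, scoreboard

    def send(self, byte):
        self.scoreboard.post(byte ^ 0xFF)   # predicted output
        return self.dut(byte)               # drive the input

class Checker:
    """Verifies DUT outputs against the scoreboard's predictions."""
    def __init__(self, scoreboard):
        self.scoreboard = scoreboard
        self.failures = []

    def check(self, actual):
        expected = self.scoreboard.expected.pop(0)
        if actual != expected:
            self.failures.append((expected, actual))

def run_test(dut, stimulus):
    sb = Scoreboard()
    drv, chk = Driver(dut, sb), Checker(sb)
    for byte in stimulus:
        chk.check(drv.send(byte))
    return chk.failures

good_dut = lambda b: b ^ 0xFF          # correct implementation
buggy_dut = lambda b: b ^ 0xFE         # bug: drops the low bit
```

With this structure, a correct DUT passes cleanly while the buggy one is flagged on every transaction, because the checker compares against an independent prediction rather than the DUT itself.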
Typical Verification Diagram
[Diagram] A bridge-chip DUT sits on a bus. Stimulus: gen packet → drive packet → post packet (structs with header and payload). Checking framework: a scoreboard with translate (xlate) and predict functions feeding the checking. Coverage data is collected on the stimulus (types, latency, addresses, sequences), the device (FSMs, conditions, transitions), and transactions at the conversation, sequence, packet, and protocol levels.
Verification Cycle
[Diagram] Create testplan → develop environment → debug hardware → regression → fabrication → hardware debug → escape analysis (and back to the testplan)
Verification Testplan
 Team leaders work with design leaders to create a
verification testplan. The testplan includes:
Schedule
Specific tests and methods by simulation level
Required tools
Input criteria
Completion criteria
What is expected to be found with each test/level
What's not covered by each test/level
Verification is a process used to
demonstrate the functional
correctness of a design. Also
called logic verification or
simulation.
Reconvergence Model
 Conceptual representation of the verification process
 Most important question
– What are you verifying?
[Diagram] A transformation path and a verification path diverge from a common origin and reconverge.
What is a testbench?
 A “testbench” usually refers to the code used to create a predetermined input sequence to a design, then optionally observe the
response.
» Generic term used differently across industry
» Always refers to a testcase
» Most commonly (and appropriately), a testbench refers to code written
(VHDL, Verilog, etc) at the top level of the hierarchy. The testbench is often
simple, but may have some elements of randomness
 Completely closed system
» No inputs or outputs
» effectively a model of the universe as far as the design is concerned.
 Verification challenge:
» What input patterns to supply to the Design Under Verification and what is
expected for the output for a properly working design
 Show Multiplexer Testbench
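The multiplexer demo itself is not reproduced here, but the two questions from earlier (am I driving all possible input scenarios, and how will I know when it fails?) can be illustrated with a small sketch. This is a hypothetical 2:1 mux in Python, not the lecture's actual VHDL: the testbench drives every input combination and compares each output against an independent prediction.

```python
# Exhaustive testbench sketch for a toy 2:1 multiplexer (illustrative).

def mux2to1(a, b, sel):
    """Design under verification: returns a when sel == 0, else b."""
    return b if sel else a

def mux_testbench(dut):
    """Drive all input scenarios; a mismatch against the predicted
    value is how we know the design fails."""
    failures = []
    for a in (0, 1):
        for b in (0, 1):
            for sel in (0, 1):
                expected = b if sel else a      # independent prediction
                actual = dut(a, b, sel)
                if actual != expected:
                    failures.append((a, b, sel, actual))
    return failures
```

For eight input combinations exhaustive driving is trivial; the rest of the lecture is about what to do when it is not.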
Importance of Verification
 Most books focus on syntax, semantics and RTL subset
» Given the amount of literature on writing synthesizable code vs.
writing verification testbenches, one would think that the former is the
more daunting task. Experience proves otherwise.
 70% of design effort goes to verification
» Properly staffed design teams have dedicated verification engineers.
» Verification engineers usually outnumber designers 2-to-1
 80% of all written code is in the verification environment
The Line Delete Escape
Escape: A problem that is found on the test
floor and therefore has escaped the
verification process
 The Line Delete escape was a problem on the H2
machine (S/390 bipolar, 1991)
 The escape shows an example of how a verification
engineer needs to think
The Line Delete Escape
(pg 2)
 Line Delete is a method of circumventing bad cells
of a large memory array or cache array
 An array mapping allows defective cells to be
removed from the usable space
The Line Delete Escape
(pg 3)
If a line in an array has multiple bad
bits (a single bit usually goes
unnoticed due to ECC-error
correction codes), the line can be
taken "out of service".
In the example array, row 05 has a bad
congruence class entry.
The Line Delete Escape
(pg 4)
Data enters ECC creation logic prior to storage into
the array. When read out, the ECC logic corrects
single-bit errors, tags Uncorrectable Errors
(UEs), and increments a counter corresponding to
the row and congruence class.
The Line Delete Escape
(pg 5)
When a preset threshold of UEs is detected from
an array cell, the service controller is informed that a
line delete operation is needed.
The Line Delete Escape
(pg 6)
The service controller can update the configuration
registers, ordering a line delete to occur. When the
configuration registers are written, the line delete
controls are engaged and writes to row 05,
congruence class 'C' cease.
However, because three other cells remain good in
this congruence class, the sole repercussion of the
line delete is a slight decline in performance.
The Line Delete Escape
(pg 7)
How would we test this logic?
What must occur in the testcase?
What checking must we implement?
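One way to approach these questions is to model the mechanism and write the testcase against the model first. The sketch below is illustrative Python (the class names and the threshold value are assumptions, not the H2 design): the testcase injects UEs until the threshold is crossed, then checks that writes to the deleted entry cease while the rest of the congruence class stays usable.

```python
# Illustrative model of the line-delete mechanism described above.
# THRESHOLD and all names are invented for the sketch.

THRESHOLD = 3

class LineDeleteArray:
    def __init__(self):
        self.data = {}        # (row, congruence class) -> stored value
        self.ue_count = {}    # per-entry UE counters
        self.deleted = set()  # entries taken out of service

    def write(self, row, cc, value):
        if (row, cc) in self.deleted:
            return False      # writes to a deleted line cease
        self.data[(row, cc)] = value
        return True

    def report_ue(self, row, cc):
        """ECC logic calls this when an uncorrectable error is read."""
        n = self.ue_count.get((row, cc), 0) + 1
        self.ue_count[(row, cc)] = n
        if n >= THRESHOLD:    # service controller engages the
            self.deleted.add((row, cc))   # line delete controls
```

The testcase must therefore (1) create enough UEs on one entry to cross the threshold, and the checking must verify (2) that the deleted entry is no longer written and (3) that the other entries of the congruence class still function, with only a performance repercussion.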
Verification is on the
critical path
Want to minimize
Verification Time!
Ways to reduce verification time
 Verification can be reduced through:
» Parallelism: Add more resources
» Abstraction: Higher level of abstraction (e.g. C vs.
assembly)
– Beware though – this means a reduction in control
» Automation: Tools to automate standard
processes
– Requires standard processes
– Not all processes can be automated
Hierarchical Design
System
Chip
...
Unit
Macro
Allows the design team to break the system down into
logical and comprehensible components.
Also allows for repeatable components.
Ways to reduce verification time
 Verification can be reduced through:
» Parallelism: Add more resources
» Abstraction: Higher level of abstraction (e.g. C vs.
assembly)
– Beware though – this means a reduction in
control/additional training
– Vera, e are examples of verification languages
» Automation: Tools to automate standard
processes
– Requires standard processes
– Not all processes can be automated
Human Factor in Verification
Process
 An individual (or group
of individuals) must
interpret specification
and transform into
correct function.
[Diagram] Specification → interpretation → RTL coding and verification.
Need for Independent Verification
 The verification engineer should not be an
individual who participated in the logic design of
the DUT
Blinders: If a designer didn't think of a failing scenario when
creating the logic, how will he/she create a test for that case?
However, a designer should do some verification on his/her
design before exposing it to the verification team
 The independent verification engineer needs to
understand the intended function and the
interface protocols, but not necessarily the
implementation
Verification Do's and Don'ts

DO:
Talk to designers about the function and
understand the design first, but then
Try to think of situations the designer might
have missed
Focus on exotic scenarios and situations
– e.g. try to fill all queues even though the design is
done in a way to avoid any buffer-full
conditions
Focus on multiple events at the same time
Verification Do's and Don'ts
(continued)
Try everything that is not explicitly forbidden
Spend time thinking about all the pieces that you
need to verify
Talk to "other" designers about the signals that
interface to your design-under-test
Don't:
Rely on the designer's word for input/output
specification
Allow RIT Criteria to bend for sake of schedule
Ways to reduce human-introduced errors
 Automation
» Take human intervention out of the process
 Poka-Yoke
» Make human intervention fool-proof
 Redundancy
» Have two individuals (or groups) check each
others work
Automation
 Obvious way to eliminate human-introduced
errors – take the human out.
» Good in concept
» Reality dictates that this is not feasible
– Processes are not defined well enough
– Processes require human ingenuity and creativity
Poka-Yoke
 Term coined in Total Quality Management
circles
 Means to “mistake-proof” the human
intervention
 Typically the last step in complete automation
 Same pitfalls as automation – verification
remains an art; it does not yield itself to well-defined steps.
Redundancy
 Duplicate every transformation
» Every transformation made by a human is either:
– Verified by another individual
– Two complete and separate transformations are
performed with each outcome compared to verify that
both produced the same or equivalent result
 Simplest approach
 Most costly, but still cheaper than redesign
and replacement of a defective product
 Designer should NOT be in charge of
verification!
What is being verified?
 Choosing a common origin and
reconvergence points determines what is
being verified and what type of method to
use.
 Following types of verification all have
different origin and reconvergence points:
» Formal Verification
» Model Checking
» Functional Verification
» Testbench Generators
Formal Verification
 Once the end points of formal verification
reconvergence paths are understood, then
you know exactly what is being verified.
 2 Types of Formal:
» Equivalence
» Model Checking
Equivalence Checking
 Compares two models to see if they are equivalent
» Netlists before and after modifications
» Netlist and RTL code (verify synthesis)
» RTL and RTL (HDL modifications)
» Post-synthesis gates to post-PD gates
– Adding of scan latches, clock tree buffers
 Proves mathematically that the origin and output are
logically equivalent
» Compares boolean and sequential logic functions, not
mapping of the functions to a specific technology
 Why do verification of an automated synthesis tool?
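In miniature, equivalence checking can be illustrated by exhaustively comparing two descriptions of the same 1-bit full adder. This is a sketch only: real tools prove equivalence mathematically with BDD/SAT techniques rather than enumeration, which is only feasible here because the input cone is tiny.

```python
# Toy equivalence check: compare a behavioral ("RTL") description of a
# full adder against a structural ("gate-level") one over all inputs.
from itertools import product

def rtl_full_adder(a, b, cin):
    """Behavioral description: arithmetic view."""
    s = a + b + cin
    return s & 1, s >> 1                   # (sum, carry-out)

def gate_full_adder(a, b, cin):
    """Structural description: XOR/AND/OR gates, as after synthesis."""
    p = a ^ b
    total = p ^ cin
    cout = (a & b) | (p & cin)
    return total, cout

def equivalent(f, g, n_inputs):
    """True iff f and g agree on every input combination."""
    return all(f(*bits) == g(*bits)
               for bits in product((0, 1), repeat=n_inputs))
```

Note that the check compares boolean functions only; as the slide says, it proves nothing about timing or technology mapping.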
Equivalence Reconvergence Model
[Diagram] RTL → (synthesis) → Gates; the equivalence check reconverges the gates back to the RTL origin.
Model Checking
 Form of formal verification
 Characteristics of a design are formally proven or
disproved
» Find unreachable states of a state machine
» Determine whether deadlock conditions will occur
» Example: if ALE is asserted, either the DTACK or ABORT
signal will be asserted
 Looks for generic problems or violations of user
defined rules about the behavior of the design
» Knowing which assertions to prove is the major difficulty
Steps in Model Checking
 Model the system implementation using a finite state
machine
 Express the desired behavior as a set of temporal-logic
formulas
 The model checking algorithm scans all possible states
and execution paths in an attempt to find a counterexample to the formulas
 Check these rules
» Prove that all states are reachable
» Prove the absence of deadlocks
 Unlike simulation-based verification, no test cases
are required
Problems with Model Checking
 Automatic verification becomes hard with an increasing
number of states
 10^100 states (larger than the number of protons in the
universe) still corresponds to no more than about 300 bits of
state variables
 Absurdly small for the millions of transistors in current
microprocessors
 Symbolic model checking explores a larger set of
states concurrently.
 IBM Rulebase (Feb 7) is a symbolic Model Checking
tool
Model Checking Reconvergence Model
[Diagram] Specification → interpretation → RTL and assertions; model checking reconverges the RTL with the assertions.
Functional Verification
 Verifies design intent
» Without it, one must trust that the transformation of
a specification to RTL was performed correctly
 Can prove the presence of bugs, but cannot prove
their absence
Functional Reconvergence Model
[Diagram] Specification → RTL via design; functional verification reconverges the RTL with the specification.
Testbench Generators
 Tool to generate stimulus to exercise code or
expose bugs
 Designer input is still required
 RTL code is the origin and there is no
reconvergence point
 Verification engineer is left to determine if the
testbench applies valid stimulus
 If used with parameters, can control the
generator in order to focus the testbenches
on more specific scenarios
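A parameterized generator can be sketched as follows (the function and parameter names are invented): random stimulus is produced under constraints, and the parameters are the knobs that focus the testbenches on more specific scenarios. Whether the stimulus is valid remains the verification engineer's judgment, as noted above.

```python
# Sketch of a parameterized random stimulus generator (illustrative).
import random

def generate_packets(n, seed=0, kinds=("read", "write"),
                     max_len=16, addr_bits=8):
    """Produce n random packets; kinds/max_len/addr_bits are the
    parameters used to focus the generator on specific scenarios."""
    rng = random.Random(seed)              # seeded => reproducible fails
    packets = []
    for _ in range(n):
        packets.append({
            "kind": rng.choice(kinds),
            "addr": rng.randrange(2 ** addr_bits),
            "len":  rng.randint(1, max_len),
        })
    return packets

# Focusing on a specific scenario: only writes into a tiny address
# space, which stresses address collisions.
focused = generate_packets(100, seed=1, kinds=("write",), addr_bits=2)
```

Seeding the generator matters in practice: a failing testbench can be regenerated exactly for debug.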
Testbench Generation
Reconvergence Model
[Diagram] RTL → testbench generation → testbench; code coverage/proof metrics reconverge the testbench with the RTL.
Functional Verification
Approaches
 Black-Box Approach
 White-Box Approach
 Grey-Box Approach
Black-Box
[Diagram] Inputs → (some piece of logic design written in VHDL) → Outputs
• The black box has inputs, outputs, and performs some function.
• The function may be well documented...or not.
• To verify a black box, you need to understand the function and
be able to predict the outputs based on the inputs.
• The black box can be a full system, a chip, a unit of a chip, or a
single macro.
•Can start early
White-Box
 White box verification means that the internal
facilities are visible and utilized by the
testbench stimulus.
» Quickly set up interesting cases
» Tightly integrated with implementation
» Changes with implementation
 Examples: Unit/Module level verification
Grey-Box
 Grey box verification means that a limited
number of facilities are utilized in a mostly
black-box environment.
 Example: Most environments! Prediction of
correct results on the interface is occasionally
impossible without viewing an internal signal.
Perfect Verification
 To fully verify a black-box, you must show that
the logic works correctly for all combinations
of inputs.
This entails:
» Driving all permutations on the input lines
» Checking for proper results in all cases
 Full verification is not practical on large
pieces of designs, but the principles are valid
across all verification.
Reality Check

Macro verification across an entire system is not
feasible for the business
There may be over 400 macros on a chip, which
would require about 200 verification engineers!
That number of skilled verification engineers does not
exist
The business can't support the development expense

Verification Leaders must make reasonable trade-offs
Concentrate on Unit level
Designer level on riskiest macros
Typical Bug rates per level
Cost of Verification
 Necessary Evil
» Always takes too long and costs too much
» Verification does not generate revenue
 Yet indispensable
» To create revenue, design must be functionally
correct and provide benefits to customer
» Proper functional verification demonstrates
trustworthiness of the design
Verification And Design Reuse
 Won’t use what you don’t trust.
 How to trust it?
» Verify It.
 For reuse, designs must be verified with more
strict requirements
» All claims, possible combinations and uses must
be verified.
» Not just how it is used in a specific environment.
When is Verification Done?
 Never truly done on complex designs
 Verification can only show presence of errors,
not their absence
 Given enough time, errors will be uncovered
 Question – Is the error likely to be severe
enough to warrant the effort spent to find the
error?
When is Verification Done?
(Cont)
 Verification is similar to statistical hypothesis testing.
– Hypothesis: the design is functionally correct.
Hypothesis Matrix
» Errors found on a bad design: correct outcome
» Errors reported on a good design: Type I (false negative)
» No errors found on a bad design: Type II (false positive)
» No errors found on a good design: correct outcome
Tape-Out Criteria

Checklist of items that must be completed
before tape-out
Verification items, along with
Physical/Circuit design criteria, etc
Verification criteria are based on
– Function tested
– Bug rates
– Coverage data
– Clean regression
– Time to market
Verification vs. Test
 The two are often confused
 Purpose of test is to verify that the design
was manufactured properly
 Verification is to ensure that the design meets
the functional intent
Verification and Test
Reconvergence Model
[Diagram] Specification → (HW design) → netlist → (fabrication) → silicon; verification reconverges the netlist with the specification, and test reconverges the silicon with the netlist.
Verification Tools
 Automation improves the efficiency and
reliability of the verification process
 Some tools, such as a simulator, are
essential. Others automate tedious tasks and
increase confidence in the outcome.
 It is not necessary to use all the tools.
Verification Tools
 Improve efficiency e.g. spell checker
 Improve reliability
 Automate portion of the verification process
 Some tools such as simulators are essential
 Some tools automate the most tedious tasks
and increase the confidence in outcome
» Code coverage tool
» Linting tools
» Help ensure that a Type II mistake does not occur
Verification Tools
 Linting Tools
 Simulators
 Third Party Models
 Waveform Viewers
 Code Coverage
 Verification Languages (Non-RTL)
 Revision Control
 Issue Tracking
 Metrics
Linting Tools
 Lint is a UNIX C utility program
» Parses a C program
» Reports questionable uses
» Identifies common mistakes
» Makes finding those mistakes quick and easy
 Lint-identified problems
» Mismatched types
» Mismatched arguments in function calls – either
number or type
The UNIX C lint program
 Attempts to detect features in C program files that are
likely to be bugs, non-portable or wasteful
 Checks type usage more strictly than a compiler
 Checks for
» Unreachable statements
» Loops not entered at the top
» Variables declared but not used
» Logical expressions whose value is constant
» Functions that return values in some places but not others
» Functions called with a varying number or type of args
» Functions whose value is not used
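lint itself analyzes C, but the same static-analysis idea can be sketched in a few lines (illustrative only, not the real lint tool): find variables that are assigned but never used in a piece of Python source, entirely statically, without executing the code.

```python
# Lint-like static check in miniature: report "variables declared but
# not used" by walking the abstract syntax tree of the source.
import ast

def unused_variables(source):
    tree = ast.parse(source)
    assigned, used = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                assigned.add(node.id)      # name being written
            else:
                used.add(node.id)          # name being read
    return assigned - used
```

Like lint, the check needs no stimulus and no expected outputs, which is exactly the advantage claimed on the next slide.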
Advantages of Lint Tools
 Know about problems prior to execution
(simulation for VHDL code)
» Checks are entirely static
 Do not require stimulus
 Do not need to know expected output
 Can be used to enforce coding guidelines
and naming conventions
Pitfalls
 Can only find problems that can be statically
deduced
 Cannot determine if algorithm is correct
 Cannot determine if dataflow is correct
 Are often too paranoid – they err on the side of caution,
sometimes reporting errors on a good design – so the
output must be filtered
 Should check and fix problems as you go –
don’t wait until the entire model/code is complete
Linting VHDL source code
 VHDL is strongly typed and does not need
linting as much as Verilog (which can assign bit
vectors of different lengths to each other)
 An area of common problems is use of
STD_LOGIC
VHDL Example
Library ieee;
Use ieee.std_logic_1164.all;

Entity my_entity is
port (my_input: in std_logic);
End my_entity;

Architecture sample of my_entity is
signal s1: std_logic;
signal sl: std_logic;
Begin
stat1: s1 <= my_input;
stat2: s1 <= not my_input;
End sample;

Warning: file x.vhd: Signal “s1” is multiply defined
Warning: file x.vhd: Signal “sl” has no drivers
Naming Conventions
 Use a naming convention for signals with
multiple drivers
 Multiple driven signals will give warning
messages but with a naming convention can
be ignored
Cadence VHDL Lint Tool
HAL Checks
 Some of the classes of errors that the HAL tool
checks for include:
» Interface inconsistency: unconnected ports; incorrect number or type of
task/function arguments; incorrect signal assignments to input ports
» Unused or undriven variables: undriven primary outputs; unused
task/function parameters; event variables that are never triggered
» 2-state versus 4-state issues: conditional expressions that use x/z
incorrectly; case equality (===) that is treated as equality (==);
incorrect assignment of x/z values
» Expression inconsistency: unequal operand lengths; real/time values
that are used in expressions; incorrect rounding/truncation
» Case statement inconsistency: case expressions that contain x or z
logic; case expressions that are out of range; correct use of
parallel_case and full_case constructs
» Range and index errors: single-bit memory words; bit/part selects that
are out of range; ranged ports that are re-declared
Code reviews
 Objective: Identify functional and coding style
errors prior to functional verification and
simulation
 Source code is reviewed by one or more
reviewers
 Goal: Identify problems with the code that an
automated tool would not identify
Simulators
 Simulators are the most common and familiar
verification tool
 Simulation alone is never the goal of an industrial
project
 Simulators attempt to create an artificial universe that
mimics the environment that the real design will see
 Only an approximation of reality
» Digital values in std_logic have 9 values
» In reality, a signal is a continuous value between GND and Vdd
Simulators
 Execute a description of the design
 Description limited to well defined language
with precise semantics
 Simulators are not a static tool – they require the
user to set up an environment in which the
design will find itself; this setup is often
called a testbench
 Provides inputs and monitors results
Simulators
 Simulation outputs are validated externally
against design intent (specification)
 Two types:
» Event based
» Cycle based
Event Based Simulators
 Event-based simulators are driven by events
 An attempt to increase the simulated time per
unit of wall time
 Outputs are a function of inputs
» The outputs change only when the inputs do
» Moves simulation time ahead to the next time at
which something occurs
» The event is the input changing
» This event causes simulator to re-evaluate and
calculate new output
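A minimal event-driven kernel can be sketched as follows (an illustrative sketch, not a real simulator): time jumps straight to the next scheduled event, an unchanged value produces no event, and a change re-evaluates only the processes sensitive to that signal.

```python
# Toy event-driven simulation kernel (illustrative names throughout).
import heapq

class EventSim:
    def __init__(self):
        self.queue = []                    # (time, signal, value)
        self.values = {}                   # current signal values
        self.sensitivity = {}              # signal -> processes to run
        self.now = 0

    def schedule(self, time, signal, value):
        heapq.heappush(self.queue, (time, signal, value))

    def on_change(self, signal, process):
        self.sensitivity.setdefault(signal, []).append(process)

    def run(self):
        while self.queue:
            self.now, sig, val = heapq.heappop(self.queue)
            if self.values.get(sig) == val:
                continue                   # no event: value unchanged
            self.values[sig] = val
            for proc in self.sensitivity.get(sig, []):
                proc(self)                 # re-evaluate dependent logic

# An inverter process: drives 'y' whenever 'a' changes, 1 time unit later.
def inverter(sim):
    sim.schedule(sim.now + 1, "y", 1 - sim.values["a"])
```

Notice that nothing is evaluated between events: scheduling `a` at times 0 and 5 produces exactly four evaluations, regardless of how much simulated time elapses in between.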
Cycle Based Simulators
 Simulation is based on clock-cycles not events
» All combinational functions collapsed into a single operation
 Cycle based simulators contain no timing and delay
information
» Assumes the entire design meets setup and hold time for all FFs
» Timing is usually verified by static timing analyzer
 Can handle only synchronous circuits
» Only ‘event’ is active edge of clock
» All other inputs are aligned with clock (cannot handle
asynchronous events)
» A Moore machine's state changes only when the clock changes;
a Mealy machine's outputs also depend on inputs, which can
change asynchronously
 Much faster than event based
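By contrast, a cycle-based simulator can be sketched as one collapsed combinational evaluation per clock edge (illustrative only; setup/hold are assumed met, as the slide notes, and there is no notion of delay inside a cycle).

```python
# Cycle-based simulation sketch: one collapsed combinational evaluation
# per clock edge; flip-flop state advances once per cycle.

def cycle_sim(next_state_fn, output_fn, state, inputs_per_cycle):
    """Evaluate the design once per clock edge, with no intra-cycle
    timing: the only 'event' is the active edge of the clock."""
    outputs = []
    for inputs in inputs_per_cycle:        # one entry per clock edge
        outputs.append(output_fn(state, inputs))
        state = next_state_fn(state, inputs)
    return outputs, state

# Example design: a 2-bit counter with an enable input (a Moore
# machine: the output depends only on the state).
next_count = lambda state, en: (state + en) % 4
count_out = lambda state, en: state
```

Because each cycle is a single function evaluation instead of a cascade of gate events, this style is much faster than event-driven simulation, at the cost of handling only synchronous behavior.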
Types of Simulators
(con't)

Simulation Farm
Multiple computers are used in parallel for simulation

Acceleration Engines/Emulators
Quickturn, IKOS, AXIS.....
Custom designed for simulation speed (parallelized)
Accel vs. Emulation
– True emulation connects to some real, in-line hardware
– Real software eliminates the need for special testcases
Speed compare

 Influencing factors:
» Hardware platform (frequency, memory, ...)
» Model content (size, activity, ...)
» Interaction with environment
» Model load time
» Test pattern
» Network utilization

 Relative speed of different simulators:
» Event simulator: 1
» Cycle simulator: 20
» Event-driven cycle simulator: 50
» Acceleration: 1,000
» Emulation: 100,000
Speed - What is fast?
 Cycle sim for one processor chip: 1 sec real time ≈ 6 months
 Sim farm with a few hundred computers: 1 sec real time ≈ 1 day
 Accelerator/emulator: 1 sec real time ≈ 1 hour
Co-Simulation
 Co-simulators are combination of event,
cycle, and other simulators (acceleration,
emulation)
– Both simulators progress along time in lockstep fashion
 Performance is decreased due to inter-tool
communication.
 Ambiguities arise during translation from one
simulator to the other.
» Verilog’s 128 possible states to VHDL’s 9
» Analog’s current and voltage into digital’s logic
value and strength.
Third Party Models
 Many designs use off the shelf parts
 To verify such a design, one must obtain a model
of these parts
 Often must get the model from a 3rd party
 Most 3rd party models are provided as
compiled binary models
 Why buy 3rd party models?
» Engineering resources
» Quality (especially in the area of system timing)
Hardware Modelers
 Are for modeling new hardware; some hardware may be too
new for models to be available
» Example: in 2000 one still could not get a model of the Pentium III
 Sometimes cannot simulate enough of a model in an acceptable
period of time
Hardware Modelers (cont)
 Hardware modeler features
» A small box on the network that contains a real copy
of the physical chip
» The rest of the HDL model provides inputs to the chip and
returns the chip's outputs to your model
Waveform Viewers
 Lets you view transitions on multiple signals
over time
 The most common of verification tools
 Waveform can be saved in a trace file
 In verification
» need to know expected output and whenever the
simulated output is not as expected
– both the signal value and the signal timing
» use the testbench to compare the model output with
the expected
Coverage

Coverage techniques give feedback on
how much the testcase or driver is
exercising the logic
Coverage makes no claim on proper
checking

All coverage techniques monitor the
design during simulation and collect
information about desired facilities or
relationships between facilities
Coverage Goals
 Measure the "quality" of a set of tests
 Supplement test specifications by pointing to
untested areas
 Help create regression suites
 Provide a stopping criterion for unit testing
 Better understanding of the design
Coverage Techniques

People use coverage for multiple reasons
Designer wants to know how much of his/her macro
is exercised
Unit/Chip leader wants to know if relationships
between state machine/microarchitectural
components have been exercised
Sim team wants to know if areas of past escapes
are being tested
Program manager wants feedback on overall
quality of verification effort
Sim team can use coverage to tune regression
buckets
Coverage Techniques

Coverage methods include:
Line-by-line coverage
– Has each line of VHDL been exercised?
(If/Then/Else, cases, states, etc)
Microarchitectural cross products
– Allows for multiple cycle relationships
– Coverage models can be large or small
Code Coverage
 A technique that has been used in software
engineering for years.
 By covering all statements adequately the chances of
a false positive (a bad design tests good) are
reduced.
 Never 100% certain that design under verification is
indeed correct. Code coverage increases
confidence.
 Some tools may use file I/O aspect of language and
others have special features built into the simulator to
report coverage statistics.
Adding Code Coverage
 If built into the simulator, code is automatically instrumented
 If not built in, you must add code to the testbench to do the checking
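When coverage support is not built into the simulator, manual instrumentation looks roughly like this. A minimal Python sketch (the design model, counter names, and stimulus are all hypothetical):

```python
# Hypothetical sketch: manual statement-coverage instrumentation.
# Each branch bumps a named counter; the report lists branches
# that no stimulus ever reached.

coverage = {"then_branch": 0, "else_branch": 0}

def dut_model(a, b):
    """Toy stand-in for a design branch we want to cover."""
    if a > b:
        coverage["then_branch"] += 1   # instrumentation
        return a
    else:
        coverage["else_branch"] += 1   # instrumentation
        return b

# Drive some stimulus (both cases happen to take the same branch)
for a, b in [(3, 1), (5, 2)]:
    dut_model(a, b)

# Report branches never exercised
missed = [name for name, hits in coverage.items() if hits == 0]
print("uncovered:", missed)
```

In an HDL testbench the counters would be signals or variables updated in the corresponding branches, but the principle is identical.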
Code Coverage
 Objective is to determine whether you have overlooked exercising some code in the model
» If some code was never exercised, you must also ask why that code is present
 Coverage metrics can be generated after running a testbench
 Metrics measure coverage of
» statements
» possible paths through the code
» expressions
Report Metrics for Code Coverage
 Statement (block) coverage:
» Measures which statements (lines) have been executed by the verification suite
 Path coverage:
» Measures which of the possible ways to execute a sequence of statements have been taken
 Expression coverage:
» Measures the various ways the expressions that select paths through the code are evaluated
Statements and Blocks
 Statement coverage can also be called block coverage
 The ModelSim simulator can show how many times a statement was executed
 You also need to ensure that executed statements are simulated with different values
 Some code is not meant to be simulated (code written specifically for synthesis, for example)
Path Coverage
 Measures all possible ways you can execute a
sequence of statements
 Example has four possible paths
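The original slide's four-path example is not reproduced in these notes; a minimal illustration of the idea (in Python rather than VHDL) is two independent decisions, which yield 2 × 2 = 4 distinct paths:

```python
from itertools import product

def two_ifs(a, b):
    """Two back-to-back if statements: each may or may not be taken."""
    path = []
    if a:                  # first decision
        path.append("A")
    if b:                  # second decision
        path.append("B")
    return tuple(path)

# Enumerate every input combination and record which path each takes
paths = {two_ifs(a, b) for a, b in product([False, True], repeat=2)}
print(len(paths))  # 4 distinct paths through the two statements
```

Note that a single test with both conditions true reaches every statement, yet covers only one of the four paths.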
Path Coverage Goal
 The desire is to take all possible paths through the code
 It is possible to have 100% statement coverage but less than 100% path coverage
 The number of possible paths can be very, very large => keep the number of paths as small as possible
 Obtaining 100% path coverage for a model of even moderate complexity is very difficult
Expression Coverage
 A measure of the various ways paths through code
are taken
 Example has 100% statement coverage but only 50%
expression coverage
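The original slide's 50% example is not reproduced here; the classic case is a single OR'd condition, sketched below in Python (names and stimulus are illustrative):

```python
# Hypothetical sketch: one test gives 100% statement coverage of this
# fragment but exercises only one of the two terms that can make the
# OR expression true -- i.e. 50% expression coverage.

expr_hits = {"a_true": False, "b_true": False}

def check(a, b):
    result = "idle"
    if a or b:                      # one statement, two ways to be true
        expr_hits["a_true"] = expr_hits["a_true"] or a
        expr_hits["b_true"] = expr_hits["b_true"] or b
        result = "active"
    return result

check(True, False)                  # every statement above executes...
covered = sum(expr_hits.values())   # ...but only the 'a' term fired
print(f"{covered}/2 expression terms covered")
```

A second test with a=False, b=True would be needed to reach 100% expression coverage, even though it adds nothing to statement coverage.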
100% Code Coverage
 What do 100% path and 100% expression coverage mean?
» Not much!! They just indicate how thoroughly the verification suite exercises the code, not the quality of the verification suite
» They do not provide an indication about the correctness of the code
 Results from coverage can help identify corner cases not exercised
 Coverage is an additional indicator for the completeness of the job
» The code coverage value can indicate if the job is not complete
Functional Coverage
 Coverage is based on the functionality of the design
 Coverage models are specific to a given design
 Models cover
» the inputs and the outputs
» internal states
» scenarios
» parallel properties
» bug models
Interdependency-Architectural Level
 The model:
» We want to test all dependency types of a resource (register) relating to all instructions
 The attributes
» I - Instruction: add, add., sub, sub., ...
» R - Register (resource): G1, G2, ...
» DT - Dependency Type: WW, WR, RW, RR, and None
 The coverage task semantics
» A coverage instance is a quadruplet <Ij, Ik, Rl, DT>, where instruction Ik follows instruction Ij, and both share resource Rl with dependency type DT
Interdependency-Architectural Level (2)
 Additional semantics
» The distance between the instructions is no more than 5
 Restrictions
» Not all combinations are valid
» Fixed-point instructions cannot share FP registers
Interdependency-Architectural Level (3)
 Size and grouping:
» Original size: ~400 x 400 x 100 x 5
» Let the instructions be divided into disjoint groups I1 ... In
» Let the resources be divided into disjoint groups R1 ... Rk
» After grouping: ~60 x 60 x 10 x 5 = 180,000
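The grouped coverage model above can be sketched as a cross product of attribute domains. A Python illustration, with made-up group names (the real instruction/resource groupings and restriction predicates are design-specific):

```python
from itertools import product

# Coverage space after grouping: <Ij, Ik, Rl, DT> tuples.
# Group counts follow the slide: 60 instruction groups x 60 x
# 10 resource groups x 5 dependency types = 180,000 coverage tasks.
instr_groups = [f"I{i}" for i in range(60)]   # hypothetical names
res_groups   = [f"R{i}" for i in range(10)]   # hypothetical names
dep_types    = ["WW", "WR", "RW", "RR", "None"]

tasks = list(product(instr_groups, instr_groups, res_groups, dep_types))
print(len(tasks))  # 180000

# Restrictions prune invalid tasks, e.g. fixed-point instruction
# groups paired with floating-point register groups.
def is_legal(task):
    ij, ik, rl, dt = task
    return True  # placeholder: real restriction predicates go here

legal = [t for t in tasks if is_legal(t)]
```

Compare this with the ungrouped model, ~400 x 400 x 100 x 5 = 80 million tasks: grouping is what makes the model tractable to cover and to analyze.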
The Coverage Process
 Defining the domains of coverage
» Where do we want to measure coverage
» What attributes (variables) to put in the trace
 Defining models
» Defining tuples and semantics on the tuples
» Restrictions on legal tasks
 Collecting data
» Inserting traces into the database
» Processing the traces to measure coverage
 Coverage analysis and feedback
» Monitoring progress and detecting holes
» Refining the coverage models
» Generating regression suites
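The collect-and-analyze loop above can be sketched as a small script that inserts trace events into a covered-task set and reports holes. All states, events, and trace data below are hypothetical:

```python
from itertools import product

# The coverage model: every (state, event) pair we want to see
states = ["IDLE", "BUSY", "STALL"]
events = ["req", "ack"]
all_tasks = set(product(states, events))

# "Traces" collected from simulation runs (hypothetical data)
trace = [("IDLE", "req"), ("BUSY", "ack"), ("IDLE", "req")]
covered = set(trace)          # duplicates collapse into one task

# Hole detection: tasks in the model that no trace ever hit
holes = sorted(all_tasks - covered)
pct = 100 * len(covered) / len(all_tasks)
print(f"coverage: {pct:.0f}%, holes: {holes}")
```

In practice the traces land in a database and the model is refined iteratively, but the core analysis is exactly this set difference.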
Coverage Model Hints
 Look for the most complex, error-prone part of the application
 Create the coverage models at high-level design
» Improves understanding of the design
» Automates some of the test plan
 Create the coverage model hierarchically
» Start with small, simple models
» Combine the models to create larger models
 Before you measure coverage, check that your rules are correct on some sample tests
 Use the database to "fish" for hard-to-create conditions
» Try to generalize as much as possible from the data:
– "X was never 3" is much more useful than "the task (3,5,1,2,2,2,4,5) was never covered"
Future Coverage Usage
 One area of research is automated coverage-directed feedback
» If testcases/drivers can be automatically tuned to go after more diverse scenarios, based on knowledge of what has been covered, then bugs can be encountered much sooner in the design cycle
» The difficulty lies in the expert system knowing how to alter the inputs to raise the level of coverage
Verification Languages
 Specific to verification principles
 Deficiencies in RTL languages (Verilog and VHDL)
» Verilog was designed with a focus on describing
low-level hardware structures
– No support for data structures (records, linked lists, etc)
– Not object oriented
» VHDL was designed for large design teams
– Encapsulates all information and communicates strictly
through well-defined interfaces
 These limitations get in the way of efficient
implementation of a verification strategy
Verification Languages (cont)
 Some examples of verification languages
» Verisity's Specman Elite
» Synopsys' Vera
» Chronology's Rave
» SystemC
 The problem is that these are all proprietary; buying into one will lock you into a vendor
Verification Languages
 Even with a verification language still
» need to plan verification
» design verification strategy and design verification
architecture
» create stimulus
» determine expected response
» compare actual response versus expected response
Revision Control
 Need to ensure that the model verified is the model used for implementation
 Managing an HDL-based hardware project is similar to managing a software project
 Requires a source control management system
 Such systems keep the latest version of a file and a history of previous versions, along with what changes are present in each version
Configuration Management
 Wish to tag (identify) certain versions of a file so
multiple users can keep working
 Different users have different views of project
File Tags
 Each file tag has a specific meaning
Issue Tracking
 It is normal and expected to find functional irregularities in complex systems
 Worry if you don't!!! Bugs will be found!!!
 An issue is anything that can affect the functionality of the design
» Bugs during execution of the testbench
» Ambiguities or incompleteness of specifications
» A new and relevant testcase
» Errors found at any stage
 All issues must be tracked if a bad design could be manufactured were the issue not tracked
Issue Tracking Systems
 The Grapevine
» Casual conversation between members of a design team in which issues are discussed
» No one has clear responsibility for a solution
» The system does not maintain a history
 The Post-it System
» Yellow stickies are used to post issues
» Ownership of issues is tenuous at best
» No ability to prioritize issues
» The system does not maintain a history
Issue Tracking Systems (cont.)
 The Procedural System
» Issues are formally reported
» Outstanding issues are reviewed and resolved during team meetings
» This system consumes a lot of meeting time
 Computerized Systems
» Issues are seen through to resolution
» Can send periodic reminders until resolved
» The history of action(s) taken to resolve an issue is archived
» The problem is that these systems can require a significant effort to use
Code Related Metrics
 Code Coverage Metrics - how thoroughly the verification suite exercises the code
 Number of Lines of Code Needed for Verification Suite - a measure of the level of effort needed
 Ratio of Lines of Verification Code to Lines of Code in the Model - a measure of design complexity
 Number of source code changes over time
Quality Related Metrics
 Quality is subjective
 Examples of quality metrics
» Number of known outstanding issues
» Number of bugs found during service life
 Must be very careful to interpret and use any
metric correctly!!!