Unified Process
on
Software Implementation & Testing
CEN 5016
Software Engineering
Dr. David A. Workman
School of EE and Computer Science
March 20, 2007
Implementation (USDP)
• Purpose
To translate the design into machine-readable and executable form. Specifically to:
– Plan system integrations required in each implementation increment or iteration.
– Distribute the system by mapping executable components to nodes in the deployment model.
– Implement design classes and subsystems found in the Design Model.
– Unit test the components, and integrate them by compiling and linking them together into one or more executables, before they are sent to integration and system tests.
• Artifacts
– Implementation Model
  • Components: <executable>, <file>, <library>, <dbtables>, <document>
  • Interfaces
  • Implementation subsystems
– Components
– Implementation Subsystems
– Interfaces
– Build Plan
Integration & System Test (USDP)
• Purpose
To verify the result of each build and to validate the complete system via acceptance tests.
– Plan tests required in each iteration, including integration and system tests.
Integration tests are required after each build, while system tests are done as part
of client acceptance and system delivery.
– Design and implement test plans by creating test cases. Test cases specify what to
test and define procedures and programs for conducting test exercises.
– Perform various test cases to capture and verify test results. Defects are formally
captured, tracked and removed before delivery.
Verification: testing a work product to determine whether or not it conforms to the product's specifications.
"Did we build the system right?" (Boehm)
Validation: testing a complete system to determine whether or not it satisfies its requirements and solves the problem users need to have solved.
"Did we build the right system?" (Boehm)
Integration & Test (USDP)
• Concepts
– Test Strategies
• Black box testing: demonstrating correct component or subsystem behavior by
observing output generated at its interface as a function of inputs supplied at its
interface.
• White box testing: demonstrating that valid computational paths and interactions
are observed internal to a component or subsystem as a function of given inputs.
– Test Types
• Installation tests: verify that the system can be installed and run correctly on the
client's platform.
• Configuration tests: verify that the system can run correctly in different
configurations.
• Negative tests: attempt to cause the system to fail by presenting inputs or loads for
which it was not designed.
• Stress tests: performance tests designed to identify problems when resources are
limited.
• Regression tests: after a change is made to a component, re-running tests that the component had successfully passed before the change was made, to ensure that previously verified capability remains valid.
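As an illustration of regression testing, a hypothetical sketch (the Calculator class and its methods are invented for the example, not part of the course materials): the same assertion-based checks are kept and rerun unchanged after every modification to the component, so any previously passing behavior that the change breaks fails the suite immediately.

// regression_suite.cpp -- hypothetical sketch; the Calculator class is invented
// for illustration and is not part of the course example.
#include <cassert>
#include <iostream>

class Calculator {                       // component under test (assumed)
public:
    int add(int a, int b) const { return a + b; }
    int div(int a, int b) const { return b == 0 ? 0 : a / b; }
};

// The regression suite: every check the component has passed in earlier builds
// stays here and is rerun, unmodified, after each change to Calculator.
void run_regression_suite() {
    Calculator c;
    assert(c.add(2, 3) == 5);            // previously passing cases
    assert(c.add(-1, 1) == 0);
    assert(c.div(10, 2) == 5);
    assert(c.div(7, 0) == 0);            // negative case: divide-by-zero must not crash
    std::cout << "Regression suite passed." << std::endl;
}

int main() { run_regression_suite(); return 0; }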
Integration & Test (USDP)
• Artifacts
– Test Model
• A set of Test Cases
• A set of Test Procedures
• A set of Test Components
– Test Cases
Designed to verify certain use cases, or use case scenarios.
• Demonstrates that post-conditions of use cases are satisfied, if their pre-conditions
are met.
• Demonstrates that a sequence of actions defined by the use case is followed.
• Identifies the test components and their features to be tested.
• Predicts or describes expected component output and behavior.
• Defines the inputs necessary to produce the desired test output.
• Specifies the conditions that must exist to conduct the test case.
– Test Procedures
Specifies how to perform one or several test cases.
• Test programs (or "harnesses" or "benches") and shell scripts may have to be executed as part of a test procedure.
• Defines a sequence of steps that must be followed to complete the procedure, along with the inputs and outputs expected at each step.
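A minimal sketch of a test procedure written as an executable driver (hypothetical; parse_price() and the test-case data are invented for illustration): each test case names what to test, supplies the input, and predicts the expected output, and the procedure executes the cases as a fixed sequence of steps and reports pass/fail results for the test evaluation.

// test_procedure.cpp -- hypothetical sketch; parse_price() and the test-case
// data are invented for illustration.
#include <iostream>
#include <string>
#include <vector>

double parse_price(const std::string& text) {    // component under test (assumed)
    return std::stod(text);
}

struct TestCase {            // what to test: a name, the input, and the predicted output
    std::string name;
    std::string input;
    double      expected;
};

int main() {
    std::vector<TestCase> cases = {              // test cases designed from the spec
        { "plain integer", "3",    3.0 },
        { "decimal value", "2.50", 2.5 },
    };
    int failures = 0;
    for (const TestCase& tc : cases) {           // the procedure: one step per test case
        double actual = parse_price(tc.input);
        bool ok = (actual == tc.expected);
        std::cout << (ok ? "PASS " : "FAIL ") << tc.name << std::endl;
        if (!ok) ++failures;                     // failures feed the test evaluation
    }
    return failures;                             // a nonzero exit code signals an anomaly
}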
Integration & Test (USDP)
• Artifacts
– Test Components
Automates one or several test procedures or parts of them.
• Test drivers
• Test scripts
• Test harnesses
– Test Plan
• Describes testing requirements, strategies, resources, and schedule for each build
and system release.
• Describes what test cases should be performed and passed for each build and/or
system release.
– Test Evaluations
Capture the results of test cases, declare whether or not each test case was successful, and generate defect or anomaly reports for tracking.
– Anomaly Reports
Document test anomalies – unexpected test events or results. Capture and track
anomalies during development. Ensure all anomalies have been satisfactorily
addressed or removed before delivery (customer signoff).
IEEE Std 829 for Software Testing
• Test Plan
To prescribe the scope, approach, resources, and schedule of testing activities. To identify the items to be tested, the features to be tested, the testing tasks to be performed, the personnel responsible for each task, and the risks associated with the plan.
• Test Design Spec
• Test Case Spec
• Test Procedure Spec
• Test Item Transmittal Report
• Test Log
• Test Incident Report
• Test Summary Report
Test Work Flow
[Workflow diagram: the Test Engineer plans tests and designs tests; the Component Engineer implements tests; the Integration Tester performs integration tests; the System Tester performs system tests.]
Generic Testing for OO Systems
• Reference
Testing Object-Oriented Systems: Models, Patterns, and Tools
by Robert V. Binder
Addison-Wesley, © 2000, ISBN 0201809389
Testing OO Systems
• Fault Models
A fault model answers a simple question about a test technique: why do the features called out by the technique warrant our effort? A fault model therefore identifies the relationships and components of the system under test that are most likely to have faults.
• Bug Hazard
A circumstance that increases the chance of a bug. Example: type coercion in C++ is a hazard because the rules are complex and may depend on hidden declarations when working with a particular class (see the sketch at the end of this slide).
• Test Strategies
1. Conformance-directed testing: seeks to establish conformance to requirements
and specifications. Tests are designed to be sufficiently representative of the
essential features of the system under test. (Implies a non-specific fault model –
any fault that violates conformance is equal to any other.)
2. Fault-directed testing: seeks to reveal implementation faults. It is motivated by
the observation that conformance can be demonstrated for an implementation
that contains faults. (Implies a specific fault model – directs testing toward
particular kinds of faults).
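A small hypothetical C++ fragment illustrating the coercion hazard mentioned above: a converting constructor lets the compiler silently coerce a value with the wrong units, so the fault is not revealed at compile time. The Seconds class and wait_for() function are invented names used only for this example.

// coercion_hazard.cpp -- hypothetical fragment; Seconds and wait_for() are
// invented names used only to illustrate the hazard.
#include <iostream>

class Seconds {
public:
    Seconds(double s) : value(s) {}      // converting constructor: any double silently
    double value;                        // becomes a Seconds object
};

void wait_for(const Seconds& s) {        // expects a duration in seconds
    std::cout << "waiting " << s.value << " s" << std::endl;
}

int main() {
    double timeout_ms = 500.0;           // the caller is thinking in milliseconds
    wait_for(timeout_ms);                // compiles: the double -> Seconds coercion
    return 0;                            // hides the unit error (500 s, not 0.5 s)
}

Declaring the constructor explicit would force the caller to write wait_for(Seconds(timeout_ms)) and surface the mistake at compile time.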
Testing OO Systems
• Conformance- vs. Fault-directed Testing
– Conformance-directed tests should be feature sufficient: they should, at least, exercise all specified features of the system.
– Fault-directed tests should be fault efficient: they should have a high probability of revealing a fault of a particular type or types.
• Fault Hazards in OO Programming Languages
"Object-oriented technology does not in any way obviate the basic motivation for software testing. In fact, it poses new challenges. … Although the use of OOPLs may reduce some kinds of errors, they increase the chance of others."
– Dynamic binding and complex inheritance structures create many opportunities
for faults due to unanticipated bindings or misinterpretation of correct usage.
– Interface programming errors are a leading cause of faults in procedural
languages. OO programs typically have many small components and therefore
more interfaces. Interface errors are more likely, other things being equal.
– State control errors are likely. Objects preserve state, but state control (the
acceptable sequence of events) is typically distributed over an entire program.
OO Fault Studies
• Steven P. Fiedler: "Object-oriented unit testing," Hewlett-Packard Journal, Vol. 40, No. 2, April 1989.
In reference to a C++ system for a medical electronics application, Fiedler reports that "on the average, a defect was uncovered for every 150 LOC, and correspondingly, the mean density exceeded 5.1 per KSLOC."
• V. Basili, L. C. Briand, and W. L. Melo: "A validation of object-oriented design metrics as quality indicators," IEEE Transactions on Software Engineering, Vol. 22, No. 10, October 1996.
The conclusions of a study done on small C++ systems (less than 30 KSLOC) are:
– Classes that send relatively more messages to instance variables and message parameter objects are more likely to be buggy;
– Classes with greater depth (number of superclasses) and higher specialization (number of new and overridden methods) are more likely to be buggy;
– No significant correlation was found between lack of cohesion in a class (the number of pairs of methods that have no attributes in common) and the relative frequency of bugs.
OO Fault Studies
• Martin Shepperd & M. Cartwright: "An empirical study of object-oriented metrics," Technical Report TR-97/01, Dept. of Computing, Bournemouth University, U.K.
In reference to a 133 KSLOC C++ system for a telecommunications application, the report states, "classes that participated in inheritance structures were 3 times more defect prone than other classes."
• Capers Jones: The Economics of Object-Oriented Software Development, Software Productivity Research Inc., Burlington, MA, April 1997.
Summarizing data gathered from 150 development organizations and 600 projects, Jones reports:
1. The OO learning curve is very steep and causes many first-use errors.
2. OO analysis and design seem to have higher defect potential than older design methods.
3. Defect removal efficiency against OO design problems seems lower than against older design methods.
4. OOPLs seem to have lower defect potential than procedural languages.
5. Defect removal efficiency against programming errors is roughly equal to, or somewhat better than, removal efficiency against older procedural language errors.
An OO Testing Manifesto
• Observations
– The hoped-for reduction in OO testing due to reuse is illusory.
– Inheritance, polymorphism, late binding, and encapsulation present some new
problems for test case design, testability, and coverage analysis.
– To the extent that OO development is iterative and incremental, test planning,
design, and execution must be similarly iterative and incremental.
– Regression testing and its antecedents must be considered essential techniques
for professional OO development.
• Guidance
– Unique Bug Hazards: Test design must be based on the bug hazards that are
unique to the OO programming paradigm.
– OO Test Automation: Application-specific test tools must be OO and must offset
obstacles to testability intrinsic to the OO programming paradigm.
– Test-effective process: The testing process must adapt to iterative and incremental development and mosaic modularity. The intrinsic structure of the OO paradigm requires that test design consider method, class, and cluster scope simultaneously.
An OO Testing Manifesto
• Unique Bug Hazards
1. The interaction of individually correct superclass and subclass methods can be
buggy and must be systematically exercised.
2. Superclass test suites must be rerun on subclasses and should be constructed so
that they can be reused to test any subclass (see the sketch at the end of this slide).
3. Unanticipated bindings that result from scoping nuances in multiple and
repeated inheritance can produce bugs that are triggered only by certain
superclass/subclass interactions. Subclasses must be tested at "flattened" scope,
and superclass test cases must be reusable.
4. Poor design of class hierarchies supporting dynamic binding (polymorphic
servers) can result in failures of a subclass to observe superclass contracts. All
bindings must be systematically exercised to reveal these bugs.
5. The loss of intellectual control that results from spaghetti polymorphism (the yo-yo
problem) is a bug hazard. A client of a polymorphic server can be considered to
have been adequately tested only if all server bindings that the client can
generate have been exercised.
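As a sketch of hazards 2 and 4 above (hypothetical classes, invented for illustration and not part of the course example), the superclass test suite is written against the Account contract and parameterized by the object under test, so the identical suite can be rerun on every subclass and on every binding a polymorphic client can generate.

// contract_suite.cpp -- hypothetical classes; Account and BonusAccount are
// invented to illustrate hazards 2 and 4.
#include <cassert>
#include <iostream>

class Account {                                    // superclass under test
public:
    virtual ~Account() = default;
    virtual void deposit(double amt) { balance_ += amt; }
    double balance() const { return balance_; }
protected:
    double balance_ = 0.0;
};

class BonusAccount : public Account {              // subclass overriding a superclass method
public:
    void deposit(double amt) override { balance_ += amt * 1.01; }
};

// The superclass test suite is written against the Account contract and is
// parameterized by the object under test, so it can be rerun on any subclass.
void run_account_suite(Account& a, const char* label) {
    double before = a.balance();
    a.deposit(100.0);
    assert(a.balance() >= before + 100.0);         // contract: at least the amount is credited
    std::cout << label << " satisfies the Account contract" << std::endl;
}

int main() {
    Account      base;  run_account_suite(base,  "Account");
    BonusAccount bonus; run_account_suite(bonus, "BonusAccount");  // suite rerun on the subclass
    return 0;
}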
An OO Testing Manifesto
• Unique Bug Hazards (continued)
6. Classes with sequential constraints on method activation and their clients can
have control bugs. The required control behavior can be systematically tested
using a state machine model (see the sketch at the end of this slide).
7. Subclasses can accept illegal superclass method sequences or generate corrupt
states by failing to observe the state model of the superclass. Where sequential
constraints exist, subclass testing must be based on a flattened state model.
8. A generic class instantiated with a type parameter for which the generic class has
not been tested is almost the same as completely untested code. Each generic
instantiation must be tested to verify it for that parameter.
9. The difficulty and complexity of implementing multiplicity constraints can easily
lead to incorrect state/output when an element of a composition group is added,
updated, or deleted. The implementation of multiplicity must be systematically
tested.
10. Bugs can easily hide when an update method (a method that changes instance
state) computes a corrupt state, the class interface does not make this corrupt
state visible (i.e., provides no public feature for reporting a corrupt state and
throws no exception), and the corruption does not inhibit other operations.
Def-use sequences of method calls must be systematically tested.
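A sketch of hazards 6 and 7 above (the Connection class is hypothetical, invented for illustration): the class has a sequential constraint on method activation, and the test drives it through both a legal and an illegal method sequence taken from its state model.

// sequence_test.cpp -- hypothetical class; Connection is invented to illustrate
// testing a sequential constraint against its state model.
#include <cassert>
#include <iostream>
#include <stdexcept>

class Connection {                        // constraint: send() only between open() and close()
public:
    void open()  { state = Open; }
    void send()  { if (state != Open) throw std::logic_error("send() before open()"); }
    void close() { state = Closed; }
private:
    enum { Closed, Open } state = Closed;
};

int main() {
    // Legal sequence taken from the state model: open -> send -> close.
    Connection ok;
    ok.open(); ok.send(); ok.close();

    // Illegal sequence: the state model forbids send() in state Closed,
    // so the test expects the constraint to be enforced.
    Connection bad;
    bool rejected = false;
    try { bad.send(); } catch (const std::logic_error&) { rejected = true; }
    assert(rejected);
    std::cout << "sequential constraint enforced" << std::endl;
    return 0;
}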
Functional vs. Object-Oriented Architectures
[Figure: side-by-side comparison of (a) a Functional Architecture, showing data & control flow through a public service layer, and (b) an Object-Oriented Architecture, showing a use case flow realized by control/boundary object messages among classes A, B, and C, with related computations distributed across different objects.]
Functional Dependencies Among Methods
[Figure: dependency graph among the numbered methods of classes A, B, C, D, and E, connected by functional-dependency edges.]
Functional Threads (Use Cases)
[Figure: functional threads (use cases) traced through methods A1-A3, B1-B2, C1, D1-D3, and E1-E2 of classes A through E.]
Hierarchical View of Design
[Figure: hierarchical (call-tree) view of the design rooted at main, showing methods A::One(), A::Two(), A::Three(), B::One(), B::Two(), C::One(), D::One(), D::Two(), D::Three(), E::One(), and E::Two() grouped under Use Cases #1, #2, and #3.]
Design & Testing Principles
Principle 1: Design should be performed "top-down" for each functional thread defined by a Use Case; that is, the interface and detailed design of a module should follow the design of all modules that functionally depend on it.
[Figure: small module-dependency diagram involving modules A, B, C, and D.]
Rationale: By performing interface and detailed design top-down, we ensure that all requirements flow from dependent modules toward the modules they depend on. This principle attempts to postpone detailed design decisions until all functional requirements for a module are known.

Principle 2: Coding and Unit Testing should be performed "bottom-up" for a functional thread; that is, the unit testing of a module should precede the unit testing of all modules that functionally depend on it.
Rationale: By performing unit testing bottom-up, we ensure that all subordinate modules have been verified before verifying the module that depends on them. This principle attempts to localize and limit the scope and propagation of changes resulting from unit testing.
Design & Testing Schedules
See Notes.

Development Layers for Detailed Design and Coding:
  Layer 1: A1, A2, A3
  Layer 2: B1, D2, C1
  Layer 3: E1, B2, D3
  Layer 4: D1
  Layer 5: E2

Development Layers for Unit Testing:
  Layer 1': B1, E1, E2, D3
  Layer 2': A1, D2, D1
  Layer 3': A2, B2
  Layer 4': C1
  Layer 5': A3

[Figure: effort-vs-time development schedule showing the methods assigned to Build #1 (Integration Test 1), Build #2 (Integration Test 2), and Build #3 (System Test).]
McCabe's* Cyclomatic Complexity
For a flow graph G:
    V(G)* = E - N + p + 1
where E = the number of edges in the flow graph, N = the number of nodes in the flow graph, and p = the number of independent program components.

A component can be represented by a 1-entry/1-exit DAG (directed acyclic graph). It can also be shown that if G is a planar graph, then V(G)* is the number of bounded regions or faces of the graph.

McCabe proved that his metric gives the number of linearly independent flow paths through the DAG. The number of LI paths relates strongly to the testing complexity of the component.

[Figure: example flow graph with nodes 0-7. Branch nodes = 1, 3 (fan-out > 1); join nodes = 6, 7 (fan-in > 1); sequential nodes = 0, 2, 4, 5.]

In this example: E = 9, N = 8, p = 1, and V(G)* = 3.

*NOTE: This formula is actually a variant of McCabe's metric proposed by Brian Henderson-Sellers. McCabe's metric for p isolated components is given by V(G) = E - N + 2p. Henderson-Sellers showed that his variant gives a consistent value when the isolated components are treated as procedures connected to their call sites.
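A small sketch (assumed, not from the slides) that evaluates both forms of the metric for the example graph above, where E = 9, N = 8, and p = 1:

// cyclomatic.cpp -- evaluates McCabe's metric and the Henderson-Sellers variant
// for the example flow graph (E = 9, N = 8, p = 1).
#include <iostream>

int mccabe(int e, int n, int p)            { return e - n + 2 * p; }     // V(G)  = E - N + 2p
int henderson_sellers(int e, int n, int p) { return e - n + p + 1; }     // V(G)* = E - N + p + 1

int main() {
    int E = 9, N = 8, p = 1;                       // values from the example flow graph
    std::cout << "V(G)  = " << mccabe(E, N, p)            << std::endl   // prints 3
              << "V(G)* = " << henderson_sellers(E, N, p) << std::endl;  // prints 3
    return 0;
}

For a single component (p = 1) the two forms agree; they differ only when p > 1, which is exactly the case the Henderson-Sellers variant is meant to handle, as in the multi-module example on the next slides.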
McCabe's* Cyclomatic Complexity
[Figure: call graph example. Main() is a flow graph with nodes 0-6 containing a call to A() at node 3 and a call to B() at node 5; A() has nodes A1-A4; B() has nodes B1-B7.]

Treating the three modules as independent components (p = 3: Main, A, B):
    V(main)* = 7 - 7 + 2 = 2
    V(A)* = 4 - 4 + 2 = 2
    V(B)* = 8 - 7 + 2 = 3
    V(main+A+B)* = 19 - 18 + 4 = 5
McCabe's* Cyclomatic Complexity
[Figure: the three flow graphs combined into one by "splitting" each call node to match the single entry and exit nodes of the called component: nodes 3/3' bracket the call to A() and nodes 5/5' bracket the call to B(). One node and two edges are added for each called component.]

    V(main+A+B)* = 23 - 20 + 2 = 5

The linearly independent paths are:
    (0,1,2,4,5,B1,B2,B7,5',6)
    (0,1,2,4,5,B1,B3,B4,B6,B7,5',6)
    (0,1,2,4,5,B1,B3,B5,B6,B7,5',6)
    (0,1,3,A1,A2,A4,3',4,5,B1,B3,B4,B6,B7,5',6)
    (0,1,3,A1,A3,A4,3',4,5,B1,B3,B5,B6,B7,5',6)
Example Method (C++)
void Server::Work()
{
    ifstream input;            // input file stream
    ofstream output;           // output file stream
    Tray tray;
    cout << "McDonald's Implementation in C++" << endl;
    cout << "by Amit Hathiramani and Neil Lott" << endl;
    cout << endl;                                                              // node 1
    while(1) {                                                                 // node 2
        string szInputFileName;                                                // node 3
        cout << "Please enter the name of the input file: ";
        cin >> szInputFileName;
        input.open(szInputFileName.c_str());
        if(!input) cerr << endl << "No file named " << szInputFileName
                        << " found." << endl;                                  // nodes 4, 5
        else break;
    }
    // node 6
    // ... insert Segment A (next slide) here ...
} //Server::Work                                                               // node 16
Example Method (C++)
Segment A
    // node 6
    FoodItems *pFood;
    while(!input.eof()) {                                                      // nodes 7, 8
        char szMarker[4];
        input >> szMarker;
        strupper(szMarker);
        if(strcmp(szMarker, "$D") == 0) pFood = new Drinks;                    // nodes 9, 10: drink
        else if(strcmp(szMarker, "$S") == 0) pFood = new Sandwiches;           // nodes 11, 12: sandwich
        else if(strcmp(szMarker, "") == 0) continue;                           // node 13: blank line; skip it
        else throw InputException("Unknown type found " + string(szMarker));
        // node 14
        pFood->Get(input);
        tray.Add_Item(pFood);
    } //while                                                                  // node 15
Example Method (C++)
[Figure: flow graph for Server::Work() with Segment A inserted, nodes 1-17, showing the normal exit, a system exit, and an exception exit.]

    V(G)* = 19 - 16 + 2 = 5
or
    V(G)* = 21 - 17 + 2 = 6
Design & Test Example:
Discrete Event Simulator
©Dr. David A. Workman
School of Computer Science
University of Central Florida
Use Case Diagram: Simulator
[Use case diagram for the Simulation System. Use cases: Specify Input, Specify Output, Construct World, Initialize World, Simulate World, and Report Simulation Data. The Simulation User is the actor; the Simulation Input File, Simulation Output, World Objects, and Simulation Log File appear as external artifacts.]
Simulation Architecture
Simulation Architecture: Design Template
[Class diagram: the Interface and Control Layer (Virtual World, EventMgr), the Agent Layer of active objects (Agent, Event, Players), the Message Layer (Message and its subclasses), and the Passive Class Layer; SimMgmt classes manage the simulation and SimModels classes model the problem domain.]

Agent Layer (Active Objects): This layer consists of all active object classes. Active objects must be instances of some subclass of the abstract class Agent. The simulation progresses as a result of Events created and serviced by Agents. An Event has four components: a Sender agent, a Recvr agent, an instance of some Message subclass, and an event time. When one Agent wishes to interact with another, it must do so by creating an Event that defines a "future" time at which the interaction will occur. The message component defines an action to the Recvr agent and possibly data necessary to complete the action.

Message Layer: This layer contains all the derived subclasses of Message. These classes are used to pass data for servicing interaction events between Agents. Only recipient Agent classes know the content and use of instances of these classes. Methods of Agents receiving messages optionally take parameters which are instances of one (or more) of the Passive classes and return an instance of class Message or one of its subclasses. Instances of the root class Message carry no data and denote signals that require some action on the part of the receiver Agent.

Passive Class Layer: The Passive layer contains all classes that model problem data and inanimate objects of the simulated world. Agents make direct calls on passive objects, but must account for the simulation time consumed when doing so. Passive objects make direct calls to each other, if necessary. Passive objects may be passed from one Agent to another as part of an instance of some Message subclass.
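Below is a minimal C++ sketch (assumed, not part of the course code) of the Event structure described in the Agent Layer note above: an Event binds a Sender, a Recvr, a Message instance, and an event time. Agent and Message are simplified stand-ins; the accessor names (getSendr, getRecvr, getMsg, getTime) follow the design graphs later in the deck, but the surrounding framework (EventMgr, the Agent subclasses) is not shown.

// event_sketch.cpp -- a minimal sketch of the Event structure described above;
// Agent and Message are simplified stand-ins (in the full template Agent is an
// abstract class, and EventMgr, not shown here, orders events by time).
#include <iostream>
#include <memory>
#include <string>
#include <utility>

class Message { public: virtual ~Message() = default; };   // root class: a data-less signal

class Agent {                                               // base class for active objects
public:
    explicit Agent(std::string name) : name_(std::move(name)) {}
    virtual ~Agent() = default;
    const std::string& NameOf() const { return name_; }
private:
    std::string name_;
};

// An Event binds a Sender agent, a Recvr agent, a Message, and a "future"
// simulation time at which the interaction will occur.
class Event {
public:
    Event(Agent* sender, Agent* recvr, std::unique_ptr<Message> msg, double time)
        : sender_(sender), recvr_(recvr), msg_(std::move(msg)), time_(time) {}
    Agent*         getSendr() const { return sender_; }
    Agent*         getRecvr() const { return recvr_; }
    const Message* getMsg()   const { return msg_.get(); }
    double         getTime()  const { return time_; }
private:
    Agent* sender_;
    Agent* recvr_;
    std::unique_ptr<Message> msg_;
    double time_;
};

int main() {
    Agent asker("Student1"), replier("Student2");
    Event e(&asker, &replier, std::make_unique<Message>(), 5.0);  // interaction at time 5.0
    std::cout << e.getSendr()->NameOf() << " -> " << e.getRecvr()->NameOf()
              << " at t = " << e.getTime() << std::endl;
    return 0;
}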
Simulation Architecture: Student Conversation
[Class diagram instantiating the design template for the Student Conversation example: Interface and Control Layer (Conversation, EventMgr); Agent Layer (Agent, Student, Event, Players); Message Layer (Message, QuestionMsg, AnswerMsg); Passive Class Layer (Question, Answer).]
Design Graph: 1
Use Case 1 — 4 Reusable Methods, 9 New Methods
[Design graph rooted at 0: Main(), showing call dependencies among the methods listed below.]
Class Conversation: 1: Conversation()
Class Agent: 2: Agent(), 3: operator>>(), 4: Get()
Class Student: 5: Student(), 6: Extract(), 7: Get()
Design Graph: 2
Use Case 2 — 5 Reusable Methods, 9 New Methods
[Design graph rooted at 0: Main(), showing call dependencies among the methods listed below.]
Class Conversation: 1: Conversation(), 8: Initialize()
Class Agent: 2: Agent(), 3: operator>>(), 4: Get(), 11: NameOf(), 21: ~Agent()
Class Student: 5: Student(), 6: Extract(), 7: Get(), 13: Initialize(), 16: AcceptQuest()
Class Event: 18: Event()
Class EventMgr: 3: EventMgr(), 19: postEvent()
Class Message: 10: Message()
Class SpeakMsg: 17: SpeakMsg()
Class Players: 9: Players(), 12: setAgent(), 14: getAgent(), 15: getOther(), 20: ~Players()
Design Graph: 3
Use Case 3 — 2 Reusable Methods, 3 New Methods
[Design graph rooted at 0: Main(), showing call dependencies among the methods listed below.]
Class Conversation: 1: Conversation(), 8: Initialize(), 22: Insert()
Class Student: 5: Student(), 6: Extract(), 7: Get(), 13: Initialize(), 16: AcceptQuest(), 24: Insert(), 25: Put()
Class Agent: 2: Agent(), 3: operator>>(), 4: Get(), 11: NameOf(), 21: ~Agent(), 23: oper<<(), 26: Put()
Class Event: 18: Event()
Class EventMgr: 3: EventMgr(), 19: postEvent()
Class Message: 10: Message()
Class SpeakMsg: 17: SpeakMsg()
Class Players: 9: Players(), 12: setAgent(), 14: getAgent(), 15: getOther(), 20: ~Players()
Design Graph: 4
Use Case 4 — 10 Reusable Methods, 8 New Methods
[Design graph rooted at 0: Main(), showing call dependencies among the methods listed below.]
Class Conversation: 1: Conversation(), 8: Initialize(), 22: Insert(), 44: Simulate()
Class EventMgr: 3: EventMgr(), 19: postEvent(), 27: moreEvents(), 28: getNextEvent()
Class Event: 18: Event(), 29: oper<<(), 35: getRecvr(), 36: getMsg(), 43: ~Event()
Class Message: 10: Message(), 30: Oper<<(), 31: Insert(), 32: Put(), 42: ~Message()
Class Players: 9: Players(), 12: setAgent(), 14: getAgent(), 15: getOther(), 20: ~Players()
Class SpeakMsg: 17: SpeakMsg(), 33: Insert(), 34: Put(), 38: getHandlr()
Class Agent: 2: Agent(), 3: operator>>(), 4: Get(), 11: NameOf(), 21: ~Agent(), 23: oper<<(), 26: Put()
Class Student: 5: Student(), 6: Extract(), 7: Get(), 13: Initialize(), 16: AcceptQuest(), 24: Insert(), 25: Put(), 37: Dispatch(), 39: doQuestion(), 40: AcceptAnswr(), 41: doAnswer()
Design Graph: 5
Use Case 5 — 1 Reusable Method, 3 New Methods
[Design graph rooted at 0: Main(), showing call dependencies to methods 45, 46, and 47.]
Totals: 48 methods (21 reusable, 27 new) and 8 classes (4 reusable, 4 new).
Class Event: 18: Event(), 29: oper<<(), 35: getRecvr(), 36: getMsg(), 43: ~Event()
Class Agent: 2: Agent(), 3: operator>>(), 4: Get(), 11: NameOf(), 21: ~Agent(), 23: oper<<(), 26: Put()
Class Student: 5: Student(), 6: Extract(), 7: Get(), 13: Initialize(), 16: AcceptQuest(), 24: Insert(), 25: Put(), 37: Dispatch(), 39: doQuestion(), 40: AcceptAnswr(), 41: doAnswer(), 47: ~Student()
Class EventMgr: 3: EventMgr(), 19: postEvent(), 27: moreEvents(), 28: getNextEvent(), 48: ~EventMgr()
Class Conversation: 1: Conversation(), 8: Initialize(), 22: Insert(), 44: Simulate(), 45: WrapUp(), 46: ~Conversation()
Class Message: 10: Message(), 30: Oper<<(), 31: Insert(), 32: Put(), 42: ~Message()
Class Players: 9: Players(), 12: setAgent(), 14: getAgent(), 15: getOther(), 20: ~Players()
Class SpeakMsg: 17: SpeakMsg(), 33: Insert(), 34: Put(), 38: getHandlr()
Scheduling
[Figure: development schedule. The design-graph nodes of Use Cases 1-5 (methods 1-47, rooted at 0: Main()) are laid out along the development timeline in the order in which they are designed, coded, and unit tested.]
Scheduling
[Figure: the same development schedule with the design-graph nodes labeled by method name (e.g., Agent::Agent(), EventMgr::postEvent(), Conversation::Simulate(), Conversation::WrapUp()), grouped by use case along the development-time axis.]