SE 501 Software Development
Processes
Dr. Basit Qureshi
College of Computer Science and Information Systems
Prince Sultan University
Lecture for Week 4
Contents
• What are Metrics…? Process, Project, Product
• Software Measurement
– Size-Oriented Metrics
– Function-Oriented Metrics
– Object-Oriented Metrics
– Metrics for Software Quality
• More on Metrics…
– Metrics for Requirements
– Metrics for Design
– Metrics for Source Code
– Metrics for Web Applications
– Metrics for Testing
– Metrics for Maintenance
– Metrics for Quality
• Summary
SE 501 Dr. Basit Qureshi
Bibliography
• Pressman, R. S. Software Engineering: A Practitioner's Approach, 7th ed., McGraw-Hill, 2009.
• Managing Software Development, version 2, Companion Guide, Carnegie Mellon University, pp. 169-175, 2000.
• Sommerville, I. Software Engineering, 9th ed., Addison-Wesley, 2010.
• Humphrey, W. S. Managing the Software Process, Addison-Wesley, 1999.
• Hunter, R. B. and Thayer, R. H. Software Process Improvement, IEEE Press, Los Alamitos, 2001.
WHAT ARE METRICS…?
What are metrics…?
• Metrics are:
– Measurements
– Collections of data about project activities
– Resources
– Deliverables
• Metrics can be used to help estimate projects,
measure project progress and performance,
and quantify product attributes
What are Metrics?
• Software process and project metrics are quantitative
measures
• They are a management tool
• They offer insight into the effectiveness of the software
process and the projects that are conducted using the
process as a framework
• Basic quality and productivity data are collected. These
data are analyzed, compared against past averages, and
assessed
• The goal is to determine whether quality and productivity
improvements have occurred
• The data can also be used to pinpoint problem areas
• Remedies can then be developed and the software
process can be improved
A Quote on Measurement
“When you can measure what you are speaking about
and express it in numbers, you know something about
it; but when you cannot measure, when you cannot
express it in numbers, your knowledge is of a meager
and unsatisfactory kind; it may be the beginning of
knowledge, but you have scarcely, in your thoughts,
advanced to the stage of science.”
LORD WILLIAM KELVIN (1824 – 1907)
Reasons to Measure
• To characterize in order to
– Gain an understanding of processes, products, resources,
and environments
– Establish baselines for comparisons with future assessments
• To evaluate in order to
– Determine status with respect to plans
• To predict in order to
– Gain understanding of relationships among processes and
products
– Build models of these relationships
• To improve in order to
– Identify roadblocks, root causes, inefficiencies, and other
opportunities for improving product quality and process
performance
Software metrics…?
• Software metrics can be classified into three
categories:
– Product metrics (size, complexity, performance)
– Process metrics (used to improve development and
maintenance)
– Project metrics (cost, schedule, productivity)
Metrics in Process Domain
• Process metrics are collected across all projects and
over long periods of time
• They are used for making strategic decisions
• The intent is to provide a set of process indicators that
lead to long-term software process improvement
• The only way to know how/where to improve any
process is to
– Measure specific attributes of the process
– Develop a set of meaningful metrics based on these
attributes
– Use the metrics to provide indicators that will lead to a
strategy for improvement
Metrics in Process Domain
• We measure the effectiveness of a process by
deriving a set of metrics based on outcomes of
the process such as
– Errors uncovered before release of the software
– Defects delivered to and reported by the end users
– Work products delivered
– Human effort expended
– Calendar time expended
– Conformance to the schedule
– Time and effort to complete each generic activity
Metrics in Project Domain
• Project metrics enable a software project manager to
– Assess the status of an ongoing project
– Track potential risks
– Uncover problem areas before their status becomes critical
– Adjust work flow or tasks
– Evaluate the project team's ability to control quality of software work products
• Many of the same metrics are used in both the
process and project domain
• Project metrics are used for making tactical decisions
– They are used to adapt project workflow and technical
activities
Metrics in Project Domain
• The first application of project metrics occurs during estimation
– Metrics from past projects are used as a basis for estimating time and
effort
• As a project proceeds, the amount of time and effort expended
are compared to original estimates
• As technical work commences, other project metrics become
important
– Production rates are measured (represented in terms of models created,
review hours, function points, and delivered source lines of code)
– Errors uncovered during each generic framework activity (i.e.,
communication, planning, modeling, construction, deployment) are
measured
Metrics in Project Domain
• Project metrics are used to
– Minimize the development schedule by making the adjustments
necessary to avoid delays and mitigate potential problems and risks
– Assess product quality on an ongoing basis and, when necessary, to
modify the technical approach to improve quality
Benefits
• As quality improves, defects are minimized
• As defects go down, the amount of rework required during the
project is also reduced
• As rework goes down, the overall project cost is reduced
MORE ON METRICS…?
Measures, Metrics and Indicators
• A measure provides a quantitative indication of the extent, amount, dimension, capacity, or size of some attribute of a product or process.
• The IEEE glossary defines a metric as "a quantitative measure of the degree to which a system, component, or process possesses a given attribute."
• An indicator is a metric or combination of metrics that provides insight into the software process, a software project, or the product itself.
These slides are designed to accompany Software Engineering: A Practitioner’s Approach, 7/e
(McGraw-Hill 2009). Slides copyright 2009 by Roger Pressman.
A Good Manager Measures
(Diagram: measurement spans both process and product: process metrics and project metrics are derived from the process, product metrics from the product. What do we use as a basis? Size? Function?)
Why Do We Measure?
• Assess the status of an ongoing project
• Track potential risks
• Uncover problem areas before they go "critical"
• Adjust work flow or tasks
• Evaluate the project team's ability to control quality of software work products
Process Measurement
• We measure the efficacy of a software process indirectly; that is, we derive a set of metrics based on the outcomes that can be derived from the process.
• Outcomes include:
– measures of errors uncovered before release of the software
– defects delivered to and reported by end users
– work products delivered (productivity)
– human effort expended
– calendar time expended
– schedule conformance
– other measures
• We also derive process metrics by measuring the characteristics of specific software engineering tasks.
Process Metrics Guidelines
• Use common sense and organizational sensitivity when interpreting metrics data.
• Provide regular feedback to the individuals and teams who collect measures and metrics.
• Don't use metrics to appraise individuals.
• Work with practitioners and teams to set clear goals and metrics that will be used to achieve them.
• Never use metrics to threaten individuals or teams.
• Metrics data that indicate a problem area should not be considered "negative." These data are merely an indicator for process improvement.
• Don't obsess on a single metric to the exclusion of other important metrics.
Software Process Improvement
(Diagram: the SPI cycle: process metrics and a process model feed software process improvement (SPI), which produces improvement goals and process improvement recommendations.)
Process Metrics
• Quality-related
– focus on quality of work products and deliverables
• Productivity-related
– production of work products related to effort expended
• Statistical SQA data
– error categorization & analysis
• Defect removal efficiency
– propagation of errors from process activity to activity
• Reuse data
– the number of components produced and their degree of reusability
Project Metrics
• Effort/time per software engineering task
• Errors uncovered per review hour
• Scheduled vs. actual milestone dates
• Changes (number) and their characteristics
• Distribution of effort on software engineering tasks
A. Metrics for Requirements Model
• The function point metric (FP), first proposed by Albrecht
[1979], can be used effectively as a means for measuring the
functionality delivered by a system.
• Function points are derived using an empirical relationship
based on countable (direct) measures of software's information
domain and assessments of software complexity
• Information domain values are defined in the following manner:
– number of external inputs (EIs)
– number of external outputs (EOs)
– number of external inquiries (EQs)
– number of internal logical files (ILFs)
– number of external interface files (EIFs)
• Metrics for Specification Quality [1993]
B. Metrics for Design Model
• 1. Architectural Design Metrics
• 2. Object Oriented Design Metrics
• 3. Class Oriented Metrics
B. Metrics for Design Model
1. Architectural Design Metrics
• Defined by Card and Glass [1990]:
– Structural complexity = g(fan-out)
– Data complexity = f(input & output variables, fan-out)
– System complexity = h(structural & data complexity)
• HK metric: architectural complexity as a function of fan-in and fan-out
• Morphology metrics: a function of the number of modules and the number of interfaces between modules
• Design Structure Quality Index (DSQI), proposed by the US Air Force Systems Command, 1987
B. Metrics for Design Model
2. Object-Oriented Metrics
• Whitmire [1997] described nine measurable characteristics of OO design, including:
– Number of scenario scripts (use cases)
– Number of support classes (required to implement the system but not immediately related to the problem domain)
– Average number of support classes per key class (analysis class)
– Number of subsystems (an aggregation of classes that support a function visible to the end user of a system)
B. Metrics for Design Model
3. Class-Oriented Metrics
Proposed by Chidamber and Kemerer [Chi94]:
• weighted methods per class
• depth of the inheritance tree
• number of children
• coupling between object classes
• response for a class
• lack of cohesion in methods
The MOOD Metrics Suite [Har98b]:
• method inheritance factor
• coupling factor
• polymorphism factor
C. Metrics for Source Code
• Halstead's Software Science [1977]: based on the count and occurrence of operators and operands within a component or program
– n1 = number of distinct operators in the program
– n2 = number of distinct operands in the program
– N1 = total number of operator occurrences
– N2 = total number of operand occurrences
– Estimated length: N = n1 log2 n1 + n2 log2 n2
– Volume: V = N log2 (n1 + n2)
• It should be noted that Halstead's "laws" have generated substantial controversy, and many believe that the underlying theory has flaws. However, experimental verification for selected programming languages has been performed (e.g., [FEL89]).
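The Halstead measures above can be sketched in a few lines of Python. The counts in the example are invented for illustration; base-2 logarithms are the ones used in Halstead's definitions.

```python
import math

def halstead_metrics(n1, n2, N1, N2):
    """Basic Halstead measures from operator/operand counts.

    n1, n2: number of distinct operators / operands
    N1, N2: total occurrences of operators / operands
    """
    length = N1 + N2                                      # observed length N
    est_length = n1 * math.log2(n1) + n2 * math.log2(n2)  # estimated length
    volume = length * math.log2(n1 + n2)                  # V = N log2(n1 + n2)
    return est_length, volume

# Hypothetical component: 5 distinct operators (12 occurrences),
# 4 distinct operands (9 occurrences).
est_n, vol = halstead_metrics(n1=5, n2=4, N1=12, N2=9)
```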
D. Metrics for Web Applications
• Number of static Web pages (the end user has no control over the content displayed on the page)
• Number of dynamic Web pages (end-user actions result in customized content displayed on the page)
• Number of internal page links (pointers that provide a hyperlink to some other Web page within the WebApp)
• Number of persistent data objects
• Number of external systems interfaced
• Number of static content objects
• Number of dynamic content objects
• Number of executable functions
E. Metrics for Testing
• Testing effort can also be estimated using metrics derived from Halstead measures.
• Binder [Bin94] suggests a broad array of design metrics that have a direct influence on the "testability" of an OO system:
– Lack of cohesion in methods (LCOM)
– Percent public and protected (PAP)
– Public access to data members (PAD)
– Number of root classes (NOR)
– Fan-in (FIN)
– Number of children (NOC) and depth of the inheritance tree (DIT)
F. Maintenance Metrics
• IEEE Std. 982.1-1988 [IEE94] suggests a software maturity index (SMI) that provides an indication of the stability of a software product, based on changes that occur for each release of the product. The following information is determined:
– MT = the number of modules in the current release
– Fc = the number of modules in the current release that have been changed
– Fa = the number of modules in the current release that have been added
– Fd = the number of modules from the preceding release that were deleted in the current release
• The software maturity index is computed as:
SMI = [MT - (Fa + Fc + Fd)] / MT
• As SMI approaches 1.0, the product begins to stabilize.
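The SMI formula translates directly into code. The release figures in the example below are hypothetical.

```python
def software_maturity_index(mt, fc, fa, fd):
    """SMI = [MT - (Fa + Fc + Fd)] / MT; approaches 1.0 as the product stabilizes."""
    return (mt - (fa + fc + fd)) / mt

# Hypothetical release: 940 modules, of which 40 were changed,
# 15 added, and 5 deleted relative to the preceding release.
smi = software_maturity_index(mt=940, fc=40, fa=15, fd=5)  # ≈ 0.936
```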
G. Metrics for Measuring Quality
• Correctness — the degree to which a program operates according to specification
• Maintainability — the degree to which a program is amenable to change
• Integrity — the degree to which a program is impervious to outside attack
• Usability — the degree to which a program is easy to use
G. Metrics for Measuring Quality
Defect Removal Efficiency
DRE = E / (E + D)
where:
E = the number of errors found before delivery of the software to the end user
D = the number of defects found after delivery
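As a quick sketch (the counts below are invented), DRE is a one-line computation:

```python
def defect_removal_efficiency(errors_before, defects_after):
    """DRE = E / (E + D): the fraction of all problems caught before delivery."""
    return errors_before / (errors_before + defects_after)

# Hypothetical project: 180 errors found in reviews and testing,
# 20 defects reported by end users after delivery.
dre = defect_removal_efficiency(180, 20)  # 0.9
```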
SOFTWARE MEASUREMENTS
Categories for Software Measurement
• Two categories of software measurement
– Direct measures of the
• Software process (cost, effort, etc.)
• Software product (lines of code produced, execution speed, defects
reported over time, etc.)
– Indirect measures of the
• Software product (functionality, quality, complexity, efficiency,
reliability, maintainability, etc.)
• Project metrics can be consolidated to create process
metrics for an organization
KLOC
• Derived by normalizing quality and/or productivity
measures by considering the size of the software
produced
• Thousand lines of code (KLOC) are often chosen as the
normalization value
• Metrics include
– Errors per KLOC
– Errors in KLOC per person-month
– Defects per KLOC
– KLOC per person-month
– Dollars per KLOC
– Pages of documentation per KLOC
KLOC
• Size-oriented metrics are not universally accepted as
the best way to measure the software process
• Opponents argue that KLOC measurements
– Are dependent on the programming language
– Penalize well-designed but short programs
– Cannot easily accommodate nonprocedural languages
– Require a level of detail that may be difficult to achieve
• Many problems
– Ambiguity in counting: a line of code does not mean the same thing in assembler as in a high-level language
– What to count? Blank lines, comments, data definitions, only executable lines?
– Problematic for productivity studies: the amount of LOC is negatively correlated with design efficiency
Example: LOC Approach
Average productivity for systems of this type = 620 LOC/pm.
Burdened labor rate = $8000 per month; the cost per line of code is approximately $13.
Based on the LOC estimate and the historical productivity data, the total estimated project cost is $431,000 and the estimated effort is 54 person-months.
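The arithmetic behind these figures can be reproduced as follows. The slide does not state the underlying LOC estimate, so the 33,200 LOC used below is a hypothetical value chosen to be roughly consistent with the quoted totals.

```python
def loc_estimate(loc, productivity_loc_pm, labor_rate_per_month):
    """Derive cost per LOC, effort, and total cost from an LOC estimate."""
    cost_per_loc = labor_rate_per_month / productivity_loc_pm
    effort_pm = loc / productivity_loc_pm
    total_cost = effort_pm * labor_rate_per_month
    return cost_per_loc, effort_pm, total_cost

# $8000/month at 620 LOC/pm gives ~$12.90 per LOC (the "~$13" on the slide);
# a hypothetical 33,200-LOC estimate then yields roughly 54 person-months.
cost_per_loc, effort_pm, total_cost = loc_estimate(33_200, 620, 8000)
```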
Function Point (FP)
• Based on a combination of program characteristics
– external inputs and outputs
– user interactions
– external interfaces
– files used by the system
• A weight is associated with each of these
• The function point count is computed by multiplying
each raw count by the weight and summing all values
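A sketch of that weighted sum, using the average-complexity weights commonly quoted for the function point method (these weights and the element counts are illustrative assumptions; simple and complex elements carry lower and higher weights):

```python
# Average-complexity weights commonly quoted for the FP information
# domain values (assumed here for illustration).
WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def unadjusted_fp(counts):
    """Multiply each raw information-domain count by its weight and sum."""
    return sum(WEIGHTS[kind] * n for kind, n in counts.items())

# Hypothetical system: 20 inputs, 15 outputs, 10 inquiries,
# 8 internal logical files, 4 external interface files.
ufp = unadjusted_fp({"EI": 20, "EO": 15, "EQ": 10, "ILF": 8, "EIF": 4})  # 303
```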
Function Point (FP)
• Function point count modified by complexity of the
project
• FPs can be used to estimate LOC depending on a
language factor
– LOC = AVC * number of function points
– AVC is a language-dependent factor
• FPs can be very subjective and depend on the estimator
– Automatic function-point counting may be impossible
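The LOC = AVC * FP conversion is a one-liner. The AVC values below are commonly cited rough averages, included purely as illustrative assumptions, not authoritative figures.

```python
# Rough, commonly cited LOC-per-FP averages (assumptions for illustration).
AVC = {"assembly": 320, "C": 128, "C++": 64, "Java": 53}

def fp_to_loc(fp, language):
    """LOC = AVC * number of function points."""
    return AVC[language] * fp

loc_java = fp_to_loc(100, "Java")  # 5300
```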
Function Point (FP)
(Table: system elements and their complexity; each element type is weighted as simple, average, or complex.)
Function Point (FP)
• Count the following:
– inputs
– outputs
– inquiries
– logical internal files
– external interfaces
• Apply “simple, average, complex” multipliers
• Apply the 14 adjustment factors (such as designed for
reuse? in a heavy traffic environment? etc.)
Function Point (FP)
• Compute the technical complexity factor (TCF)
– Assign a value from 0 ("not present") to 5 ("strong influence throughout") to each of 14 factors, such as transaction rates and portability
– Add the 14 numbers to obtain the total degree of influence (DI)
– TCF = 0.65 + 0.01 * DI
– The technical complexity factor (TCF) therefore lies between 0.65 and 1.35
• The number of function points (FP) is given by
FP = UFP * TCF
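The TCF adjustment can be sketched as follows; the 14 influence ratings below are all set to 3 ("average") purely for illustration.

```python
def adjusted_fp(ufp, influence_ratings):
    """FP = UFP * TCF, with TCF = 0.65 + 0.01 * DI and DI = sum of 14 ratings."""
    assert len(influence_ratings) == 14
    assert all(0 <= f <= 5 for f in influence_ratings)
    di = sum(influence_ratings)   # total degree of influence, 0..70
    tcf = 0.65 + 0.01 * di        # bounded between 0.65 and 1.35
    return ufp * tcf

# A hypothetical UFP of 300 with every factor rated "average" (DI = 42, TCF = 1.07).
fp = adjusted_fp(300, [3] * 14)  # ≈ 321
```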
Function Point (FP)
Converting Function Points to Lines of Code
(Table: average LOC per function point, by programming language.)
Function Point (FP)
• Like the KLOC measure, function point use also has
proponents and opponents
• Proponents claim that
– FP is programming language independent
– FP is based on data that are more likely to be known in the
early stages of a project, making it more attractive as an
estimation approach
• Opponents claim that
– FP requires some “sleight of hand” because the computation
is based on subjective data
– Counts of the information domain can be difficult to collect
after the fact
– FP has no direct physical meaning…it’s just a number
Function Point (FP)
• Relationship between LOC and FP depends upon
– The programming language that is used to implement the software
– The quality of the design
• FP and LOC have been found to be relatively accurate predictors
of software development effort and cost
– However, a historical baseline of information must first be established
• LOC and FP can be used to estimate object-oriented software
projects
– However, they do not provide enough granularity for the schedule and
effort adjustments required in the iterations of an evolutionary or
incremental process
Example: FP Approach
The estimated number of FP is derived as:
FPestimated = count total × [0.65 + 0.01 × Σ(Fi)]
FPestimated = 375
Organizational average productivity = 6.5 FP/pm.
Burdened labor rate = $8000 per month, i.e., approximately $1230 per FP.
Based on the FP estimate and the historical productivity data, the total estimated
project cost is $461,000 and the estimated effort is 58 person-months.
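A short script reproduces the quoted figures:

```python
def fp_estimate(fp, productivity_fp_pm, labor_rate_per_month):
    """Derive cost per FP, effort, and total cost from an FP estimate."""
    cost_per_fp = labor_rate_per_month / productivity_fp_pm
    effort_pm = fp / productivity_fp_pm
    total_cost = fp * cost_per_fp
    return cost_per_fp, effort_pm, total_cost

# 375 FP at 6.5 FP/pm and $8000/month: ~$1231 per FP,
# ~57.7 person-months (rounds to 58), and a total of ~$461,500.
cost_per_fp, effort_pm, total_cost = fp_estimate(375, 6.5, 8000)
```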
Summary
• What are Metrics…
• Software Measurement
• Integrating Metrics in Software Process
• FP Estimation Example
• Summary