The hangman problem … the final challenge
The rules of hangman have changed with respect to words that contain the
same letter multiple times. Instead of all instances of the letter being
revealed when the letter is played, only a single instance needs to be displayed.
For example, in the word banana, if the player plays the letter ‘a’ as the first
move of the hangman game then the game could display either:
• -a---- or
• ---a-- or
• -----a
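One way to implement the new rule can be sketched as follows. This is an illustrative sketch only, not anyone's actual solution: the class and method names (`SingleReveal`, `revealOne`) are invented, and your colleague's design will differ.

```java
// Hypothetical sketch of the changed reveal logic: when a guessed letter
// occurs several times in the word, reveal only the first still-hidden
// occurrence instead of all of them.
public class SingleReveal {
    // Reveal at most one hidden occurrence of 'guess' in 'display'.
    static String revealOne(String word, String display, char guess) {
        char[] shown = display.toCharArray();
        for (int i = 0; i < word.length(); i++) {
            if (word.charAt(i) == guess && shown[i] == '-') {
                shown[i] = guess;   // reveal only this instance...
                break;              // ...under the old rules there was no break
            }
        }
        return new String(shown);
    }

    public static void main(String[] args) {
        System.out.println(revealOne("banana", "------", 'a')); // -a----
        System.out.println(revealOne("banana", "-a----", 'a')); // -a-a--
    }
}
```

Note that this particular sketch always reveals the leftmost hidden occurrence; a design that picks a random occurrence would also satisfy the new rules, which is exactly why the example above lists three acceptable displays.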
Your task is to analyse your design and identify which classes/methods need to be
updated in order to correctly implement these new rules.
Which tests need to be updated? Which can be re-executed as-is? Which are now obsolete?
Note this analysis somewhere for future use, but do not share the information with
your colleagues.
November 2011
CSC7302: Testing & Metrics
The hangman problem … the final challenge
Now, take the hangman code of a colleague:
• Implement the new rules by changing their code
• Implement the new tests by changing their code
• Execute the tests
• How good was their design?
• Did you have to change something that was not anticipated by the original design?
• Did you change something that you did not have to change?
• Which metrics have changed as a result of this update?
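When updating the tests, note that under the new rules a test can no longer assert one fixed display after a guess, because any single revealed occurrence is acceptable. A revised oracle can check the *property* instead. This is a hedged sketch; the helper name `isValidAfterFirstA` is invented for illustration and is not from any colleague's code.

```java
// Sketch of a rule-change-proof test oracle for playing 'a' as the first
// move on "banana": exactly one 'a' may be shown, at a position where the
// word really has an 'a', and everything else must stay hidden.
public class NewRuleTest {
    static boolean isValidAfterFirstA(String display) {
        if (display.length() != 6) return false;
        int revealed = 0;
        for (int i = 0; i < 6; i++) {
            char c = display.charAt(i);
            if (c == 'a') {
                if ("banana".charAt(i) != 'a') return false; // wrong position
                revealed++;
            } else if (c != '-') {
                return false;                                // other letter shown
            }
        }
        return revealed == 1;   // new rule: exactly one instance revealed
    }

    public static void main(String[] args) {
        System.out.println(isValidAfterFirstA("-a----")); // true
        System.out.println(isValidAfterFirstA("-----a")); // true
        System.out.println(isValidAfterFirstA("-a-a-a")); // false: the old expectation is obsolete
    }
}
```

A test written this way accepts all three displays listed earlier, while the old test that expected `-a-a-a` must be classified as obsolete.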
The hangman problem … the final challenge
Tool/Method Analysis:
Which tools/methods did you find most useful in the development of the hangman?
What sort of things were you doing that you think could/should be automated?
What sort of support would you like for building quality code that you could imagine
being possible (in the future)?
What have you learned about metrics and tests?
Advanced Testing Tools
Automated Testing Tools:
• Configuration management and continuous integration
• Functional testing
• Regression testing
• GUI testing
• Tracing, profiling and performance monitoring
Automatic Test Generation:
• Symbolic Execution
• Model-Based Testing
• Mutation Testing
• Random Testing
What about testing with other verification techniques?
• Model Checking
• Theorem proving and refinement…
Automated Testing Tools
When Should a Test Be Automated?
Brian Marick, Testing Foundations, 1998.
There are hundreds of tools/plugins:
• IDE-specific
• System-type-specific
• Problem-domain-specific
• Language-specific
When starting on an industrial project, learn what is already included in the build tools.
Automatic Test Generation
Symbolic Execution
A type of abstract interpretation
Dynamic analysis of programs by tracking symbolic rather than actual values
Analysis is path-based – advantages and disadvantages
It is found in many advanced testing tools (such as Java PathFinder).
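The core idea can be shown on a toy program. This is an illustrative sketch, not a real symbolic-execution engine: the path conditions are written out by hand, and the "solver" is a brute-force search, which is enough to demonstrate how one concrete test input is generated per path.

```java
import java.util.function.IntPredicate;

// Minimal illustration of symbolic execution: for the function below, a
// symbolic executor would treat x as a symbol X, collect one path condition
// per branch (2*X > 10 and 2*X <= 10), and hand each to a constraint
// solver to obtain a concrete, path-covering test input.
public class SymExecSketch {
    // program under test: two paths, guarded by (x * 2 > 10)
    static int underTest(int x) {
        if (x * 2 > 10) return 1;   // path condition: 2*X > 10
        else            return 0;   // path condition: 2*X <= 10
    }

    // "solve" a path condition by brute-force search over a small domain;
    // a real tool would use a decision procedure (e.g. an SMT solver)
    static Integer solve(IntPredicate pathCondition) {
        for (int x = -100; x <= 100; x++)
            if (pathCondition.test(x)) return x;
        return null; // infeasible path (none in this example)
    }

    public static void main(String[] args) {
        int a = solve(x -> x * 2 > 10);   // covers the 'then' path
        int b = solve(x -> x * 2 <= 10);  // covers the 'else' path
        System.out.println(underTest(a) + " " + underTest(b)); // 1 0
    }
}
```

This also hints at the path-based trade-off mentioned above: each feasible path yields a precise test, but the number of paths grows quickly with branching and loops.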
Automatic Test Generation
Symbolic Execution
L. A. Clarke. 1976. A System to Generate Test Data and Symbolically Execute Programs. IEEE Trans. Softw. Eng. 2, 3 (May 1976), 215-222.
James C. King. 1976. Symbolic execution and program testing. Commun. ACM 19, 7 (July 1976), 385-394.
P. David Coward. 1988. Symbolic execution systems—a review. Softw. Eng. J. 3, 6 (November 1988), 229-239.
Also see: A Bibliography of Papers on Symbolic Execution Technique and its Applications.
Automatic Test Generation
Model-Based Testing
B. Korel. 1990. Automated Software Test Data Generation. IEEE Trans. Softw. Eng. 16, 8 (August 1990), 870-879.
S. R. Dalal, A. Jain, N. Karunanithi, J. M. Leaton, C. M. Lott, G. C. Patton, and B. M. Horowitz. 1999. Model-based testing in practice. In Proceedings of the 21st International Conference on Software Engineering (ICSE '99). ACM, New York, NY, USA, 285-294.
Arilo C. Dias Neto, Rajesh Subramanyan, Marlon Vieira, and Guilherme H. Travassos. 2007. A survey on model-based testing approaches: a systematic review. In Proceedings of the 1st ACM International Workshop on Empirical Assessment of Software Engineering Languages and Technologies, held in conjunction with the 22nd IEEE/ACM International Conference on Automated Software Engineering (ASE) 2007.
Automatic Test Generation
Model-Based Testing
You should know about techniques for your favourite languages:
• Modelling languages:
• Implementation languages:
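The essence of model-based testing can be sketched in a few lines. This is a hedged toy example: the turnstile model (states `locked`/`unlocked`, events `coin`/`push`) is a standard illustration chosen here, not taken from the lecture, and real tools generate far richer test suites from far richer models.

```java
import java.util.*;

// Sketch of model-based test generation: the system is modelled as a small
// finite-state machine, and abstract tests are generated to cover every
// transition — each test is a shortest event path to a state (found by BFS)
// followed by the transition's event.
public class MbtSketch {
    // model: state -> (event -> next state)
    static final Map<String, Map<String, String>> MODEL = Map.of(
        "locked",   Map.of("coin", "unlocked", "push", "locked"),
        "unlocked", Map.of("coin", "unlocked", "push", "locked"));

    static List<List<String>> generateTests() {
        // BFS: shortest event path from the initial state to each state
        Map<String, List<String>> pathTo = new HashMap<>();
        pathTo.put("locked", new ArrayList<>());
        Deque<String> queue = new ArrayDeque<>(List.of("locked"));
        while (!queue.isEmpty()) {
            String s = queue.poll();
            for (var e : MODEL.get(s).entrySet())
                if (!pathTo.containsKey(e.getValue())) {
                    List<String> p = new ArrayList<>(pathTo.get(s));
                    p.add(e.getKey());
                    pathTo.put(e.getValue(), p);
                    queue.add(e.getValue());
                }
        }
        // one abstract test per transition in the model
        List<List<String>> tests = new ArrayList<>();
        for (String s : MODEL.keySet())
            for (String event : MODEL.get(s).keySet()) {
                List<String> t = new ArrayList<>(pathTo.get(s));
                t.add(event);
                tests.add(t);
            }
        return tests;
    }

    public static void main(String[] args) {
        System.out.println(generateTests().size() + " abstract tests"); // 4 abstract tests
    }
}
```

Each generated event sequence is an *abstract* test; an adapter layer would still be needed to map events onto concrete calls in the implementation under test.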
Automatic Test Generation
Mutation Testing
Yue Jia and Mark Harman. An Analysis and Survey of the Development of Mutation Testing. IEEE Transactions on Software Engineering, vol. 37, no. 5, pp. 649-678, Sept.-Oct. 2011.
Mutation Testing
Mutation analysis concludes with an adequacy score, known as the mutation score, which indicates the quality of the input test set. The mutation score (MS) is the ratio of the number of killed mutants to the total number of non-equivalent mutants: MS = #killed / (#mutants − #equivalent). The goal of mutation analysis is to raise the mutation score to 1, indicating that the test set T is sufficient to detect all the faults denoted by the mutants.
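The score computation can be illustrated end to end on a toy `max` function. This is a hand-made sketch: real tools generate mutants automatically by applying mutation operators to the source, whereas here the mutants and the (known) equivalent mutant are written out by hand.

```java
import java.util.function.IntBinaryOperator;

// Sketch of mutation analysis: run a test set against hand-written mutants
// of max() and compute MS = killed / (total - equivalent).
public class MutationSketch {
    static int max(int a, int b) { return a > b ? a : b; }  // original

    public static void main(String[] args) {
        // mutants, each obtained by one small syntactic change
        IntBinaryOperator[] mutants = {
            (a, b) -> a < b ? a : b,   // relational operator replacement: becomes min
            (a, b) -> a >= b ? a : b,  // > to >= : an EQUIVALENT mutant of max
            (a, b) -> a,               // return-value replacement
        };
        int[][] tests = { {1, 2}, {5, 3}, {4, 4} };

        int killed = 0, equivalent = 1; // mutant #2 is known to be equivalent
        for (IntBinaryOperator m : mutants)
            for (int[] t : tests)
                if (m.applyAsInt(t[0], t[1]) != max(t[0], t[1])) {
                    killed++;           // some test distinguishes this mutant
                    break;
                }

        double ms = (double) killed / (mutants.length - equivalent);
        System.out.println("MS = " + ms);  // MS = 1.0: both non-equivalent mutants killed
    }
}
```

The equivalent mutant (`>=` instead of `>`) behaves identically to `max` on every input, which is why it must be excluded from the denominator; detecting such mutants automatically is undecidable in general, one of the practical costs of mutation testing discussed in the survey above.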
Automatic Test Generation
Random testing
Writing unit tests is often tedious, difficult and time-consuming, so many
software engineers have developed techniques and tools for automatically
generating random unit tests.
This approach has both advantages and disadvantages: can you think of some?
For Java, there are a number of free tools. Consider, for example:
If you want to try one of these out, there is a simple Eclipse plugin for
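The basic shape of random testing is easy to sketch: generate random inputs, execute the unit under test, and check a general property (an oracle) rather than hand-picked expected values. The buggy `myAbs` below is seeded for illustration; the fixed seed is a deliberate design choice so that any failure is reproducible.

```java
import java.util.Random;

// Sketch of property-based random testing. The unit under test has a
// classic bug: -Integer.MIN_VALUE overflows back to Integer.MIN_VALUE,
// so myAbs can return a negative number.
public class RandomTestSketch {
    static int myAbs(int x) { return x < 0 ? -x : x; }  // unit under test

    public static void main(String[] args) {
        Random rnd = new Random(42);      // fixed seed: runs are reproducible
        int failures = 0;
        for (int i = 0; i < 10_000; i++) {
            int x = rnd.nextInt();        // random input generation
            if (myAbs(x) < 0) {           // property/oracle: |x| is never negative
                failures++;
                System.out.println("counterexample: " + x);
            }
        }
        System.out.println(failures + " failing inputs out of 10000");
    }
}
```

Note that 10,000 uniform draws are very unlikely to hit the single failing input `Integer.MIN_VALUE`, which illustrates one of the disadvantages asked about above: purely random testing can easily miss rare corner cases, which is why feedback-directed and directed variants (Randoop, DART, below) were developed.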
Automatic Test Generation
Random testing: some additional reading
Joe W. Duran and Simeon C. Ntafos. An Evaluation of Random Testing. IEEE Transactions on Software Engineering, vol. SE-10, no. 4, pp. 438-444, July 1984.
Christoph Csallner and Yannis Smaragdakis. JCrasher: an automatic robustness tester for Java. Softw. Pract. Exper. 34, 11 (September 2004), 1025-1050.
Patrice Godefroid, Nils Klarlund, and Koushik Sen. DART: directed automated random testing. In Proceedings of the 2005 ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI '05).
Catherine Oriat. Jartege: A Tool for Random Generation of Unit Tests for Java Classes. In Quality of Software Architectures and Software Quality, Lecture Notes in Computer Science, Volume 3712, 2005, 242-256.
Dick Hamlet. 2006. When only random testing will do. In Proceedings of the 1st International Workshop on Random Testing (RT '06).
Carlos Pacheco and Michael D. Ernst. 2007. Randoop: feedback-directed random testing for Java. In Companion to the 22nd ACM SIGPLAN Conference on Object-Oriented Programming Systems and Applications (OOPSLA '07).
Model Checking
Kenneth Lauchlin McMillan. 1992. Symbolic Model Checking: An Approach to the State Explosion Problem. Ph.D. Dissertation. Carnegie Mellon University, Pittsburgh, PA.
Edmund M. Clarke, Orna Grumberg, and Doron A. Peled. Model Checking. MIT Press, 1999.
Edmund M. Clarke, E. Allen Emerson, and Joseph Sifakis. 2009. Model checking: algorithmic verification and debugging. Commun. ACM 52, 11 (November 2009), 74-84.
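In its simplest explicit-state form, model checking is exhaustive reachability analysis. The toy system and invariant below are invented for illustration; real checkers handle vastly larger state spaces and fight the state-explosion problem McMillan's thesis addresses with symbolic (BDD-based) encodings.

```java
import java.util.*;

// Sketch of explicit-state model checking: explore every reachable state of
// a tiny counter system by BFS and check the invariant "counter never
// exceeds 3" in each one. Unlike testing, this covers ALL reachable states.
public class ModelCheckSketch {
    // transition relation: from state n we may step to (n+1) mod 4, or reset to 0
    static List<Integer> next(int n) { return List.of((n + 1) % 4, 0); }

    // returns a violating state if one is reachable, or -1 if the invariant holds
    static int check(int init) {
        Set<Integer> seen = new HashSet<>(List.of(init));
        Deque<Integer> frontier = new ArrayDeque<>(List.of(init));
        while (!frontier.isEmpty()) {
            int s = frontier.poll();
            if (s > 3) return s;               // invariant violated: counterexample
            for (int t : next(s))
                if (seen.add(t)) frontier.add(t);  // enqueue unseen successors
        }
        return -1;                              // invariant holds on all reachable states
    }

    public static void main(String[] args) {
        System.out.println(check(0));  // -1: the invariant holds
    }
}
```

When a violation is found, a model checker returns the path to it as a counterexample trace, which is precisely the "algorithmic debugging" aspect emphasised in the Clarke, Emerson and Sifakis article.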
Theorem proving and Testing
QUESTION: What are advantages and disadvantages of dynamic vs static analysis
approaches to program verification?
Combining dynamic and static methods for analyzing programs: could/should we get the best of both worlds?
Greta Yorsh, Thomas Ball, and Mooly Sagiv. 2006. Testing, abstraction, theorem proving: better together! In Proceedings of the 2006 International Symposium on Software Testing and Analysis (ISSTA '06). ACM, New York, NY, USA, 145-156.
Telecom SudParis