Monday, 18 February 2008

Book Review: Software Testing Fundamentals by Marnie L. Hutcheson

Driven to find ways of providing better information to her customers, Marnie Hutcheson has identified techniques for identifying and structuring her test scope, allowing her to provide estimates, negotiate and agree a prioritised scope, and report progress against it. All of which sounds like the makings of a great book.

But I have to say that it ended up as a strange little book. Unfortunately, a lot of it read like padding, so I ended up skipping past useful information and backtracking, and at times the book confused me.
I think you can safely skip chapter 1 and just read the summary at the end of the chapter.
If you skip Chapter 2 entirely you will miss some useful information, so I suggest skipping only to the middle of the chapter, where Marnie discusses Quality as "getting the right balance between timeliness, price, features, reliability, and support to achieve customer satisfaction". While she relates this to the product under test, I think you can relate it to the test process itself, and if you read the rest of the chapter in this light it becomes quite interesting.
The section in Chapter 2 relating to "picking the correct quality control tools for your environment" provides encouragement and advice on:
  1. automate your record keeping
  2. improve your documentation techniques
  3. use pictures to describe systems and processes
  4. choose appropriate methods and metrics that help you and/or the client
Chapter 3 starts slowly but explains some useful rules:
  1. state the methods you will follow, and why
  2. state assumptions
then goes on to examine some methods of organising test teams. But I think you can probably skip the chapter and just read the summary.
The book starts to add value in Chapter 4 where it discusses the "Most Important Tests (MITs) Method".
MITs, as I understood Marnie's explanation of it:
  1. Build a test 'inventory' of all the stuff you know: assumptions, features, requirements, specs, etc.
  2. Expand the inventory into 'tests'.
  3. Prioritise the inventory and related tests
  4. Estimate effort
  5. Cost the effort and negotiate the budget - as this dictates the scope of the inventory you can cover
  6. Define the scope - an ongoing activity
  7. Measure progress
  8. Reflect on what happened to allow you to improve
I've paraphrased the method above, as Marnie does not use those exact words; the italic words are my summary keywords for the approach.
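The scoping steps above can be sketched crudely in code. This is my own hypothetical illustration of the budget-driven idea (the function, item names, and numbers are mine, not Marnie's notation): rank the inventory by priority, estimate effort per item, and trim scope to a negotiated budget.

```python
# Hypothetical sketch of MITs-style scoping: prioritise an inventory,
# estimate effort, then cut scope to fit a negotiated budget.
# All names and figures are illustrative, not from the book.

def scope_to_budget(inventory, budget_hours):
    """Return the highest-priority items whose total effort fits the budget."""
    ranked = sorted(inventory, key=lambda item: item["priority"])  # 1 = most important
    in_scope, spent = [], 0
    for item in ranked:
        if spent + item["effort_hours"] <= budget_hours:
            in_scope.append(item["name"])
            spent += item["effort_hours"]
    return in_scope, spent

inventory = [
    {"name": "login happy path", "priority": 1, "effort_hours": 4},
    {"name": "password reset",   "priority": 2, "effort_hours": 6},
    {"name": "report export",    "priority": 3, "effort_hours": 8},
]
selected, hours = scope_to_budget(inventory, budget_hours=10)
print(selected, hours)  # ['login happy path', 'password reset'] 10
```

When the budget shrinks, the lowest-priority items drop out first, which mirrors the negotiation step: the budget dictates how much of the inventory you can cover.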
An exploration then follows of the MITs method in a plan driven, and in an Agile environment. The Agile environment does not match the Agile environments I have worked in so I found it difficult to relate exactly to what I do. Despite the useful thoughts presented here, I would have concerns if any tester in my Agile environment explained what they do in terms of the actual presentation in this book. I would have fewer concerns if they explained it in the 'spirit' of this book, or the generalised approach - perhaps using MITs lite.
The metrics chapter examines: time, cost, bug counts (per attribute: component, severity, found date, etc.), some test coverage metrics, a Defect Detection Percentage, etc. If you get stuck for metrics then you'll find some in here that might work for you.
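As a quick illustration of the Defect Detection Percentage mentioned above, here is the commonly quoted form of the metric (my wording and code, not a quote from the book): the share of all known defects that testing caught before release.

```python
def defect_detection_percentage(found_in_test, found_after_release):
    """DDP: percentage of all known defects that testing caught."""
    total = found_in_test + found_after_release
    if total == 0:
        return 0.0  # no known defects yet, so nothing to measure
    return 100.0 * found_in_test / total

# Testing found 90 defects; users later reported another 10:
print(defect_detection_percentage(90, 10))  # 90.0
```

Note the denominator keeps growing as field defects trickle in, so DDP for a release is only meaningful some time after shipping.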
Chapter 6, 7 and 8 discuss the test inventory in more detail.
  • How to construct one through analysis of requirements, interviews, usage, system structure, data, inspiration.
  • What they can look like: spreadsheets, PowerPoint, documents, tables
I found the approach and experience documented here generally useful.
The two chapters on risk result in a heavily analysed inventory to identify scope and priority. I think you should view this as a fairly typical presentation of risk and priority. The depth of coverage does highlight the importance that the MITs method places on analysis, agreement of importance, and negotiation of contract, and I think you will gain some value from reading them.
Two chapters cover structural path analysis. They cover one of my favourite techniques of drawing the diagrams to explain my understanding, and briefly mention using them in a dynamic way to build up a model as you 'do' something. Unfortunately, my main takeaway point was the use of path analysis as an estimation tool.
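The estimation use of path analysis can be sketched with McCabe's cyclomatic complexity, V(G) = E - N + 2P, which counts the linearly independent paths through a flow graph. This is my own illustration of the general idea, not the book's formula; the cost-per-path figure is an assumption.

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe's V(G) = E - N + 2P: the number of linearly independent paths."""
    return edges - nodes + 2 * components

# A flow graph with 9 edges and 7 nodes in one connected component:
paths = cyclomatic_complexity(edges=9, nodes=7)
print(paths)  # 4

# A crude estimate: independent paths x an assumed 30 minutes per path test
print(paths * 30, "minutes")  # 120 minutes
```

The path count gives a defensible lower bound on the number of tests to budget for, which is what makes it attractive as an estimation input.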
Data analysis - through paths, on boundaries, combinations - receives a quick overview and has some experience embedded within it.
So I come to the end of my reading, and I found this a difficult book to read. I did not find its structure conducive to my understanding. The coverage of some topics that I use a lot - path analysis, data analysis - didn't leave me believing that, at the end of reading, testers would use those techniques effectively.
I do recommend the basic principles of scoping and negotiation; if you haven't done that type of work before, and can get into the same rhythm as the book, then it can probably help you with those tasks.
The basic notion of inventories and outlines seems perfectly sensible to me, but as a whole the method seemed too heavy.
I have used approaches similar to those listed in the book because I thought they were necessary for the project. But in hindsight I think they were necessary for me, at that time in my development as a tester, on those projects.
I think Marnie knows when to use her methods deeply and when to use a lite version, and how to tailor it. But I don't think her full experiences of using the approach really get communicated to the reader to allow them to do that.
I think that this book aims at the right audience of Beginner/intermediate tester. But I don't think the book communicates its underlying principles as well as it could. I think you will need to work the book to dig them out. But if you haven't used some of the techniques I've listed in this review then I think you will gain experience by reading the book and trying them.

Sunday, 17 February 2008

5 acronyms that software testers should learn from

I count Google Video as one of the best - if not the best - self-training resources currently available to me. So here are 5 acronyms from Google Video that you can use for your self-education as a tester: AAFTT, BBST, GTAC, SHMOOCON, OWASP.

  1. AAFTT Agile Alliance Functional Testing Tools visioning workshop

  2. BBST Black Box Software Testing Course

  3. GTAC Google Test Automation Conference

  4. SHMOOCON The Shmoo Group's Software Hacking Conference

  5. OWASP Open Web Application Security Project

Monday, 4 February 2008

Book Review: Testing Computer Software by Kaner, Falk, Nguyen

I thought I'd read this again for review purposes. I didn't expect it to surprise me, but it did, massively.
One of the most realistic testing books available, it starts almost immediately, in the preface discussion "It's not done by the book". The book sets out its target audience as simply "the person doing the testing".

"...find and flag problems in a product, in the service of improving its quality. your reports of unreliability in the human-computer system are appropriate and important...You are one of few who will examine the full product in detail before it is shipped."
This 'definition' works better for me than Ron Patton's definition, but Ron's book reads more gently and easily. 'Testing Computer Software' contains a lot of very direct opinions from the authors, which you will see presented as authoritative is'ms (this is X), and which may distance readers who currently adopt a very different mindset - which I think happened to me on first reading. So if it happens to you, don't switch off, don't skim. Analyse your response. Read this book in a better way than I did.
Ron's book only targets beginners whereas Testing Computer Software works for both beginners and more experienced testers - if the 'more experienced' mind doesn't rebel too quickly.
I don't think I read this book properly the last time I read it. Certainly I wasn't doing explicit exploratory testing at the time and I think I dismissed the text as a little too ad-hoc. But just a few pages in I can now see that the book outlines some lessons that I then had to learn through experience e.g. "Always write down what you do and what happens when you run exploratory tests." Sigh, if only I had read the book properly first time round. 
Chapter 1 starts with an overview of 'exploratory' testing and a possible strategy that an experienced tester might adopt. A 'show' don't 'tell' approach to explaining software testing.
1st cycle of testing
  1. Start with an obvious and simple test
  2. Make some notes about what else needs testing
  3. Check the valid cases and see what happens
  4. Do some testing "on the fly"
  5. Summarize what you know about the program and its problems
2nd cycle of testing
  1. Review responses to problem reports and see what needs doing and what doesn't
  2. Review comments on problems that won't be fixed. They may suggest further tests.
  3. Use your notes from last time, add your new notes to them, and start testing
I found some notes that I made when I read the book first time through lodged inside the cover. At the time, I took umbrage at the notion that "the best tester is the one who gets the most bugs fixed." I now read that as "the best tester finds the bugs that matter most". But I still find myself hesitant about using the phrase "the best tester is", as that suggests an 'always' to me, and I really can't say that that statement would 'always' apply.
Chapter 2 sets out the various ground rule axioms so the reader doesn't have to learn them the hard way e.g. "you can't test a program completely" "you can't test it works correctly" etc.
Chapter 3 seems like a general reference section on test types but even here we find good old fashioned, hard won experience, box outs which challenge your thinking.
Chapter 5 - reporting and analysing bugs - works well on repeated reading, and everyone involved in testing would benefit from re-reading it occasionally.
Problem tracking (chapter 6) pulls no punches in its description of the 'real' world that I have encountered and you may well encounter on some projects:
  • "Don't expect any programmer or project manager to report any bugs"
  • "Plan to spend days arguing whether reports point to true bugs or just to design errors."
  • ...
Fortunately the chapter contains a lot of advice as well:
  • Each problem report results from an exercise in judgement where you reached the conclusion that a "change is worth considering"
  • Hints on dealing with 'similar' or 'duplicate' reports (and how to tell them apart)
The Test Design chapter (7) speeds through a whole set of useful 'stuff' and again has plenty of experience behind it to learn from.
Most people will not test printers, so chapter 8 presents the opportunity for the reader to deconstruct it and learn some generalised 'lessons'; otherwise the obvious temptation is to skip it and learn nothing.
Skipping across to Chapter 12, I see that I learned from this book the very important lesson that the test plan can act as a tool as well as a product. That alone was worth the initial time with the book, as it clarified a lot of thoughts in my head and helped me approach the project I worked on at the time in a different way: incrementally building up my thoughts on the testing, making my concerns and knowledge gaps visible.
I did not find this an easy book to read, even on a second reading. I frequently felt mentally bludgeoned by the authoritative sentence phrasing, which, for a book that embraces exploratory testing and contextual thinking, strikes me as a strange dichotomy.
But don't let this stop you reading the book. This book deserves its best-selling status, and still deserves to sell in vast quantities. The writers have crammed so much practical advice in here that I heartily recommend it.
I can see that my thinking has changed since I last read the book, which sadly suggests to me that the book wasn't a 'persuasive' argument for this 'experienced' tester at the time when I really needed it to help me. So please, gentle reader, if you consider yourself an 'experienced' tester, try to read it with a clear mind.
If you consider yourself a beginner then you will probably get a lot out of the book immediately - Chapter One alone is worth the price of admission.

Book Review: Systematic Software Testing by Rick Craig & Stefan Jaskiel

Anytime I approach a book now I try to get my initial prejudices and preconceptions sorted and out of my head, to let me approach the book more clearly. My initial preconceptions of Systematic Software Testing led to it sitting on my shelf for a long time. I've seen Rick lecture, and he does it very well - a little overly metric-focused compared to my general approach, but presumably that has worked for him and his clients in the past.
The title suggests a very formal, heavily IEEE-templated and structure-driven test methodology. But I also know that Rick has a military background, and that demands structure, heavy doses of pragmatism, decision making at different levels, setting objectives, and responding to needs on the ground. So I expect that practicality to shine through. I wonder what I'll really find inside...

We start the book by learning that the authors intended to write a contextual guide and, upon reading what they had written, discovered they had written a set of guidelines, which they encourage people to build from.
Chapter 1 provides us an introduction to ambiguity and the 'methodology' "Systematic Test and Evaluation Process" where the tester conducts their test thinking across a number of 'levels' - examples of levels provided include 'program' (unit), 'acceptance' etc.
Test Approach =
  • *Level
    • *Phase: Plan, Acquire (Analyse, Design, Implement), Measure
      • *Task
The process reads: as soon as some input becomes available to work from, start to plan at a high level; work out what you have to do in more detail, creating test designs as quickly as possible to highlight ambiguity and map them to their derivation source; do the testing; and measure how well you did. Obviously that is just a very high-level summary, but you can guess that this approach spools out a lot of cross-reference coverage information and documentation.
The authors promote the IEEE standard document templates as a basis for the test plans. Most readers will use the outlines provided in the book, rather than buying the expensive 'real' thing. STEP also provides a description of the roles involved in testing.
The Risk Analysis chapter promotes the categorisation of "Software Risk Analysis" and "Planning Risk Analysis". The process listed results in a very structured approach to weighting and evaluating the risk associated with features. I suspect that testers reading it may end up missing some risks: because the description concentrates on features, the tester will likely miss architectural risks related to the interaction of components, or environment risks. The description here focuses more on managing, evaluating and weighting the risks than on identifying them.
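As a rough illustration of feature-level risk weighting, here is a generic likelihood × impact scheme of my own (the function, feature names, and scores are hypothetical, not the book's worksheet):

```python
def risk_score(likelihood, impact):
    """Simple likelihood x impact weighting, each scored on a 1-5 scale."""
    return likelihood * impact

# Hypothetical features scored for risk, then ranked highest first:
features = {
    "payment processing":    risk_score(likelihood=4, impact=5),
    "report export":         risk_score(likelihood=3, impact=3),
    "profile avatar upload": risk_score(likelihood=3, impact=1),
}
for name, score in sorted(features.items(), key=lambda kv: -kv[1]):
    print(name, score)
```

Note this only ranks the features you already listed, which echoes the criticism above: a weighting scheme manages risks, it does not find the architectural or environmental ones missing from the list.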
The Artech House book page graciously hosts chapter 3, so you can view it for yourself.
Some advice in the chapter matches advice I give to testers myself: consider the audience, and highlight areas of the plan that you have uncertainties about in the plan itself.
The chapter focuses a lot on the 'test plan' as a 'thing' rather than on the process of test planning, so I cannot recommend this book in isolation. Read it in conjunction with a book that describes the test planning process, like Testing Computer Software or Patton's Software Testing. Also read James Bach's Test Plan Building Process and Test Plan Evaluation Model. Since many testers early in their career don't know how to communicate the results of their test planning, this chapter will serve them better than reading notes on the IEEE template in isolation.
The Detailed Test Planning chapter contains a discussion of acceptance testing - how and when to involve users - that should prove useful to testers early in their career or those approaching acceptance testing for the first time. The chapter also provides an overview of integration testing and system testing, but it feels very high level.
The unit testing discussion could probably shrink and have more effect; I think the general advice that "Developers should be responsible for creating and documenting unit testing processes" could stand alone as effective advice for the tester when the tester does not develop themselves.
The unit testing section here could do with a little updating in light of all the writing currently available for Test Driven Development - admittedly the reader is pointed at Kent Beck's White XP book "Extreme Programming Explained: embrace change" (although the title listed in the text of Systematic Software Testing (at least my copy) misses out the 'explained' part).
The Analysis and Design chapter starts with a useful discussion of how to turn the documents provided on a development project into a coverage outline, or inventory (to use the book's terminology rather than mine). It then goes on to expand that into a test coverage matrix, or Inventory Tracking Matrix (again, the book's terminology). This approach can result in a lot of time spent doing documentation rather than testing, but if your organisation views that as important then the discussion here may very well help you. I suggest that, where possible (if you do this), you use a test tool to help you maintain these links to avoid repeating work. High-level overviews of Equivalence Partitioning and other techniques then follow.
Some of the advice presented in this chapter - "It's a good idea to add the test cases created during ad-hoc testing to your repository" - reads as too absolute. Perhaps some ad-hoc tests should end up in the repository, but you should consider why. What did it cover that you hadn't covered? Did all the data go down a path you had covered before? Perhaps you should automate it as a data-driven test. Did it find a bug? And so on. So I had some problems with this chapter, and I recommend you treat it as a very high-level overview and read a software testing techniques book instead [Copeland][Beizer][Binder].
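To illustrate the data-driven suggestion above, here is a minimal sketch of my own (the validation function and the cases are hypothetical, not from the book): a single ad-hoc observation generalised into a table of inputs and expected results, all driven through the same test path.

```python
# Hypothetical data-driven sketch: an ad-hoc test that probed one odd input
# gets generalised so many inputs flow through the same checking path.

def is_valid_quantity(text):
    """Toy system under test: accepts whole numbers from 1 to 99."""
    return text.isdigit() and 1 <= int(text) <= 99

cases = [            # (input, expected) pairs - the 'data' in data-driven
    ("1", True),     # lower boundary
    ("99", True),    # upper boundary
    ("0", False),    # just below the range
    ("100", False),  # just above the range
    ("7a", False),   # non-numeric
]
for text, expected in cases:
    assert is_valid_quantity(text) == expected, f"failed for {text!r}"
print("all cases passed")
```

This is the test for "did all the data go down the same path?": if every row exercises identical logic, one parameterised test with a data table earns its place in the repository better than five near-identical scripted cases.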
The Test Implementation chapter focuses on the environment issues related to testing. This section does provide useful advice, describing the importance of getting the right people involved, and some useful information on the data, but again it provides an overview rather than a detailed analysis. The book then moves on to test automation, so the chapter tries to cover a lot of ground.
The metrics section came in shorter than I expected and provides various defect-based metrics and coverage metrics. I hoped to find a better discussion of metrics here, but again it felt fairly superficial, and unfortunately the metrics used didn't seem to justify all the 'other' work the tester did. Filling in all those forms and documents and writing the test cases didn't seem to contribute to the overall test effectiveness metrics - I did expect to see that 'formalism' contributing to the metrics, to help justify it. You can find defects without all that formalism, and you can track coverage achieved without all that formalism. So I found myself a little dissatisfied with this, but the chapter presents the typical metrics reported on many projects, so you can see the formulas and make up your own mind whether they actually measure 'test effectiveness'.
The short, and yet useful, Test Organisation chapter discusses different ways to approach the organisation of test teams.
Chapter 9 - 'The software tester' discusses interviewing techniques and then, unconvincingly for me, promotes certification.
The Chapter on 'The Test Manager' covers leadership of a test team in a very practical way.
I expected Test Process Improvement to be presented exclusively using CMM, TPI, TMM and ISO 9000, but fortunately the chapter starts with a general model of improvement before moving on to cover those topics. I hope the reader focuses their improvement effort on the first half of the chapter rather than the second, and instead uses the second half as a general overview of 'what could we do'.
For me, this book works best when explaining how you can approach the documentation of testing in a very formal, structured testing environment. The actual practice of testing gets a very high-level treatment, and the reader will need to look elsewhere for that information. I would not recommend this as an "if you only read one testing book" book, as I don't feel it gives you enough information to fully explore the contextual situation you will find yourself in. Had I had this book as a junior tester, in very formal environments, documenting and tracking my testing in the way this book describes, I would have managed to shortcut my learning.
So it isn't a book for everyone. But even if you don't work in the type of 'formal' environment described above, you may find it a useful book for seeing how the 'other half' lives.