The mizuiro effect

November 20, 2012
Each language and each model has its strengths and limitations. A language can sensitize you to certain types of issues, but at the same time it may leave you with a blind spot for other types of issues. I call that the Mizuiro effect. A business analyst should be aware of the strengths and limitations of each language and each model (s)he uses. By applying at least two complementary languages or models, the business analyst can reduce the risk of omissions.

The linguistic relativity principle
In 1940 Benjamin Lee Whorf introduced the “linguistic relativity principle”:
“users of markedly different grammars are pointed by their grammars toward different types of observations and different evaluations of externally similar acts of observation, and hence are not equivalent as observers but must arrive at somewhat different views of the world”.

At first many people were sceptical about this principle. Nowadays there is a lot of scientific evidence to support a certain amount of influence of grammar on cognition. One example is the paper by Athanasopoulos et al.: “Representation of colour concepts in bilingual cognition: The case of Japanese blues”.

Japanese divides the blue region of colour space into a darker shade called ‘ao’ and a lighter shade called ‘mizuiro’. English has just one word, ‘blue’, which can be modified to ‘dark blue’ or ‘light blue’. The paper shows that Japanese bilinguals who used English more frequently distinguished blue and light blue less well than those who used Japanese more frequently. The authors conclude that linguistic categories affect the way speakers of different languages evaluate objectively similar perceptual constructs.

The Eskimo-words-for-snow claim
When I first read this, it reminded me of the “Eskimo words for snow” claim: the (apparently not entirely correct) claim that Eskimos have an unusually large number of words for snow.
Though this particular claim may not hold up entirely, recent research like “The case of Japanese blues” does show that language affects our perception (and possibly vice versa), at least to some extent. It seems each language has its strengths and weaknesses. My guess is that the Eskimo-Aleut languages are strong at specifying different snowy conditions, but weak at distinguishing varieties of tropical hardwood trees.

Strengths and limitations of language
The strengths and limitations of language also impact my work as a business analyst, in many different ways. For example:
  • Natural language is inherently ambiguous.
  • Subject matter experts often have their own specialized vocabulary.

Models and many requirements specification techniques are languages of a sort. I see them as highly specialized languages designed for a particular purpose. Being specialized exaggerates the Mizuiro effect: a specialized language is great for analyzing or specifying the kind of issues that it was designed for, but often hopelessly inadequate for other issues. Take use cases for example: they are great for identifying & specifying tasks to be performed by the system, but not so good for describing concepts and the relationships between concepts.

Complementary languages
If you are aware of the strengths and limitations of the languages, models and techniques you use (let’s just call them languages for simplicity), then you can apply those languages effectively. In most cases you will have to use different languages, and those languages must complement each other: the strengths of one language make up for the limitations of the other. In that context, Stephen Ferg’s analogy with chocolate is quite entertaining.
This is true regardless of the development approach being used: waterfall, agile or any other approach shouldn’t rely on a single language. (Yes, dear Scrum practitioners, this applies to you too. Only using user stories to the exclusion of all else is risky. Why not throw in a data dictionary or the odd decision table?)
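To illustrate the decision-table suggestion: here is a hypothetical example sketched in Python (the shipping-fee domain, conditions and amounts are all made up). Expressing rules as a table of condition combinations makes gaps and contradictions far easier to spot in a review than a pile of user stories or nested if-statements would.

```python
# A hypothetical decision table for a made-up shipping-fee rule.
# Each row maps a combination of conditions to an outcome, so every
# combination is visibly either covered or missing.
DECISION_TABLE = [
    # (order_total >= 50, customer_is_member) -> shipping fee
    ((True,  True),  0.00),
    ((True,  False), 2.50),
    ((False, True),  2.50),
    ((False, False), 5.00),
]

def shipping_fee(order_total: float, is_member: bool) -> float:
    key = (order_total >= 50, is_member)
    for conditions, fee in DECISION_TABLE:
        if conditions == key:
            return fee
    # An incomplete table fails loudly instead of silently picking a default.
    raise ValueError(f"No rule covers {key}")
```

The point is not the Python, of course: the same table works just as well on a whiteboard or in a spreadsheet alongside the user stories.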

Further reading
The influence of the Mizuiro effect on business analysis & requirements specification was recognized a long time ago, and many approaches provide guidance on how to deal with it. A relatively old and very extensive example is the Zachman framework. My personal favourites on this topic are:

Ian Alexander. Ian’s book ‘Discovering Requirements’ (with Ljerka Beus-Dukic) is based around a matrix consisting of requirements elements (stakeholders, goals, context, scenarios, qualities and constraints, rationale, definitions, measurements, priorities) and discovery contexts (from individuals, from groups, from prototypes, from archeology, from standards & templates, from trade-offs).

Soren Lauesen. Soren’s book “Software Requirements – Styles and Techniques” groups techniques into e.g. data requirement styles, functional requirement styles, functional details, interfaces, and quality requirements. He lists the advantages and disadvantages for each technique.

Ellen Gottesdiener. Ellen is my favourite when it comes to this topic. It features in all her books, but I particularly recommend her brand new book Discover to Deliver (with Mary Gorman). The book introduces an ‘Options Board’ with 7 product dimensions: user, interface, action, data, control, environment, and quality attribute.

Don’t be blue
We are all affected by the Mizuiro effect, and our requirements models are too. I try to turn it into my advantage by combining multiple complementary languages. How do you deal with the Mizuiro effect?

Too much detail?

October 28, 2012
Have you ever asked yourself (or a colleague) how detailed the requirements should be? It is a question I get asked frequently, and one I ask myself quite often. The correct answer is: “it depends”. In this post I hope to provide some answers that are a bit more actionable than “it depends”. Here are some considerations to help you work out the correct level of detail.
What is the next step?
The ‘right’ level of detail largely depends on what the requirements will be used for, i.e. what the next step in the process is. Obvious? Sure, but nevertheless often overlooked. Some examples of requirements uses are:
  • as input for the initial business case & subsequent go/no-go decision;
  • as a basis for COTS software selection;
  • to make a size estimate with function point analysis;
  • as a guideline for yourself when designing an app;
  • to discuss with the product owner before the next sprint;
  • to provide to external parties as part of an EU public online tender;
  • as input to a test strategy workshop.
Looking at the above list, it is clear that they don’t all call for the same level of detail.
Who will use this?
In a similar way to “what is the next step?” the right level of detail may vary depending on who is going to use these requirements. How much or how little does that person know about the business domain? How much time is (s)he going to be prepared to spend on reading and understanding your requirements? Does (s)he have a different cultural background, and if so: how does that affect the way they interpret your requirements?
When is this needed?
One great thing about Agile is the renewed focus on just-in-time delivery of requirements. Requirements change, so if you write them down long before the solution is needed there is a chance that the requirement has changed before the solution is delivered. The more detailed the requirement, the more likely this is. It is sensible to delay writing the details until they are actually needed.
There is a catch to this: when you think you need the details may not be when you actually need them! Details sometimes come with nasty surprises (e.g. unanticipated complexity, significant architectural impact, etc.). Why do you think someone coined the phrase “The devil is in the detail”?
Do the stakeholders care?
If you are getting into some nitty gritty details and wondering if these are relevant requirements or just arbitrary solution suggestions, congratulate yourself: you should be! That doesn’t mean they aren’t needed, it just means you are right to want to check. Try to find out if the stakeholders care about those details. If they do (I mean, if they really do care), then you are probably still specifying requirements rather than solution suggestions.
For example, if the stakeholder is requesting a specific colour you may think that is trivial or irrelevant, but perhaps the colour is required to comply with a standard.
What is the impact of too little detail?
Too much detail is sometimes the result of fear: fear that you’ll get a ‘solution’ that meets the requirements, but doesn’t meet your needs. This is a valid concern, yet the remedy can be worse than the disease. To overcome the fear you might ask yourself: if I leave out further details in this area of requirements, what could go wrong? Is that bad? Is it worse than spending an extra couple of days writing detailed requirements?
A large fear of getting the wrong solution could be an indication that you don’t trust your supplier. Is that fair on your supplier? If yes, why not select a different supplier, or a different way of working together?
Consider the rationale
There are many more considerations that can help determine the right level of detail for your requirements, but the ones in this post should be a good starting point. I’d like to wrap up with a technique that I sometimes use when reviewing requirements. It doesn’t help avoid too detailed requirements being written, but it can help remove them before the people in the next step of the process have to deal with them.
The technique is quite simple: find a suspiciously detailed requirement and ask why that requirement is needed. (As an alternative, if the requirement has a rationale, then use that as the answer to the why-question.) Next, consider what would happen if you replace the requirement with the answer to the why-question. If that is sufficiently detailed, then it is probably better than the original requirement.
I won’t go into more detail here, but if this raises any questions please contact me!

Using context to reduce ambiguity

August 22, 2012
Words derive meaning from their context. The meaning provided by the context can reduce ambiguity. For requirements, reducing ambiguity is a good thing, so it pays to keep requirements and their context together in some way.
Compare these 2 sentences:

1. There is a 4 mile traffic jam on the A2 in the direction of Amsterdam.

2. There is a 4 mile traffic jam on the A2 in the direction of Utrecht.

Now take a few minutes to answer the following question:

Which of the sentences is the least ambiguous?

If you are not familiar with Dutch topography and the Dutch road network, you can still make relevant observations. For example:

  • Both sentences fail to specify the direction of the traffic jam unambiguously. An unambiguous specification of a direction requires either a compass bearing (“in southerly direction”) or a from – to construction (“from Utrecht to Amsterdam”). 
Since this applies to both sentences you could be tempted to conclude that they are equally ambiguous. However, anyone who regularly commutes to Amsterdam on the A2 (like myself) and anyone else with sufficient knowledge of the Dutch road network knows that the A2 ends (or starts) at Amsterdam. Thus, the phrase “in the direction of Amsterdam” in sentence 1 can only mean “in a northerly direction”. Given the context, you would conclude that sentence 2 is more ambiguous than sentence 1.

Requirements need context too

In the traffic-jam example we saw that context can help reduce ambiguity. (Not always though, as was the case with the second sentence.) The same is true for requirements: providing context can help reduce ambiguity.

If your requirements are nothing more than a list of “shall-statements”, then you have no context to help reduce potential ambiguity. You are making it much harder for yourself than necessary!

What is the context for requirements?

The business environment is typically the context that is needed for requirements. This could be the department’s goals and strategy, their products or services, the types of customers they serve, local rules and regulations, business processes, the personnel and their jobs, skills and cultural background, the operating environment – any or all of those things could be relevant context. Even requirements themselves can provide context to other requirements.

There are many different ways to make sure the requirements and their context remain connected: clustering related requirements together, maintaining traces between requirements and context elements, visual techniques (rich pictures, context diagrams, object models, virtual windows etc.), using requirements attributes, document layout (such as headers, sections, indentation),  user story templates, a guided tour of the office – just to name a few.

Which technique(s) you use may vary – it depends on stakeholder preferences, available tooling etc. The key thing is that you provide the relevant context in some way or other!

But watch out!

Remember that we are not striving for 0% ambiguity (or 100% unambiguous-ness?). The initial question should not be translated to “Which requirement is the least ambiguous?”, but to “Are these requirements sufficiently unambiguous for the intended purpose, taking into account the knowledge of the involved parties?”.

Requirements for reports

July 31, 2012
Requirements are not all the same. Functionality can be captured with e.g. user stories or use cases, but these techniques are less suited to some other aspects such as quality and constraints. One area that seems to get little attention is reports. How do you collect report requirements? And what is a good way of specifying them concisely and clearly?

How do I start collecting requirements for a report?
Stakeholders may want lots of different reports, and it seems new report types are needed on demand. Before you know it, you’ll be spending huge amounts of time writing requirements for those reports. That is not necessary: determine which reports are the most important ones and do those first. After you’ve done those, re-evaluate: how many more reports must be specified, if any?

To figure out which reports are most important, apply DAD. DAD stands for Decision-Actor-Data and is a simple reminder of the logical order in which to collect relevant information for report requirements.

  1. Decision: The key to good report requirements is to start by figuring out which decision or action the report supports. What needs to happen based on the report? “To be informed” is not a valid answer. Either something tangible or practical happens because of the report, or you don’t need it.
  2. Actor: The next element to pursue is the actor. Who is going to make that decision or take that action? In practice, the decision and the actor are closely related, so they are elicited at almost the same time. Often an actor will be a stakeholder in the project and say ‘I need this information in this format!’ Your job is to focus them on the decision, and then ask who else might make similar or related decisions. You can find other decision-actor pairings by looking at the processes that are triggered by the report, or the processes for which the report is an outcome.
  3. Data: At first you should ignore most data elements and all presentation and formatting. Only after you have determined the decision-actor pairings should you start to figure out the categories of data that the actors need for their decision-making.  Usually it is sufficient to list the categories of data (e.g. ‘customer address data’, ‘monthly invoice totals’ etc.) and leave the details to a later stage (e.g. during design).
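The DAD ordering above can be sketched as a small data structure — a minimal illustration in Python (the class and field names are my own invention, not part of any standard). The point of the sketch is that the data categories are deliberately the last field to be filled in, after the decision and actor are known.

```python
from dataclasses import dataclass, field

# A minimal sketch of the Decision-Actor-Data order for report
# requirements. Decision and actor are mandatory up front; the data
# categories start empty and are filled in only once the report's
# purpose has been verified.
@dataclass
class ReportRequirement:
    decision: str                                        # which decision or action the report supports
    actor: str                                           # who makes that decision or takes that action
    data_categories: list = field(default_factory=list)  # coarse categories only; details come later

# Example based on the outbreak report discussed later in this post.
outbreak_report = ReportRequirement(
    decision="Check which patients may be contaminated by an outbreak",
    actor="Ward Nurse",
)
# Only now do we list the categories of data, leaving details to design.
outbreak_report.data_categories += ["ward occupancy by date", "patient contact details"]
```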
Which details must I capture?
When determining what is needed for a particular report, I typically use the following questions:
  1. When (e.g. ‘after process step X’) or how often (e.g. monthly) is the report needed?
  2. Is the information in the report sorted? If so, by what? Ascending or descending?
  3. Is the information in the report filtered or selected? If so, how? Can the user choose the selection?
  4. Does the report have a header? If so, which information should be displayed there?
  5. Does the report have a footer? If so, which information should be displayed there?
  6. Is the information in the report body grouped or aggregated? If so, how? Are any calculations needed?
  7. How much data (e.g. pages, records, rows) does the report typically contain? How quickly must it be produced: within seconds, minutes or hours?
  8. What is the confidentiality level of the data?
Remember, you don’t have to get this information for all of the reports. Do the most important ones, and if you’ve noticed a few unusual actor-decision pairs you may want to spend a bit of time investigating those reports too. These unusual reports may lead you to different categories of data that the system must process. For the other reports, you can probably get away with a generic statement like “The system shall provide 20 simple reports similar to report A and 5 complex reports similar to report B.” You can then work out these reports at a later date. Another option is to consider ‘report generator’ functionality.
How do I write the requirements concisely and clearly?
For those reports that you have worked out in detail, you’ll have collected quite a bit of information. It can be tricky to write it all down in an understandable and coherent way. I’ve seen people try to specify a report with lists of “shall-statements”, and it is not pretty! Of course, there is no 1-size-fits-all solution for this. The presentation of requirements depends largely on the audience: the stakeholders and designers or suppliers. They can have very different preferences. An approach which has worked for me is to combine a tabular format for DAD and virtual windows for the layout.
Here is a partial example of the tabular format for a report requirement:
Requirement Id:      1234
Why (decision or action supported):  Check which patients may be contaminated by a specific outbreak, for containment purposes.
Used by (actor):     Ward Nurse
When / how often:    Infrequent. Only if a suspected outbreak of a dangerous contagious disease is being investigated.
User selection:      Patient Id
Which data:          Section 1: List of wards + dates. Section 2: For each ward: list of patient names & contact details.
Sorted by:           Section 1 sorted by date; Section 2 sorted by date.
Grouped by:          n/a
Filtered by:         Section 1: Show only data for the selected patient (i.e. the infected patient). Section 2: Show only data for patients who were in the specified ward at the specified date (i.e. who were in the same ward as the infected patient).
Limits:              Maximum ward size is 50 patients.
Available within:    Max. 5 minutes
Note that in the above example I’ve left out most attributes of the requirement itself, such as owner, submitter, status and priority.
And below is a partial example of virtual windows to go with it; it shows the header, section 1 and part of section 2.
When collecting requirements for reports, first focus on the decision which the report supports, and the actor(s) that make it. Only after you’ve verified that the report is relevant should you start to collect data requirements for the report. A combination of a tabular format with a virtual window may be a good way of presenting the requirements to stakeholders and designers.
What are your tips for collecting and presenting requirements for reports? I’d love to hear!

Differentiation requirements

August 26, 2011

Requirements play a significant part in selecting a COTS (Commercial Off The Shelf) package. The requirements process for COTS selection is not the same as the requirements process for bespoke development. For COTS selection there is more emphasis on differentiation: identifying how your organization differs from other organizations, and identifying how the candidate COTS packages differ from each other.

If you want to select the most suitable COTS package for your business it is not enough to play golf with each of the sales representatives. While a round of golf may provide many useful insights, it is prudent to check the COTS package against a list of selection criteria. “Selection criteria” is just another phrase for “requirements”. The main difference is that selection criteria are often formulated as questions, whereas requirements are formulated as statements.

When running a project to select a suitable COTS package, one key concern is determining the right requirements (a.k.a. selection criteria). If you apply a typical requirements process, you may end up with too many requirements. Collecting large numbers of requirements that none of the available COTS packages support is inefficient and can undermine stakeholder acceptance. Collecting requirements that all the available COTS packages support may seem good but does not help achieve the project goal: to select a package.

Conversely, some key requirements for COTS selection are often ignored in the starting stages of a project. A prime example is interoperability requirements: which existing systems (hardware, software, processes) will the selected package have to interface with? To determine the level of suitability in this area you quite likely have to delve into details such as interface specifications, protocols, or hardware platforms. Put bluntly: your IT legacy could be one of the differentiators.

To avoid these pitfalls, the requirements manager must tailor the requirements process to focus on collecting the key requirements for package selection. Assuming your list of candidate COTS packages contains relevant candidates, you don’t have to spend much time or effort verifying basic functionality: they wouldn’t be on the list if they didn’t offer the basics. Focus instead on what makes your organization different from other organizations using such packages. Look at what the packages have to offer, so you don’t spend too much time talking about fancy features you’ll never get. And investigate the differences between the packages to determine which of those differences matter to you.

This focus on differentiating requirements is not just an issue for the requirements manager and his team. It is crucial that all stakeholders are aware of this focus, to ensure their contribution is focused too. You should still expect to collect some off-topic requirements, but significantly less than without this explicit focus. And it helps greatly with expectation management.

In summary: requirements for COTS package selection must focus on differentiation. What makes your business different? Which differences between candidate packages does your business care about? This does not mean you don’t need any other requirements, but you probably need fewer of them and they can be less detailed.

42 SMART requirements

June 8, 2011

Congratulations. You have painstakingly crafted 42 perfectly formed, unambiguous requirements. They are as SMART (Specific, Measurable, Attainable, Relevant, Time-bound) as can be. A job well done!

Or is it? I think it is highly unlikely to be a good job.

Even leaving aside the discussion you can have about “Relevant” (such as: who decides if it is relevant; relevant in which context. etc.) there are many questions you could, and should, raise. Here are just a few that spring to mind:

  • Are these all the relevant requirements? Each requirement on its own may be relevant, but how many other relevant requirements are there that you missed? If the answer is “yes, these are all the relevant requirements”, follow up with: how do you know that these are all the relevant requirements?
  • Are these requirements consistent with each other? Each requirement on its own can be fine, but put together they could be nonsensical or contradictory.
  • What is the purpose of these requirements? They could be SMART enough for an initial go/no-go, but useless for the Elbonian software company that needs to develop the solution. For starters, they are not in Elbonian! (or is it Elbonese?)
  • How do these requirements relate to each other? If this is a set of requirements, they must have some kind of relation to each other. They could be at different abstraction levels (parent-child relations, such as: “There’s a house at the top of a tree” – “In the house there is a room” – “In the room there is a chair”.), some of them could be related chronologically (“First check that the road is clear” – “Then cross the road”), one requirement could constrain another (“Exterminating may only be done by Daleks”), and so on. These relationships had better be clear, or the developers could interpret them differently.
  • How important and how urgent is each requirement? If we can only finish 30 of them within time and budget, which ones would you rather have? Do your boss and your neighbour’s wife agree?

I’ll leave it at this. It’s far more fun to think up your own questions, and I’m sure you’ve got the gist by now: you need at least 43 SMART requirements!

Interviews for eliciting requirements – what is different?

June 21, 2010

During my career (if you can call it a career – maybe I should say “during my working life”) I’ve conducted many an interview. The purpose of some of those interviews was to elicit (collect) requirements. I didn’t think there was much of a difference, until I started teaching other people to hold interviews for requirements elicitation. That there really is a difference became even more noticeable when I got someone to do two different kinds of interviews in a row.

To find out for yourself, try out the following:
Get someone to interview you for 10 minutes with the aim of being able to draw a floor plan of your house (just 1 floor is more than enough in most cases). Observe what happens. Then ask that same person to interview you for 10 minutes with the aim of collecting requirements for your next house. Again, observe.

What differences might you see?
With a moderately skilled interviewer, I’d expect to see a few noticeable differences such as:

  • More detailed questions for the floor plan than for the requirements.
  • For the floor plan the interviewer is more likely to draw a picture during the interview, and use his or her analysis of the evolving picture to guide the interview.
  • More frequent checks to verify understanding for the floor plan, due to the specific and tangible nature of the subject.
  • For the floor plan the interview will be less fluent, as the interviewer regularly takes time to analyse his/her notes.

Are these differences necessary?

I think not. I would hope that in a good requirements interview the interviewer also uses pictures and simple models for analysis and verification, etc. In other words, the interviewer doing the floor plan interview exhibits desirable techniques which would be equally useful for requirements elicitation and analysis. I guess these differences could be due to the more tangible, specific nature of an existing, well-known house versus vague ideas of a possible future house.

There are some other differences as well, which I think are necessary:

  • When eliciting requirements, you would be interviewing a stakeholder representing a stakeholder group. It is important that the stakeholder provides answers which reflect the consensus of the group he or she represents; these may sometimes be different to their personal preference.
  • The answers of the stakeholder should be in line with the mandate of the stakeholder group. While it may be interesting to know what the representative of the marketing department has to say on the subject of compliance, for example, if this subject is the responsibility of the risk & compliance department you should not rely on the answers from the marketing rep.

What are your experiences with interviews – in particular interviews to elicit and analyze requirements?

Volatile measures

March 11, 2010

Many clients working towards CMMI maturity level 2 have to deal with measurement & analysis as well as requirements management. In fact they are expected to measure their requirements process. They often resort to the good old standby of measuring “requirements volatility”.  Until I ask them why…

You may have a good reason to measure requirements volatility. If you do, please write it down, because it should be fundamental to your measurement & analysis process. At maturity levels 1 and 2, most companies don’t have a good understanding of their processes. I would therefore expect most measurement indicators to focus on gaining an understanding of the process: indicators that answer questions such as “what factors have a significant impact on the effort or quality of my requirements process?”. Answering this requires trial and error.

You may have a hunch that, say, the quality of the coffee has a significant influence on the effort required to develop requirements. In that case, you must develop measures to put this assumption to the test. Whatever the results, they will be valuable because you will learn something. You will either learn that the quality of the coffee does indeed have a significant impact – in which case you can move on to controlling, and then improving, the quality of the coffee. Or you will learn that the quality of the coffee does not have a significant impact, in which case you must develop a different theory and put that to the test. (Note: you may still want to keep the quality of the coffee at an acceptable level – it may have an impact on other processes…)

So, how does requirements volatility fit into this understand-control-improve scheme? Presumably, at maturity level 1 or 2, it is based on an assumption that requirements volatility is significant in some way. The tale I’m often told is that high requirements volatility indicates that the requirements are not ready for the next stage of the development process. To me that means the requirements are not stable enough to create a baseline. Unfortunately, most definitions for measuring requirements volatility are set up to measure changes to requirements after a baseline has been created. Either this is based on a very different definition of ‘baseline’ (more like my definition of ‘snapshot’), or the indicator cannot be used during the crucial early stages.

Sure, requirements volatility can be of use in later stages of a project. However, when used only in the later stages, what does it tell us about the requirements process? Well, some say, high requirements volatility shows that the quality of the baselined requirements was insufficient. This is an assumption, and I would hope that the first step is to determine whether it is a valid assumption. So I would prefer to start collecting measures that can clarify whether the assumption is valid, before leaping off and taking action on that assumption.

In summary, while requirements volatility may be a useful indicator in some organizations it should not be the first one to adopt. Also, initial indicators could be short-lived (volatile, even): as the organization finds out which assumptions hold and which don’t, they move on to the next set of indicators.

Who needs acceptance criteria?

July 9, 2009

I meet a lot of testers (collective noun for test managers, test coordinators, test engineers, and test process improvers), so it’s inevitable that I regularly end up in a discussion about requirements versus acceptance criteria. A long time ago I held the view that you didn’t need acceptance criteria. The reasoning behind this was something like:
– You should write clear requirements (many people talk about SMART requirements).
– If your requirements are clear, you don’t need acceptance criteria.
– If your requirements are not clear, you haven’t done a good job of writing requirements.

Many people pointed out that it is hard to write clear requirements, and many testers therefore suggested that you always need acceptance criteria. At the very least you could use them as an intermediate step to get from vague requirements to clear requirements. While I accept that acceptance criteria can be a helpful aid in improving the quality of requirements, I do not think they are always needed. Particularly for function requirements (or “functional requirements” if you insist), a good requirements process, requirements structure, requirements checklist and the necessary skills should suffice to produce clear requirements (I prefer not to use SMART, but that is perhaps a topic for a different post).

I am indebted to some of my requirements colleagues for helping me understand that there is a need for acceptance criteria in relation to certain types of requirements. I have not yet got a good definition of which types of requirements – let’s say quality requirements for now, though I think acceptance criteria are really needed for only a subset of quality requirements. Let me explain with an example.

One quality aspect which is important to many IT systems is availability. This is often expressed as “up-time”: the percentage of time in which the system is “up”, i.e. available. A requirement such as “Availability of 99.8%.” is not precise enough on its own. You must be clear on what must be available – this can be done by tracing to the functions to which this applies. In addition, you must specify to whom it must be available. Also important is defining what the 100% is, so that you know what 99.8% means. That is, you must specify whether, for example, “scheduled down-time” is included or not, and you must identify the “sample frequency”, i.e. how often you measure and check. All this can be done, and you would then have a clear requirement. Congratulations!
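To make the definitional choices concrete, here is a minimal sketch of how such an availability measure could be computed. The function name and the choice to exclude scheduled down-time from the 100% baseline are illustrative assumptions, not part of the requirement as stated above.

```python
def availability(period_minutes, downtime_minutes, scheduled_minutes=0):
    """Availability as a percentage over a measurement period.

    Scheduled down-time is excluded from the 100% baseline, which is
    one possible answer to the 'what is the 100%?' question.
    """
    baseline = period_minutes - scheduled_minutes
    up = baseline - downtime_minutes
    return 100.0 * up / baseline

# A 30-day month with 4 hours of scheduled maintenance and
# 80 minutes of unplanned down-time:
month = 30 * 24 * 60  # 43200 minutes
print(round(availability(month, 80, scheduled_minutes=4 * 60), 2))  # 99.81
```

With scheduled down-time excluded, this month meets the 99.8% target; had scheduled down-time been counted as down, the same month would fail it – which is exactly why the definition must be pinned down.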

But wait! Even though the requirement may be clear, there is still a problem. To validate this requirement you may have to test the system for a very long time. This depends on the definitions, but usually availability is related to time periods of months or years, and thorough validation would require testing for multiples of this time period. Rarely is the customer prepared to wait that long, and this is where acceptance criteria come into play. An acceptance criterion allows you to specify less stringent conditions for acceptance without changing the requirement itself. So the customer could agree to accept the system if, during 2 weeks of availability testing, the system has an availability of 99.99%, for example.
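A quick back-of-the-envelope check shows what such a criterion actually demands. This sketch assumes no scheduled down-time within the test window:

```python
def allowed_downtime_minutes(period_minutes, availability_pct):
    """Maximum down-time (in minutes) that still meets the target."""
    return period_minutes * (100.0 - availability_pct) / 100.0

two_weeks = 14 * 24 * 60  # 20160 minutes
print(allowed_downtime_minutes(two_weeks, 99.99))  # 2.016
```

In other words, the 99.99% criterion permits barely two minutes of down-time across the entire two-week test: a much stricter rate than the 99.8% requirement, traded against a far shorter observation period.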

To me that is the essence of acceptance criteria: they allow you to specify alternate requirements (less or more stringent) for the sole purpose of accepting the delivered system. The “proper” requirements remain valid when the system is in operation.

As you can probably tell, my thinking on acceptance criteria is still developing. I would really appreciate any thoughts, additions and challenges you have on this topic.

The requirements island

June 24, 2009

What is a requirement?
Hang on; I don’t want to start another war on what the best definition of “requirement” is. We have too many of those already.
I just want to reflect on something that has been bugging me the last few months (sorry). That is: why do so many people talk about requirements as if they are something completely separate?

“A requirement must be solution independent.” or “That’s architecture, not requirements”, and also “Those are goals, we don’t need to capture them.”

To me requirements are not an island, to be kept well away from the dangerous influences of other landmasses. Please, no! Requirements are just one kind of information in a universe full of related and relevant other kinds of information: goals, stakeholders, architectural principles, designs, test cases, business rules, processes, object models… the list is endless.

Requirements on their own are meaningless. Documents with only a long list of requirements are useless and should be forbidden! Requirements derive meaning from their context, the information surrounding them. Relating a set of requirements to a process step helps us understand the requirements, and helps us check whether the set is correct and complete.

Mind you, that doesn’t mean that the requirements discipline should take on the modelling of all this information so they can relate it to requirements. Preferably not – each to their own. But in the requirements discipline there is nothing splendid about “splendid isolation”. I’m all for the Bazaar approach to requirements, not the meticulously crafted requirements Cathedral (see Eric Raymond’s “The Cathedral and the Bazaar“).