Archive for the ‘requirements development’ Category

The mizuiro effect

November 20, 2012
Each language and each model has its strengths and limitations. A language can sensitize you to certain types of issues, but at the same time it may leave you with a blind spot for other types of issues. I call that the Mizuiro effect. A business analyst should be aware of the strengths and limitations of each language and each model (s)he uses. By applying at least two complementary languages or models, the business analyst can reduce the risk of omissions.

The linguistic relativity principle
In 1940 Benjamin Lee Whorf introduced the “linguistic relativity principle”:
“users of markedly different grammars are pointed by their grammars toward different types of observations and different evaluations of externally similar acts of observation, and hence are not equivalent as observers but must arrive at somewhat different views of the world”.

At first many people were sceptical about this principle, but nowadays there is considerable scientific evidence that grammar influences cognition to some extent. One example is the paper by Athanasopoulos et al.: “Representation of colour concepts in bilingual cognition: The case of Japanese blues”.

Japanese divides the blue region of colour space into a darker shade called ‘ao’ and a lighter shade called ‘mizuiro’. English does not have two distinct words (just ‘blue’, which can be modified to ‘dark blue’ or ‘light blue’). The paper shows that Japanese bilinguals who used English more frequently distinguished blue and light blue less well than those who used Japanese more frequently. The authors conclude that linguistic categories affect the way speakers of different languages evaluate objectively similar perceptual constructs.

The eskimo-words-for-snow claim
When I first read this, it reminded me of the “Eskimo words for snow” claim: the (apparently not entirely correct) claim that Eskimos have an unusually large number of words for snow. Even though that particular claim may not hold up, recent research like “The case of Japanese blues” does show that language affects our perception (and possibly vice versa), at least to some extent. It seems each language has its strengths and weaknesses. My guess is that the Eskimo-Aleut languages are strong at specifying different snowy conditions, but weak at distinguishing varieties of tropical hardwood trees.

Strengths and limitations of language
The strengths and limitations of language also impact my work as a business analyst, in many different ways. For example:
  • Natural language is inherently ambiguous.
  • Subject matter experts often have their own specialized vocabulary.

Models and many requirements specification techniques are languages of a sort. I see them as highly specialized languages designed for a particular purpose. Being specialized exaggerates the Mizuiro effect: a specialized language is great for analyzing or specifying the kind of issues it was designed for, but often hopelessly inadequate for other issues. Take use cases, for example: they are great for identifying & specifying tasks to be performed by the system, but not so good for describing concepts and the relationships between concepts.


Complementary languages
If you are aware of the strengths and limitations of the languages, models and techniques you use (let's just call them languages for simplicity), then you can apply those languages effectively. In most cases you will have to use different languages, and those languages must complement each other: the strengths of one language make up for the limitations of the other. In that context, Stephen Ferg's analogy with chocolate is quite entertaining.
This is true regardless of the development approach being used: waterfall, agile or any other approach shouldn't rely on a single language. (Yes, dear Scrum practitioners, this applies to you too. Relying on user stories to the exclusion of all else is risky. Why not throw in a data dictionary or the odd decision table?)
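To make that last suggestion a little more concrete, here is a minimal sketch of a decision table expressed in Python, for a hypothetical shipping-cost rule. The rule, the thresholds and the field names are invented for the example and not taken from any real project.

  # A hypothetical decision table for shipping costs, expressed as data.
  # Each rule lists its conditions and the resulting outcome; reading the table
  # makes it easy to check that every combination is covered - something a
  # user story on its own rarely shows.
  RULES = [
      {"min_total": 50, "member": True,  "shipping": 0.00},
      {"min_total": 50, "member": False, "shipping": 2.50},
      {"min_total": 0,  "member": True,  "shipping": 2.50},
      {"min_total": 0,  "member": False, "shipping": 5.00},
  ]

  def shipping_cost(order_total, member):
      """Return the shipping cost of the first rule that matches."""
      for rule in RULES:
          if order_total >= rule["min_total"] and member == rule["member"]:
              return rule["shipping"]
      raise ValueError("No rule matches - the decision table is incomplete")

  print(shipping_cost(60.0, member=False))  # prints 2.5

The point is not the code itself, but that the tabular form invites questions ("what if the total is exactly 50?") that a narrative user story tends to hide.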

Further reading
The influence of the Mizuiro effect on business analysis & requirements specification was recognized a long time ago, and there are many approaches that provide guidance on how to deal with it. A relatively old and very extensive example is the Zachman framework. My personal favourites on this topic are:

Ian Alexander. Ian’s book ‘Discovering Requirements‘ (with Ljerka Beus-Dukic) is based around a matrix consisting of requirements elements (stakeholders, goals, context, scenarios, qualities and constraints, rationale, definitions, measurements, priorities) and discovery contexts (from individuals, from groups, from prototypes, from archeology, from standards & templates, from trade-offs).

Soren Lauesen. Soren's book “Software Requirements – Styles and Techniques” groups techniques into e.g. data requirement styles, functional requirement styles, functional details, interfaces, and quality requirements. He lists the advantages and disadvantages of each technique.

Ellen Gottesdiener. Ellen is my favourite when it comes to this topic. The topic features in all her books, but I particularly recommend her brand new book Discover to Deliver (with Mary Gorman). The book introduces an ‘Options Board’ with 7 product dimensions: user, interface, action, data, control, environment, and quality attribute.

Don’t be blue
We are all affected by the Mizuiro effect, and our requirements models are too. I try to turn it to my advantage by combining multiple complementary languages. How do you deal with the Mizuiro effect?


Too much detail?

October 28, 2012
Have you ever asked yourself (or a colleague) how detailed the requirements should be? It is a question I get asked frequently, and one I ask myself quite often. The correct answer is: “it depends”. In this post I hope to provide some answers that are a bit more actionable than “it depends”. Here are some considerations to help you work out the correct level of detail.
 
What is the next step?
The ‘right’ level of detail largely depends on what the requirements will be used for, i.e. what the next step in the process is. Obvious? Sure, but nevertheless often overlooked. Some examples of requirements uses are:
  • as input for the initial business case & subsequent go/no-go decision;
  • as a basis for COTS software selection;
  • to make a size estimate with function point analysis;
  • as a guideline for yourself when designing an app;
  • to discuss with the product owner before the next sprint;
  • to provide to external parties as part of an EU public online tender;
  • as input to a test strategy workshop.
Looking at the above list, it is clear that they don’t all call for the same level of detail.
 
Who will use this?
In a similar way to “what is the next step?” the right level of detail may vary depending on who is going to use these requirements. How much or how little does that person know about the business domain? How much time is (s)he going to be prepared to spend on reading and understanding your requirements? Does (s)he have a different cultural background, and if so: how does that affect the way they interpret your requirements?
 
When is this needed?
One great thing about Agile is the renewed focus on just-in-time delivery of requirements. Requirements change, so if you write them down long before the solution is needed there is a chance that the requirement has changed before the solution is delivered. The more detailed the requirement, the more likely this is. It is sensible to delay writing the details until they are actually needed.
 
There is a catch to this: when you think you need the details may not be when you actually need them! Details sometimes come with nasty surprises (e.g. unanticipated complexity, significant architectural impact, etc.). Why do you think someone coined the phrase “The devil is in the detail”?
 
Do the stakeholders care?
If you are getting into some nitty-gritty details and wondering whether these are relevant requirements or just arbitrary solution suggestions, congratulate yourself: that is exactly what you should be wondering! It doesn't mean the details aren't needed, it just means you are right to want to check. Try to find out if the stakeholders care about those details. If they do (I mean, if they really do care), then you are probably still specifying requirements rather than solution suggestions.
 
For example, if the stakeholder is requesting a specific colour you may think that is trivial or irrelevant, but perhaps the colour is required to comply with a standard.
 
What is the impact of too little detail?
Too much detail is sometimes the result of fear: fear that you’ll get a ‘solution’ that meets the requirements, but doesn’t meet your needs. This is a valid concern, yet the remedy can be worse than the disease. To overcome the fear you might ask yourself: if I leave out further details in this area of requirements, what could go wrong? Is that bad? Is it worse than spending an extra couple of days writing detailed requirements?
A strong fear of getting the wrong solution could be an indication that you don't trust your supplier. Is that fair on your supplier? If it is, why not select a different supplier, or find a different way of working together?
 
Consider the rationale
There are many more considerations that can help determine the right level of detail for your requirements, but the ones in this post should be a good starting point. I'd like to wrap up with a technique that I sometimes use when reviewing requirements. It doesn't prevent overly detailed requirements from being written, but it can help remove them before the people in the next step of the process have to deal with them.
 
The technique is quite simple: find a suspiciously detailed requirement and ask why that requirement is needed. (As an alternative, if the requirement has a rationale, then use that as the answer to the why-question.) Next, consider what would happen if you replace the requirement with the answer to the why-question. If that is sufficiently detailed, then it is probably better than the original requirement.
 
I won’t go into more detail here, but if this raises any questions please contact me!

Using context to reduce ambiguity

August 22, 2012
Words derive meaning from their context. The meaning provided by the context can reduce ambiguity. For requirements, reducing ambiguity is a good thing, so it pays to keep requirements and their context together in some way.
Compare these 2 sentences:

1. There is a 4 mile traffic jam on the A2 in the direction of Amsterdam.

2. There is a 4 mile traffic jam on the A2 in the direction of Utrecht.

Now take a few minutes to answer the following question:

Which of the sentences is the least ambiguous?

If you are not familiar with Dutch topography and the Dutch road network, you can still make relevant observations. For example:

  • Both sentences fail to specify the direction of the traffic jam unambiguously. An unambiguous specification of a direction requires either a compass bearing (“in a southerly direction”) or a from-to construction (“from Utrecht to Amsterdam”).
Since this applies to both sentences you could be tempted to conclude that they are equally ambiguous. However, anyone who regularly commutes to Amsterdam on the A2 (like myself) and anyone else with sufficient knowledge of the Dutch road network knows that the A2 ends (or starts) at Amsterdam. Thus, the phrase “in the direction of Amsterdam” in sentence 1 can only mean “in a northerly direction”. Given the context, you would conclude that sentence 2 is more ambiguous than sentence 1.

Requirements need context too

In the traffic-jam example we saw that context can help reduce ambiguity. (Not always though, as was the case with the second sentence.) The same is true for requirements: providing context can help reduce ambiguity.

If your requirements are nothing more than a list of “shall-statements”, then you have no context to help reduce potential ambiguity. You are making it much harder for yourself than necessary!

What is the context for requirements?

The business environment is typically the context that is needed for requirements. This could be the department's goals and strategy, its products or services, the types of customers it serves, local rules and regulations, business processes, the personnel and their jobs, skills and cultural backgrounds, the operating environment – any or all of those things could be relevant context. Even requirements themselves can provide context to other requirements.

There are many different ways to make sure the requirements and their context remain connected: clustering related requirements together, maintaining traces between requirements and context elements, visual techniques (rich pictures, context diagrams, object models, virtual windows etc.), using requirements attributes, document layout (such as headers, sections, indentation),  user story templates, a guided tour of the office – just to name a few.

Which technique(s) you use may vary – it depends on stakeholder preferences, available tooling etc. The key thing is that you provide the relevant context in some way or other!

But watch out!

Remember that we are not striving for 0% ambiguity (or 100% unambiguous-ness?). The initial question should not be translated to “Which requirement is the least ambiguous?”, but to “Are these requirements sufficiently unambiguous for the intended purpose, taking into account the knowledge of the parties involved?”.

Requirements for reports

July 31, 2012
Requirements are not all the same. Functionality can be captured with e.g. user stories or use cases, but these techniques are less suited to some other aspects such as quality and constraints. One area that seems to get little attention is reports. How do you collect report requirements? And what is a good way of specifying them concisely and clearly?

How do I start collecting requirements for a report?
Stakeholders may want lots of different reports, and it seems new report types are needed on demand. Before you know it, you'll be spending huge amounts of time writing requirements for those reports. That is not necessary: determine which reports are the most important ones and do those first. After you've done those, re-evaluate: how many more reports must be specified, if any?

To figure out which reports are most important, apply DAD. DAD stands for Decision-Actor-Data and is a simple reminder of the logical order in which to collect relevant information for report requirements.

  1. Decision: The key to good report requirements is to start by figuring out which decision or action the report supports. What needs to happen based on the report? “To be informed” is not a valid answer. Either something tangible or practical happens because of the report, or you don’t need it.
  2. Actor: The next element to pursue is the actor. Who is going to make that decision or take that action? In practice, the decision and the actor are closely related, so they are elicited at almost the same time. Often an actor will be a stakeholder in the project and say ‘I need this information in this format!’ Your job is to focus them on the decision, and then ask who else might make similar or related decisions. You can find other decision-actor pairings by looking at the processes that are triggered by the report, or the processes for which the report is an outcome.
  3. Data: At first you should ignore most data elements and all presentation and formatting. Only after you have determined the decision-actor pairings should you start to figure out the categories of data that the actors need for their decision-making.  Usually it is sufficient to list the categories of data (e.g. ‘customer address data’, ‘monthly invoice totals’ etc.) and leave the details to a later stage (e.g. during design).
Which details must I capture?
When determining what is needed for a particular report, I typically use the following questions:
  1. When (e.g. ‘after process step X’) or how often (e.g. monthly) is the report needed?
  2. Is the information in the report sorted? If so, by what? Ascending or descending?
  3. Is the information in the report filtered or selected? If so, how? Can the user choose the selection?
  4. Does the report have a header? If so, which information should be displayed there?
  5. Does the report have a footer? If so, which information should be displayed there?
  6. Is the information in the report body grouped or aggregated? If so, how? Are any calculations needed?
  7. How much data (e.g. pages, records, rows) does the report typically contain? How quickly must it be produced: within seconds, minutes or hours?
  8. What is the confidentiality level of the data?
Remember, you don't have to get this information for all of the reports. Do the most important ones, and if you've noticed a few unusual actor-decision pairs you may want to spend a bit of time investigating those reports too. These unusual reports may lead you to different categories of data that the system must process. For the other reports, you can probably get away with a generic statement like “The system shall provide 20 simple reports similar to report A and 5 complex reports similar to report B.” You can then work out these reports at a later date. Another option is to consider a ‘report generator’ functionality.
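If you work out several reports in this way, it can help to capture the answers to the questions above in one lightweight, structured record per report, so that every report is described in the same terms. Below is a minimal sketch in Python; the field names simply mirror the DAD elements and the checklist, and are my own rather than a prescribed template.

  from dataclasses import dataclass
  from typing import List, Optional

  @dataclass
  class ReportRequirement:
      # DAD elements
      decision: str                         # decision or action the report supports
      actors: List[str]                     # who makes that decision or takes that action
      data_categories: List[str]            # categories of data; details left until design
      # Details from the checklist above
      when_or_frequency: str                # e.g. "after process step X" or "monthly"
      sorted_by: Optional[str] = None
      filtered_by: Optional[str] = None
      header: Optional[str] = None
      footer: Optional[str] = None
      grouping: Optional[str] = None
      typical_volume: Optional[str] = None  # e.g. "about 200 rows"
      available_within: Optional[str] = None
      confidentiality: Optional[str] = None

  # A hypothetical instance, loosely based on the outbreak report worked out below.
  outbreak_report = ReportRequirement(
      decision="Check which patients may be contaminated by a specific outbreak",
      actors=["Ward Nurse"],
      data_categories=["wards + dates", "patient names & contact details"],
      when_or_frequency="Only when a suspected outbreak is being investigated",
      sorted_by="date",
      available_within="5 minutes",
  )

Whether you keep such records in a tool, a spreadsheet or a document matters less than keeping the same fields for every report, so gaps stand out.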
How do I write the requirements concisely and clearly?
For those reports that you have worked out in detail, you’ll have collected quite a bit of information. It can be tricky to write it all down in an understandable and coherent way. I’ve seen people try to specify a report with lists of “shall-statements”, and it is not pretty! Of course, there is no 1-size-fits-all solution for this. The presentation of requirements depends largely on the audience: the stakeholders and designers or suppliers. They can have very different preferences. An approach which has worked for me is to combine a tabular format for DAD and virtual windows for the layout.
Here is a partial example of the tabular format for a report requirement:
Requirement Id: 1234
Why (decision or action supported): Check which patients may be contaminated by a specific outbreak, for containment purposes.
Used by (actor): Ward Nurse
When / how often: Infrequent. Only if a suspected outbreak of a dangerous contagious disease is being investigated.
User selection: Patient Id
Which data:
  Section 1: list of wards + dates
  Section 2: for each ward, a list of patient names & contact details
Sorted by:
  Section 1: sorted by date
  Section 2: sorted by date
Grouped by: n/a
Filtered by:
  Section 1: show only data for the selected patient (i.e. the infected patient)
  Section 2: show only data for patients who were in the specified ward at the specified date (i.e. who were in the same ward as the infected patient)
Limits: Maximum ward size is 50 patients.
Available within: Max. 5 minutes
Note that in the above example I’ve left out most attributes of the requirement itself, such as owner, submitter, status and priority.
And below is a partial example of virtual windows to go with it; it shows the header, section 1 and part of section 2.
Conclusion
When collecting requirements for reports, first focus on the decision which the report supports and the actor(s) who use it. Only after you've verified that the report is relevant should you start to collect data requirements for it. A combination of a tabular format with a virtual window may be a good way of presenting the requirements to stakeholders and designers.
What are your tips for collecting and presenting requirements for reports? I’d love to hear!

42 SMART requirements

June 8, 2011

Congratulations. You have painstakingly crafted 42 perfectly formed, unambiguous requirements. They are as SMART (Specific, Measurable, Attainable, Relevant, Time-bound) as can be. A job well done!

Or is it? I think it is highly unlikely to be a good job.

Even leaving aside the discussion you could have about “Relevant” (who decides whether it is relevant? relevant in which context? etc.), there are many questions you could, and should, raise. Here are just a few that spring to mind:

  • Are these all the relevant requirements? Each requirement on its own may be relevant, but how many other relevant requirements are there that you missed? If the answer is “yes, these are all the relevant requirements”, follow up with: how do you know that these are all the relevant requirements?
  • Are these requirements consistent with each other? Each requirement on its own can be fine, but put together they could be nonsensical or contradictory.
  • What is the purpose of these requirements? They could be SMART enough for an initial go/no-go, but useless for the Elbonian software company that needs to develop the solution. For starters, they are not in Elbonian! (or is it Elbonese?)
  • How do these requirements relate to each other? If this is a set of requirements, they must have some kind of relation to each other. They could be at different abstraction levels (parent-child relations, such as: “There’s a house at the top of a tree” – “In the house there is a room” – “In the room there is a chair”), some of them could be related chronologically (“First check that the road is clear” – “Then cross the road”), one requirement could constrain another (“Exterminating may only be done by Daleks”), and so on. These relationships had better be clear, or the developers could interpret them differently (a sketch of how such links might be recorded follows this list).
  • How important and how urgent is each requirement? If we can only finish 30 of them within time and budget, which ones would you rather have? Do your boss and your neighbour’s wife agree?
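Returning to the relationships between requirements: here is a minimal sketch of how such typed links might be recorded so that they can be reviewed explicitly. The relation names ('parent-of', 'precedes', 'constrains') are only illustrative, not a standard taxonomy.

  from dataclasses import dataclass, field
  from typing import List, Tuple

  RELATION_TYPES = {"parent-of", "precedes", "constrains"}  # illustrative only

  @dataclass
  class Requirement:
      req_id: str
      text: str

  @dataclass
  class RequirementSet:
      requirements: List[Requirement] = field(default_factory=list)
      # Each relation is recorded as (from_id, relation_type, to_id).
      relations: List[Tuple[str, str, str]] = field(default_factory=list)

      def relate(self, from_id, relation, to_id):
          if relation not in RELATION_TYPES:
              raise ValueError("Unknown relation type: " + relation)
          self.relations.append((from_id, relation, to_id))

  reqs = RequirementSet()
  reqs.requirements += [
      Requirement("R1", "First check that the road is clear"),
      Requirement("R2", "Then cross the road"),
  ]
  reqs.relate("R1", "precedes", "R2")

However you record them, the point is that the links between requirements deserve the same scrutiny as the individual SMART statements.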

I’ll leave it at this. It’s far more fun to think up your own questions, and I’m sure you’ve got the gist by now: you need at least 43 SMART requirements!

Interviews for eliciting requirements – what is different?

June 21, 2010

During my career (if you can call it a career – maybe I should say “during my working life”) I’ve conducted many an interview. The purpose of some of those interviews was to elicit (collect) requirements. I didn’t think there was much of a difference, until I started teaching other people to hold interviews for requirements elicitation. That there really is a difference became even more noticeable when I got someone to do two different kinds of interviews in a row.

To find out for yourself, try out the following:
Get someone to interview you for 10 minutes with the aim of being able to draw a floor plan of your house (just 1 floor is more than enough in most cases). Observe what happens. Then ask that same person to interview you for 10 minutes with the aim of collecting requirements for your next house. Again, observe.

What differences might you see?
With a moderately skilled interviewer, I'd expect to see a few noticeable differences, such as:

  • More detailed questions for the floor plan than for the requirements.
  • For the floor plan the interviewer is more likely to draw a picture during the interview, and use his or her analysis of the evolving picture to guide the interview.
  • More frequent checks to verify understanding for the floor plan, due to the specific and tangible nature of the subject.
  • For the floor plan the interview will be less fluent, as the interviewer regularly takes time to analyse his/her notes.


Are these differences necessary?

I think not. I would hope that in a good requirements interview the interviewer also uses pictures and simple models for analysis and verification, etc. In other words, the interviewer doing the floor plan interview exhibits desirable techniques which would be equally useful for requirements elicitation and analysis. I guess these differences could be due to the more tangible, specific nature of an existing, well-known house versus the vague ideas of a possible future house.

There are some other differences as well, which I think are necessary:

  • When eliciting requirements, you would be interviewing a stakeholder representing a stakeholder group. It is important that the stakeholder provides answers which reflect the consensus of the group he or she represents; these may sometimes be different to their personal preference.
  • The answers of the stakeholder should be in line with the mandate of the stakeholder group. It may be interesting to know, for example, what the representative of the marketing department has to say on the subject of compliance, but if that subject is the responsibility of the risk & compliance department, you should not rely on the answers from the marketing rep.

What are your experiences with interviews – in particular interviews to elicit and analyze requirements?