Handling non-functional requirements in User Stories can seem difficult at first, but as it turns out, there's a pretty easy way to do it.
For performance requirements and many other non-functional requirements (NFRs), one can use constraints and stories. What I usually coach is to create a story to document the NFR and define story tests for it. Then, I suggest adding the story tests as a "constraint." A constraint is something that all implemented stories (features and functionality) must comply with. If you're using Scrum, then you'll want to add something like this to your Definition of Done (DoD): "All stories must comply with all of the story constraints."
Step 1: Identify and quantify the constraint and put it in terms that your users and business stakeholders will understand.
Story Title: System response time
- Story Test #1: Test that the system responds to all non-search requests within 1 second of receiving the request
- Story Test #2: Test that the system responds to all search requests within 10 seconds of receiving the request
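These story tests will eventually need automated checks. Here's a minimal sketch of what they might look like as assert-based tests; the handler functions and paths are hypothetical stand-ins for your real system, not part of the original article:

```python
import time

# Hypothetical stand-ins for the real system's request handlers;
# replace these with calls into your actual application or API.
def handle_request(path):
    time.sleep(0.01)  # stub: simulate a fast non-search request
    return "ok"

def handle_search(query):
    time.sleep(0.05)  # stub: simulate a slower search
    return ["result"]

NON_SEARCH_LIMIT = 1.0   # seconds (Story Test #1)
SEARCH_LIMIT = 10.0      # seconds (Story Test #2)

def elapsed(fn, *args):
    """Return how long a single call to fn takes, in seconds."""
    start = time.perf_counter()
    fn(*args)
    return time.perf_counter() - start

def test_non_search_response_time():
    assert elapsed(handle_request, "/account") < NON_SEARCH_LIMIT

def test_search_response_time():
    assert elapsed(handle_search, "overdue invoices") < SEARCH_LIMIT
```

Note that the test names the requirement (a response-time limit), not how the system achieves it — consistent with keeping "The How" out of the story.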
Some things to keep in mind:
- If you cannot quantify the story in concrete terms, treat this as a bad smell that usually indicates a requirement too vague to be implemented. Vague NFRs have the same problems that vague functional requirements do: it is hard to answer the question "How will I know when this story is correctly done?"
- Be sure not to specify a technical solution or implementation in the story, because stories are about "The What" ("What the user wants"), not "The How" ("How this is implemented").
- Plan, estimate, split (if necessary), and implement this story like all other user stories, as part of the Product Backlog (Scrum).
Once this story is complete, the entire system will be in compliance with this particular constraint.
If your constraint is not system-wide or far reaching:
- Just add it as a story test for that story. But again, specify the requirement, not the implementation, in terms the business stakeholders will understand.
The decision to create a constraint rests on whether it should be taken into account in several future stories (or system-wide). If it will apply to several future stories, create a constraint. If it won't, just add the NFR as a story test to the stories it applies to, or create a separate story to comply with the NFR in the small part of the system that requires it.
Step 2: Add the Story Tests to your list of constraints (and to your Definition of Done if you’re doing Scrum)
Publish your list of constraints (and/or DoD) somewhere highly visible. Even if you keep your constraints electronically, print them in large type and post them on your Scrum board or in your team area. For example:
- Test that the system responds to all non-search requests within 1 second of receiving the request.
- Test that the system responds to all search requests within 10 seconds of receiving the request.
- Test that the system logs a user out after 10 seconds of inactivity and redirects their browser to the home page.
- Test that any update to a person’s payment information (anywhere in the system) is logged to the payment_preferences log, along with the following information:
- IP Address of logged in person
- Old preference value, new preference value
- Date/time of change
- Test that any time a person’s credit card number is shown in the application, only the last 4 digits are displayed.
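The last constraint above is the kind of thing that's easy to automate. Here's a minimal sketch of such a story test; the `mask_card_number` helper is a hypothetical stand-in for wherever your application renders card numbers:

```python
# Hypothetical helper standing in for the application code that
# renders a credit card number; the name and format are assumptions.
def mask_card_number(number: str) -> str:
    digits = number.replace(" ", "").replace("-", "")
    return "*" * (len(digits) - 4) + digits[-4:]

def test_only_last_four_digits_display():
    shown = mask_card_number("4111 1111 1111 1234")
    # the last 4 digits are visible...
    assert shown.endswith("1234")
    # ...and no other digit from the card leaks through
    assert not any(ch.isdigit() for ch in shown[:-4])
```

Running a test like this against every screen that shows payment data is what makes the constraint enforceable rather than aspirational.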
A note about Story size estimating:
Once a new constraint is added to the system, any stories in the product backlog that will have to comply with this constraint may need re-sizing if there is material time required to comply with the constraint. Said another way, all estimates for future stories will need to take into account the fact that the constraint must be complied with in order to call the story “done.”
If you’re doing Scrum, then add the constraints to your Definition of Done.
Definition of Done
- All stories must comply with all of the story constraints <link to constraints page on wiki>.
- All code must be peer reviewed within 4 hours of checkin.
- If a change is made to the web services interface, the change must be documented on the official web services API wiki page <link to api on wiki>.
- All code must have automated testing that is consistent with the “Automated Testing Guidelines” <link to guidelines on wiki>.
- Any change in functionality that is visible in the GUI must be at least tested manually (automated tests are also acceptable) against the integration environment before making the functionality available for a QA review.
Another note about Story size estimating:
Like I said above for the constraints, the Definition of Done should always be taken into account when sizing user stories. It might help to bring a copy of your DoD to your grooming and planning meetings to remind developers of everything included in their estimates.
- A User Story Checklist
- User Story Basics – The Longer Story
- To see what User Story Traps cause a lot of pain on teams, see User Story Traps
- To see how to make sure your stories are “ready” for implementation, check out Roman Pichler’s article, The Definition of Ready
- To see more info on creating Acceptance Tests, see my Presentation – Story Testing Patterns
- To see how well your team is implementing the User Story practice, see the The Bradley User Story Maturity Model
- To see more information on grooming user stories, see:
- Ron Jeffries’ article: Essential XP: Card, Conversation, Confirmation
- The best book on the subject of User Stories, IMO, is Mike Cohn’s User Stories Applied.
- I wrote a skit for a presentation I did at MileHighAgile 2011 that demonstrates a team doing Product Backlog Grooming for a User Story. The skit covers backlog grooming pretty well, with the one exception that estimation is not covered. You can download the script of the skit here.