New and Improved User Story Lifecycle Diagram — Free Creative Commons PDF download!

I had a designer friend update my User Story Lifecycle diagram, and she did a fantastic job!  You can download the PDF here:  http://www.scrumcrazy.com/lifecycle

New and Improved Diagram:

[Image: new User Story Life Cycle diagram]

The Older Diagram (also still available at the above link):

[Image: older User Story Life Cycle diagram]


Dealing with Hard to Find Bugs (Sprint Killers) in Scrum

This question was asked in an online forum (I’m paraphrasing):

> How do people here handle the impact of difficult errors/bugs (but not legacy bugs) on sprint progress?  Like ones that take weeks to solve?

In my professional opinion, the answer is: we make them transparent and try to improve upon them — at Daily Scrums, Sprint Reviews, and Sprint Retrospectives.

I tend to coach teams to handle bugs in Scrum using the Bradley Bug Chart.

One of the aspects of the Bradley Bug Chart is that bugs like the one mentioned (i.e. non-legacy bugs) end up on the Sprint Backlog.  Because they end up on the Sprint Backlog, if one is using story points and velocity, no story points are assigned and no velocity is gained from fixing these bugs.  This, once again, helps provide transparency into the lack of progress that the team might be making due to bug fixing.  The truth can be a hard pill to swallow, but the truth will also help set you free from these mistakes in the future.
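To make the velocity impact concrete, here is a minimal sketch in Python (the item names are hypothetical, not from any real backlog) of how velocity counts only pointed Product Backlog Items, so a Sprint spent mostly on bug fixing shows up immediately as low velocity:

```python
# Minimal sketch (hypothetical names): velocity counts only completed,
# pointed Product Backlog Items. Non-legacy bugs live on the Sprint
# Backlog with no points, so fixing them adds nothing to velocity.
completed_items = [
    {"title": "Search members by last name", "points": 5, "done": True},
    {"title": "Export report to CSV", "points": 3, "done": True},
    {"title": "Fix intermittent save bug", "points": None, "done": True},  # Sprint Backlog bug
]

velocity = sum(item["points"] for item in completed_items
               if item["done"] and item["points"] is not None)
print(velocity)  # 8 -- the bug-fixing effort is visible as "missing" velocity
```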

The transparency should help all involved understand that there is something that needs improving, something that is dragging down the team’s ability to produce new features and working software.  I would argue that this is not a sprint killer.  It is simply a fact of complex software development.

The real issue comes down to this:  Scrum transparency is trying to tell your team something.  What is it trying to tell your team?  What is your team going to do about it?


ScrumCrazy.com update:

  • Looking for Agile/Scrum/Kanban Coaching or Training?  Contact us for more info.  We have some good specials going on right now, but they won’t last long!
  • Finally, a Scrum certification course aimed at ALL members of the Scrum team! Developers, Testers, Business Analysts, Scrum Masters, Product Owners, etc.  Feb 28th in the Denver Tech Center.  More info and sign up here!

I’m Giving a Free Global Webinar this Wednesday on “Acceptance and Story Testing Patterns”

I just wanted to send a quick note to my followers to let you know that I’m giving a free global webinar this Wednesday on “Acceptance and Story Testing Patterns.”

Here is the abstract for the presentation:

Acceptance Testing, also known as Story Testing, is vital to achieving the Agile vision of “working software over comprehensive documentation.” It’s very important that acceptance tests be easily automated, which leads to a phenomenon you may have heard of called the “Agile Specification.” In this presentation, we’ll discuss eight different patterns for expressing acceptance tests so that they are easy to execute and automate. We’ll talk about popular patterns like Given/When/Then and Specification by Example, as well as some other patterns you’ve probably never seen. Attendees will participate in an interactive exercise that will allow them to apply the most frequently used Acceptance Testing patterns.
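To give a flavor of the Given/When/Then pattern mentioned in the abstract, here is a minimal sketch of one acceptance test in that style (plain Python with a hypothetical ShoppingCart class; the webinar is about the patterns themselves, not this particular code):

```python
# Minimal Given/When/Then sketch using a hypothetical ShoppingCart class.
class ShoppingCart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    @property
    def total(self):
        return sum(price for _, price in self.items)


def test_adding_an_item_updates_the_total():
    # Given an empty cart
    cart = ShoppingCart()
    # When the user adds a book priced at 10.00
    cart.add("book", 10.00)
    # Then the cart total is 10.00
    assert cart.total == 10.00


if __name__ == "__main__":
    test_adding_an_item_updates_the_total()
    print("acceptance test passed")
```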

You can sign up for the free webinar here:

http://tinyurl.com/d4d6ehw

Handling Non Functional Requirements in User Stories and Scrum

Handling non-functional requirements in User Stories can at first seem difficult, but as it turns out, there’s a pretty easy way to handle them.

For performance requirements and many other non-functional requirements (NFRs), one can use constraints and stories. What I usually coach is to create a story to document the NFR and define story tests for it. Then, I suggest adding the story tests as a “constraint.” A constraint is something that all implemented stories (features and functionality) must comply with. If you’re using Scrum, then you’ll want to add something like this to your Definition of Done (DoD): “All stories must comply with all of the story constraints.”

Example

Step 1: Identify and quantify the constraint and put it in terms that your users and business stakeholders will understand.

Story Title: System response time

  • Story Test #1: Test that the system responds to all non-search requests within 1 second of receiving the request
  • Story Test #2: Test that the system responds to all search requests within 10 seconds of receiving the request
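If you later automate these two story tests, they might look something like this minimal sketch (the request paths and the fake client are hypothetical stand-ins for your real system and HTTP client; the 1-second and 10-second limits come straight from the story tests above):

```python
import time

class _FakeClient:
    """Stand-in for your real HTTP client; replace with requests/httpx, etc."""
    def get(self, path):
        time.sleep(0.01)  # simulate a fast response

def elapsed_seconds(call):
    start = time.monotonic()
    call()
    return time.monotonic() - start

def test_non_search_requests_respond_within_1_second(client):
    # Story Test #1: all non-search requests respond within 1 second.
    assert elapsed_seconds(lambda: client.get("/account/profile")) < 1.0

def test_search_requests_respond_within_10_seconds(client):
    # Story Test #2: all search requests respond within 10 seconds.
    assert elapsed_seconds(lambda: client.get("/search?q=smith")) < 10.0

if __name__ == "__main__":
    client = _FakeClient()
    test_non_search_requests_respond_within_1_second(client)
    test_search_requests_respond_within_10_seconds(client)
    print("both response-time story tests passed")
```

In practice you would point the client at your integration or performance environment rather than a fake, but the tests stay phrased in the business terms above.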

Some things to keep in mind:

  • If you cannot quantify the story in concrete terms, treat that as a bad smell: it usually indicates a requirement that is too vague to be implemented. Vague NFRs have the same problems that vague functional requirements do: it is hard to answer the question “How will I know when this story is correctly done?”
  • Be sure not to specify a technical solution or implementation in the story, because stories are about “The What” (“What the user wants”), not “The How” (“How this is implemented”).
  • Plan, estimate, split (if necessary), and implement this story like all other user stories, as part of the Product Backlog (Scrum).

Once this story is complete, the entire system will be in compliance with this particular constraint.

If your constraint is not system-wide or far reaching:

  • Just add it as a story test for that story. But again, specify the requirement, not the implementation, in terms the business stakeholders will understand.

The decision to create a constraint or not rests on whether the constraint should be taken into account in at least several future stories (or system-wide). If it will apply to several future stories, then create a constraint. If it won’t, then just add the NFR as a story test to the stories it applies to, or create a separate story to comply with the NFR in the small part of the system that requires it.

Step 2: Add the Story Tests to your list of constraints (and to your Definition of Done if you’re doing Scrum)

Publish your list of constraints (and/or DoD) somewhere that is highly visible. Even if you keep your constraints electronically, print them out in large print and post them on your Scrum board or in your team area.

Constraints
  • Test that the system responds to all non-search requests within 1 second of receiving the request.
  • Test that the system responds to all search requests within 10 seconds of receiving the request.
  • Test that the system logs a user out after 10 seconds of inactivity and redirects their browser to the home page.
  • Test that any update to a person’s payment information (anywhere in the system) is logged to the payment_preferences log, along with the following information:
    • IP Address of logged in person
    • Old preference value, new preference value
    • Date/time of change
  • Test that any time a person’s credit card number is shown in the application, that only the last 4 digits display.
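The last constraint above, for example, could eventually be backed by an automated check along these lines (a minimal sketch with a hypothetical masking helper; the point is that the constraint stays stated as a test, not as an implementation):

```python
import re

# Hypothetical helper standing in for however the application masks card numbers.
def mask_card_number(card_number: str) -> str:
    return "*" * (len(card_number) - 4) + card_number[-4:]

def test_only_last_four_digits_of_card_number_display():
    shown = mask_card_number("4111111111111111")
    assert shown.endswith("1111")              # last 4 digits still visible
    assert not re.search(r"\d{5,}", shown)     # no longer run of digits leaks through

if __name__ == "__main__":
    test_only_last_four_digits_of_card_number_display()
    print("card-masking constraint test passed")
```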

A note about Story size estimating:
Once a new constraint is added to the system, any stories in the product backlog that will have to comply with this constraint may need re-sizing if there is material time required to comply with the constraint. Said another way, all estimates for future stories will need to take into account the fact that the constraint must be complied with in order to call the story “done.”

If you’re doing Scrum, then add the constraints to your Definition of Done.

Definition of Done
  • All stories must comply with all of the story constraints <link to constraints page on wiki>.
  • All code must be peer reviewed within 4 hours of check-in.
  • If a change is made to the web services interface, the change must be documented on the official web services API wiki page <link to api on wiki>.
  • All code must have automated testing that is consistent with the “Automated Testing Guidelines” <link to guidelines on wiki>.
  • Any change in functionality that is visible in the GUI must be at least manually tested (automated tests are also acceptable) against the integration environment before making the functionality available for a QA review.

Another note about Story size estimating:
Like I said above for the constraints, the Definition of Done should always be taken into account when sizing user stories. It might help to bring a copy of your DoD to your grooming and planning meetings to remind developers of everything that is included in their estimates.


A Visual Diagram of the User Story Life Cycle

This blog post is now deprecated.  Please see the new updated blog post:

http://scrumcrazy.wordpress.com/2013/06/13/new-and-improved-user-story-lifeycle-diagram-free-creative-commons-pdf-download/

 

My Preferred Agile, Scrum, and XP Resources

If you’re printing this post, it can be found online at: http://www.scrumcrazy.com/My+Preferred+Agile%2C+Scrum%2C+and+XP+Resources

A friend recently asked me this question:

What would you recommend in terms of the best book(s) to learn about Agile (Scrum) with XP practices? That is, if you had a team of developers who were newbies to Agile, Scrum, and XP, what books/articles would you give them to bring them up to speed on what they should be doing and how they should be doing it?

This question from my friend is a tricky one: it is very broad and generic, and my friend gave me no extra team or organizational context to go on, so about all I can do is give a generic answer, which is what I’ve done below. If you’re looking to combine Scrum with XP practices, be sure to see Kniberg’s book under “Scrum” below.

Don’t have time to read all of these? Well then, read the first couple from each category, and then continue working your way down each list.

My Preferred Resources

All are in order of my personal preference in each category.


Scrum

  1. The Scrum Guide (Must read for all)
  2. Deemer, et al. “The Scrum Primer”
  3. Cohn’s _Agile Estimating and Planning_ (Must read for Scrum Masters)
  4. Pichler’s _Agile Product Management…_ (Must read for Product Owners)
  5. Cohn’s _Succeeding With Agile…_ (Must read for Scrum Masters once they have a few Sprints under their belts)
  6. Kniberg’s _Scrum and XP From the Trenches_ (Note that there is a free PDF download of this book if you register with InfoQ – something I recommend anyway)
  7. Derby/Larsen’s _Agile Retrospectives_

XP (Extreme Programming)

  1. Jeffries’ “What is Extreme Programming?”
  2. Jeffries’ _Extreme Programming Installed_
  3. Koskela’s _Test Driven…_
  4. Martin’s _Clean Code_
  5. Feathers’ _Working Effectively With Legacy Code_
  6. “The Rules of Extreme Programming”
  7. Wiki entry on XP Practices

Agile/XP Testing

  1. Summary of Lisa Crispin’s Presentation to Agile Denver on Test Automation
  2. Crispin’s “Using the Agile Testing Quadrants”
  3. Crispin/Gregory’s _Agile Testing_
  4. Crispin/House’s _Testing Extreme Programming_
  5. Cohn’s “The Forgotten Layer of the Test Automation Pyramid”
  6. Osherove’s _The Art of Unit Testing_

User Stories (which originated in XP)

  1. My “User Story Basics” article and all of the links at the bottom of that article
  2. Cohn’s _User Stories Applied_
  3. Cohn’s _Agile Estimating and Planning…_ (Chapter 12: Splitting User Stories)
  4. Lawrence’s “Patterns for Splitting User Stories”

Special Agile Topics (if applicable)

  1. Deemer’s “The Distributed Scrum Primer” (If some or all of your team is remotely distributed)
  2. My article entitled “The Role of Managers In Scrum” and all of the links at the bottom of that article
  3. Larman/Vodde’s _Scaling Lean Agile…_ (If your Agile transformation involves a very large organization)

User Story Basics – What is a User Story?

What is a User Story? I’m glad you asked!

First of all, it’s important to say that User Stories are not part of Scrum as defined by the required practices in the Scrum Guide. User Stories are just one way to represent Product Backlog Items in Scrum; while it is the most popular method, it is not the only one. Still, I would like to remind you that the User Story practice is totally independent of Scrum, and thus it is not defined by Scrum. As such, everything else in this post is about the User Story practice and not Scrum itself.

Beware the common misconception!

There is a common misconception in the industry that a User Story is a sentence like:

  • As a <user> I want <some functionality> so that <some benefit is realized>.

THIS IS NOT A USER STORY!!! This is the biggest User Story trap in existence! See Traps #1 and #8 of my article on User Story Traps.

What is a User Story?

<Definition>
A user story describes functionality of a system that will be valuable to a Non-Development Team (NDT) stakeholder of a system or software. User stories are composed of three aspects:

  • a written description or short title of the story used as a token for planning and as a reminder to have conversations
  • conversations about the story that serve to flesh out the details of the story
  • acceptance tests that convey and document details and that can be used to determine when a story is complete

</Definition>
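To make the three aspects easy to remember, here is a minimal sketch of a user story represented as a simple record (plain Python, purely illustrative; nothing in the User Story practice requires any particular structure or tool):

```python
from dataclasses import dataclass, field

@dataclass
class UserStory:
    # 1. The written description or short title -- a planning token and a
    #    reminder to have conversations.
    title: str
    # 2. Notes captured from conversations that flesh out the details.
    conversation_notes: list = field(default_factory=list)
    # 3. Acceptance tests that convey details and tell us when the story is done.
    acceptance_tests: list = field(default_factory=list)

story = UserStory(
    title="Member searches the directory by last name",
    conversation_notes=["Partial matches are fine; exact matches sort first."],
    acceptance_tests=["Test that searching for 'Smi' returns 'Smith' and 'Smiley'."],
)
print(story.title, "-", len(story.acceptance_tests), "acceptance test(s)")
```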

What do they mean by Acceptance Tests?

Typically, in the context of a User Story definition, we mean tests that are represented by conversations, textual descriptions, tables, diagrams, automated tests, and so forth. When these Acceptance Tests are applied to the completed, implemented User Story, all of the tests should pass, and will thus prove that the story has been implemented correctly. If some functionality was not covered in a User Story acceptance test, then it wasn’t a requirement for that particular User Story.

Technically, in the context of a User Story definition, an acceptance test need not be automated or implemented. At the minimum, it should be described conceptually. The test should then be executed in order to prove the story and get acceptance, whether that be a manual or automated process. If your conceptual acceptance tests are described by one or more automated tests, then that is generally a much better practice, but not absolutely required.

Acceptance Tests should be automatable about 90+% of the time, though again, it is not required that they be automated. Having said all of that, when teams strive for development speed and quality, very few get far along that road without automating a large portion of their acceptance tests.

Acceptance Tests, in the context of User Stories, are also sometimes called Story Tests, Acceptance Criteria, Conditions of Satisfaction, and Test Confirmations.

Ron Jeffries, one of the co-inventors of User Stories and Extreme Programming (where the User Story practice comes from), has a good article that also describes User Stories in a basic way.

When do these Conversations and Acceptance Tests get created?

Typically, this happens in weekly Product Backlog grooming sessions (also known as a Story Writing Workshop, Story Grooming, etc.), but it can also happen informally. The most effective backlog grooming includes some stakeholder/user representatives, the entire development team, and a Product Owner (Scrum) or Customer (XP). These sessions happen weekly and usually last 1-2 hours each. The goal of the sessions is to get stories that are “ready,” meaning the team has a shared understanding of the Acceptance Tests and has the vast majority of the information it needs to implement (code) the feature. See What does Product Backlog Grooming Look Like? for more on that topic. Keep in mind that sometimes a single User Story will be discussed in 2-3 grooming sessions before it is “ready,” especially if there are open questions or complex logic involved.

Frequently Asked Questions

Should I use a User Story to represent bugs/defects in a system?
The short answer is “it depends.” If it is a legacy or deferred bug, then yes, and it should end up on the Product Backlog (story points assigned). If it is a bug that was introduced since Scrum/Agile was put in place, then no, and it should end up on the Sprint Backlog (no story points assigned). See One way to handle Bugs and Production Support in Scrum for the longer answer.
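The short answer boils down to a very small decision rule. Here is a minimal sketch of that rule (hypothetical field names, purely to make the routing explicit):

```python
# Minimal sketch of the routing rule for bugs (hypothetical field names).
def route_bug(is_legacy_or_deferred: bool) -> dict:
    if is_legacy_or_deferred:
        # Legacy/deferred bug: a Product Backlog Item, sized with story points.
        return {"backlog": "Product Backlog", "story_points": "assigned"}
    # Bug introduced since adopting Scrum/Agile: Sprint Backlog, unpointed.
    return {"backlog": "Sprint Backlog", "story_points": "none"}

print(route_bug(True))   # {'backlog': 'Product Backlog', 'story_points': 'assigned'}
print(route_bug(False))  # {'backlog': 'Sprint Backlog', 'story_points': 'none'}
```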

Where do I get more info?

Handling Scope Changes Mid-Sprint in Scrum

The first thing about handling scope changes mid-Sprint is to recognize what type of scope change it is.

Bug or Production Support request?

If it’s a bug, or a production support research request, then my preferred method is to use One way to handle Bugs and Production Support in Scrum. As it says in that article, I hope you don’t need that chart. If you’re one of those teams where such bugs and production support requests are very rare (say, on average, once or less every 2-3 months), I’d say just do it, and you can choose whether to make it a Product Backlog Item or put it on the Sprint Backlog. You’ll probably lean towards a PBI if it’s a big thing, or put it on the Sprint Backlog if it’s a small thing.

Scope Change to PBI in Progress

If it’s a scope change to a Product Backlog Item in progress, my hope is that this means a new or changed acceptance/story test of some sort. If you’re not practicing Acceptance Test Driven Development (ATDD), you should be! For you non-ATDD types, the old school terminology for this is a “requirement change.” I’ve been around the block a few times coaching Scrum Teams on this scenario. My best advice is this:

  • If the change in scope is likely to increase the originally estimated size for the story by more than about 10%, then the change should be a new Product Backlog Item by itself. You may need the whole team to re-estimate the newly changed story.
  • If it is less than about 10%, then just change your acceptance tests, do it alongside the current PBI, and move on with life.
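The ~10% rule of thumb above is easy to state as a tiny helper (a minimal sketch; the threshold is my rough guideline, not a hard number):

```python
def scope_change_becomes_new_pbi(original_estimate: float,
                                 additional_estimate: float,
                                 threshold: float = 0.10) -> bool:
    """Rough guideline: if the change grows the story by more than ~10%,
    split it out as a new Product Backlog Item."""
    return additional_estimate > original_estimate * threshold

# A 5-point story that grows by roughly 2 points of extra work:
print(scope_change_becomes_new_pbi(5, 2))     # True  -> new PBI
print(scope_change_becomes_new_pbi(5, 0.25))  # False -> just update the acceptance tests
```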

Swapping in the new PBI

If the scope change does result in a new PBI, then in rare cases where it is strongly warranted, a Scrum Team should be flexible enough to swap that PBI in and do it in the current Sprint. However, this usually means some other PBI will have to be swapped out of the Sprint as well. If these kinds of “swaps” begin happening regularly, then your team needs to do a root cause analysis on the swaps in the Retrospective.

  • In this scenario, don’t forget that the new, urgent PBI needs to be groomed, sized, and tasked out on the Sprint Backlog. Get all of your team together and you can usually do this in a matter of minutes.

Story Testing Patterns – My Recent Presentation at Mile High Agile

You can now find my recent presentation, “Story Testing Patterns” (along with all the handouts, etc.), on my website here:

http://www.scrumcrazy.com/Presentation+-+Story+Testing+Patterns

Feel free to add any comments here on my blog.


Scrum Strategy – The Dev Team Improvement Backlog

The strategy I describe below is one I’ve used successfully in coaching Scrum teams. Your mileage may vary, of course.

Overview

  • The Dev Team has a thing called the “Improvement Backlog,” where it keeps a list of things (Improvement Backlog Items — IBIs) that are intended to improve the Dev Team’s productivity.

Purpose

  • The purpose of the strategy is to create a culture of self organization and “inspect and adapt” improvement within the Dev Team.
  • The strategy also acts as a platform for allowing the Scrum Team to resolve technical debt, though technical debt is just one type of potential improvement. As an aside, bugs are not technical debt. Bugs should be handled differently. See more about technical debt under the “Considerations” heading below.
  • This strategy is an implementation of the Improvement Community/Improvement Backlog concept that Mike Cohn talks about in Chapter 4 of Succeeding with Agile. More on that below, under the “Other Notes” heading.

Preconditions

  • The Dev Team has enough flexibility from management and/or the Product Owner to carve out a small amount of time each sprint for productivity improvement. The rule of thumb I use is 10-20% of each sprint. If you have trouble convincing management, you might try asking management how much they know about the 7th habit of highly effective people.
  • The Dev Team is not in the habit of creating new technical debt, testing debt, or process debt. You need to stop the bleeding before you can stay current and catch up. This strategy is about staying current and catching up, so don’t use it until your team has stopped the bleeding. For help with stopping the bleeding, have your Dev Team add “creates no new technical, testing, or process debt” to their Definition of Done (DoD) and work on that strategy first. If you already have a good DoD and your team faithfully adheres to it, you probably don’t need that statement.

Strategy

  • The Dev Team has a thing called the “Improvement Backlog,” where it keeps a list of things (Improvement Backlog Items — IBIs) that are intended to improve the Dev Team’s productivity. (Note that I did not say velocity, as I do not believe velocity to be a direct measure of productivity, and I believe that software productivity can’t be directly measured.) This backlog has many different things on it, and it is ordered by the Dev Team. The types of things are essentially anything that *might* have a chance to improve team productivity, while at the same time allowing the team to keep a sustainable pace. I say “might” because we want to create a culture where it is safe to experiment.
  • Examples of IBI’s:
    • Team celebrations
    • Scrum process improvements
    • Technical Debt resolution
    • Bug Fixing (rare, but if it can improve Team productivity it’s ok)
    • Exploring new technologies
    • Attending Conferences/Training
  • Each item is groomed, estimated^1, and ordered, just like PBIs. Further, it’s best to break these down into smaller items that can be done incrementally, when possible. The Dev Team can use retrospectives to feed and groom the Improvement Backlog, or any other time (formal or informal) to add to it. Like PBIs, only the ones towards the top of the backlog are well groomed. The Dev Team can choose to invite the PO into the grooming activity if they so desire, but it is not required. The IB is ordered based on many factors, but the Dev Team is responsible for maximizing the productivity value of the IBIs to the team and the wider organization. (A minimal sketch of such a backlog appears after this list.)
    • ^1 These items should NEVER be estimated in story points, NOR counted in velocity, and these items should not end up on the Product Backlog. I prefer hours to estimate them, but any other unit (real or fake) can be used to do relative sizing of the items, so long as the unit is not confused with PBIs, User Stories, or velocity — which are entirely different concepts.
    • These items should never represent code or testing around Product Backlog Items. If you need to improve the code or testing around the functionality of a PBI, then incorporate the “clean up as you go” time (also known as the Boy Scout Rule) into your estimate for the Product Backlog Item. IBIs are specifically for things that are not directly related to PBIs.
  • How much time do we dedicate to IBIs each Sprint?
    This is entirely up to the Dev Team, but made visible to the PO and the wider organization. Making them visible on a team’s “Scrum board” is sufficient. These items can be planned in the Sprint Planning Meeting, or outside of it, but they should probably end up as part of the Sprint Backlog, just like PBI’s do. I find that the best way to dedicate the time is to negotiate with the PO on some number between ~10-20% of each Sprint. The Dev Team should let the PO influence that number, but not control it. Also, there may be circumstances where the team decides to use a much higher or lower % of the Sprint, but the team had better be darn sure that they can justify the outlier to the PO and wider organization.
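As promised above, here is a minimal sketch of what an Improvement Backlog might look like as data (hypothetical items and hour estimates; the only rules it encodes are that IBIs are ordered by the Dev Team and sized in hours rather than story points):

```python
from dataclasses import dataclass

@dataclass
class ImprovementBacklogItem:
    title: str
    estimate_hours: float   # never story points, never counted in velocity
    order: int              # ordered by the Dev Team, not the Product Owner

improvement_backlog = [
    ImprovementBacklogItem("Speed up the slowest integration test suite", 8, 1),
    ImprovementBacklogItem("Spike: evaluate a newer mocking library", 4, 2),
    ImprovementBacklogItem("Pay down duplication in the billing module", 12, 3),
    ImprovementBacklogItem("Team celebration for the release", 2, 4),
]

# Only the items near the top need to be well groomed.
for ibi in sorted(improvement_backlog, key=lambda i: i.order)[:2]:
    print(f"{ibi.order}. {ibi.title} (~{ibi.estimate_hours}h)")
```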

Considerations

  • In my opinion, this strategy is quite useful for almost all teams that meet the preconditions mentioned above.
  • This strategy should never be used as a “dumping ground” for deferring work of any kind. See the precondition above about “no new debt.”
  • Some teams execute this strategy by putting these items on the product backlog. I don’t like this for numerous reasons. The most important reason I don’t like it is because it incorrectly gives the PO authority over something that is clearly the domain of the self organizing Dev Team. The PO owns the “what features” part of Scrum, and the Dev Team owns the “how we deliver features” part of Scrum.
  • I have found that this strategy is a huge morale booster to Development Teams and really increases the self organization aspect of the Dev Team.
  • It is very difficult to measure productivity increases in software development, but anecdotally I have observed significant productivity improvements when this strategy is implemented.
  • Resolving technical debt is a tricky thing. Most people define technical debt incorrectly, in my opinion. I essentially subscribe to Martin Fowler’s view of technical debt as a definition. It’s hard to predict the value of resolving technical debt, and many believe that you should never resolve technical debt unless it is directly related to a product backlog item being implemented. As such, I encourage teams to prioritize technical debt in line with any other items on the Improvement Backlog that could improve their productivity. This tends to lead towards fewer technical debt items being worked on, so it lowers the risk of doing non-valuable work. At the same time, it leaves open the possibility that the Dev Team really does believe that resolving technical debt is an improvement worth pursuing.

Other Notes
