Sunday, October 23, 2011

Velocity Assumptions: take them into account when using velocity.

Story points and velocity have been common practice in Agile processes for some time now. Velocity is observed as:

(A) velocity = story points completed in an iteration.

By using velocity we calculate the remaining duration as:

(B) number of iterations left = remaining story points / velocity

If you look at the formula above, one thing should jump out at you: time is the only known factor. Story points were estimated and are not really known.
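Formulas (A) and (B) can be sketched in a few lines. All the numbers below are invented for illustration:

```python
# A minimal sketch of formulas (A) and (B); every number here is made up.
completed_points = [21, 18, 24]  # story points completed in past iterations (estimates, not measurements)
velocity = sum(completed_points) / len(completed_points)  # (A): points per iteration

remaining_points = 55  # sum of the backlog's *estimated* story points
iterations_left = remaining_points / velocity             # (B)
print(round(iterations_left, 1))
```

Note that both inputs to (B) are estimates: `remaining_points` was guessed up front, and `velocity` was computed from guessed points.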

The formulas above come from simple physics, where velocity is distance divided by the time it took to traverse it.

To continue with the analogy, imagine a traveler trying to guess how long it will take to reach a destination. She doesn’t have a map or other instruments to know the distance between where she is and her destination. She estimates the distance the best she can, establishes a few milestones, and measures the time it takes to travel from one to another. She doesn’t have an accurate measurement of how far she traveled, and uses her initial estimate to calculate velocity. Would you trust her Estimated Time of Arrival? That’s what we do in software.

To be fair, using velocity and story points might be the best we have to put forward, but be aware of the number you are using; it is still an estimate.

A few more assumptions and weaknesses you should take into account are below.

1. Estimates hold their proportionality.

The first thing we try to ensure when using story points and velocity for estimation is that a five point story is a little bit larger than twice a two point story.

During a sprint or iteration you might find that, given the established velocity, a story that was estimated as a 2 pointer should take only 3 days, but it ends up taking 7. The advice is to not re-estimate the story, but to look at the current backlog and see if the proportions still hold. In other words, if there’s reason to believe that a 5 point story is no longer twice a 2 pointer, you should either raise the 2 pointers or reduce the 5 point ones accordingly [1]. If you follow this advice, your velocity will be skewed. The reasoning behind this advice is that the factors that made the story take so much longer will repeat themselves with other stories, so keep them in for better estimation.
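A small sketch of that advice, using a hypothetical backlog: instead of re-estimating the finished story, the team rescales the remaining 2 pointers, which in turn inflates the remaining-points term of formula (B).

```python
# Hypothetical backlog; story names and points are invented for illustration.
backlog = [
    {"name": "registration", "points": 5},
    {"name": "login", "points": 2},
    {"name": "password reset", "points": 2},
]

# Suppose the team concludes the proportions no longer hold and every former
# 2 pointer really carries 5 points of work: rescale the backlog, not the
# completed story.
for story in backlog:
    if story["points"] == 2:
        story["points"] = 5

# The remaining-points term in formula (B) grows accordingly.
print(sum(s["points"] for s in backlog))
```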

Unless we have more stories of the same type, we don’t know if the proportions still hold. The registration story might be twice the size of the login one, but it is just a guess. In equation (B) above we are trying to find two variables: duration and remaining points.

Once again, Story Points are an estimate and not really known. We think we have the proportions right, but that might change with time.

Using our traveler example, after she has traveled for a day, she looks at what she went through and decides whether the unknown terrain ahead will offer the same challenges she faced before. Unless she knows some of what lies ahead, she won’t be able to predict if she will make the same distance in the same amount of time. It is fair to expect that as a project progresses, the team will be better prepared to estimate the duration of the stories.

2. Distractions/Interruptions remain the same

One other assumption is that, over time, distractions and interruptions consume a relatively constant amount of time. This is why we calculate velocity at the end of the iteration, instead of story by story. But how many times have you had an iteration with an unforeseen two-day, whole-team production issue? How often does that happen? According to the law of large numbers [2], if you collect enough measurements (probably more than 20), your estimate will converge to the expected value. In other words, after you collect more than 20 velocity data points, your “interruptions” will average out and you will converge to your team’s real velocity.

We don’t have enough data points for the law of large numbers to apply. We start measuring and using velocity right away, with 2 or 3 iterations. Those data points are not enough to properly estimate velocity according to the law of large numbers. Most times we provide project cost projections without measuring velocity at all.
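A toy simulation makes the point concrete. All the numbers here are invented: each iteration completes roughly 20 points, and some iterations lose time to an unforeseen whole-team interruption. The average of the first 2 or 3 iterations is a much noisier estimate than the average of 20 or more.

```python
import random

random.seed(1)  # fixed seed so the sketch is repeatable

# Toy model (invented numbers): ~20 points per iteration, with a 15% chance
# of a production issue eating 5-10 points' worth of time.
def iteration_velocity():
    points = random.gauss(20, 3)
    if random.random() < 0.15:  # an unforeseen whole-team interruption
        points -= random.uniform(5, 10)
    return max(points, 0)

samples = [iteration_velocity() for _ in range(100)]
after_3 = sum(samples[:3]) / 3     # what we usually project from
after_20 = sum(samples[:20]) / 20  # closer to the long-run average
print(round(after_3, 1), round(after_20, 1))
```

Run it with different seeds and the 3-iteration average swings far more than the 20-iteration one, which is exactly the gap between how early we start projecting and when the law of large numbers actually kicks in.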

To make matters worse, we continue to work on process improvement so that velocity goes up. Measuring these improvements is hard, because we can’t tell whether velocity went up because of them or because of flawed estimates.

3. Teams will remain constant

This assumption, coupled with the previous one, is what concerns me the most. Teams do not remain constant. Team members come and go: not because they are not fully assigned, but because people leave the company or feel they need a new challenge. Aside from that, people get sick, take vacations, etc.

4. Incomplete stories get no credit

It is common practice to grant no credit towards velocity for incomplete stories. This advice exists to prevent arguments about how much of the story was actually complete. Besides, you will get full credit for the story in the next iteration, allowing you to “average out” the velocity.

So after all of this, what do I think you should do?

What if instead of measuring velocity at the end of the iteration, we measured it at the end of the story?

You will get a better idea of whether you broke the proportionality rule. In other words, if you estimated a story at a 2 and it took 6 hours, while all the other 5s also took 6 hours, you know that your 2 pointer was actually a 5 pointer and you can adjust it.
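A sketch of that per-story measurement, with invented data: group actual hours by estimated size, and a broken proportion shows up as hours-per-point that don't line up across sizes.

```python
# Invented per-story measurements: estimated points vs. actual hours.
stories = [
    {"points": 5, "hours": 6},
    {"points": 5, "hours": 7},
    {"points": 2, "hours": 6},  # a "2" that took as long as the 5s
    {"points": 2, "hours": 2},
]

# Group actual hours by estimated size.
by_size = {}
for s in stories:
    by_size.setdefault(s["points"], []).append(s["hours"])

# If the proportions held, hours-per-point would be roughly equal across sizes.
for size, hours in sorted(by_size.items()):
    avg = sum(hours) / len(hours)
    print(f"{size}-pointers: avg {avg:.1f}h, {avg / size:.1f}h per point")
```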

Another advantage of this is quicker convergence under the law of large numbers. Your project only has a limited number of iterations, but it is not uncommon for a project to have more than 20 stories.

The team and its skills will remain constant for a single story, and the variation between stories will be taken care of by the law of large numbers as well, unless you bias the experiment on purpose by assigning one type of work to a single team.

If you are familiar with the Cycle Time concept, you will notice that this is what I'm advocating.

The idea of measuring velocity at the end of an iteration is powerful, but it doesn't converge to the expected value quickly enough; it takes longer than we can afford. Be aware of the assumptions behind it in order to know how to better use it.

Using Cycle Time for estimating duration might provide you with an improved solution, but it won't give you a perfect estimate. If you really want to improve your estimates, work on stabilizing your system by reducing variations and noise (disruptions). It won't be quick, but it will be effective.

[1] Cohn, Mike. 2005. Agile Estimating and Planning. Prentice Hall. p. 61


Friday, June 24, 2011

Code Reviews and Quality

Are code reviews the best way of building in code quality? Although good practice, code reviews do not assure quality; they inspect it. To build quality in, you need to ensure your requirements are good, give your team members time, and make sure they have the knowledge they need to build the software.

Test/behavior driven development increases code quality; it drives requirements to the point of specification like no other tool we have right now. If you write test cases that only cover the happy path, though, you will probably end up with the same quality as producing only user stories, or nothing at all.

Of course, reviewing the acceptance tests with the engineers writing the code before they start is a good practice; it communicates the intent of the test cases and features.

Deadline pressure drives developers to compromise quality. Sometimes you have to compromise on elegance when the feature you are working on is now two weeks, or more, late and you're stuck on a problem that you can't seem to find a solution for. Having mandatory code reviews will aggravate this problem, not address it. Developers now have to set time aside for code reviews, compressing the time they have to create elegant solutions.

Why not eliminate deadlines? (Just kidding.) Instead of estimating features, perhaps use SLAs based on past history. If 95% of small features in the past were done within two weeks, the chance of the next small feature being done in two weeks is 95%. This technique helps to set the right expectations with your stakeholders.
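Such an SLA is just a percentile over historical durations. A minimal sketch, with invented data and the nearest-rank percentile method:

```python
import math

# Invented history: how many days past small features took to complete.
past_durations_days = [4, 6, 7, 5, 9, 8, 6, 12, 7, 10,
                       5, 6, 8, 7, 14, 6, 9, 7, 8, 11]

def percentile(data, p):
    """Nearest-rank percentile: smallest value with at least p% of data at or below it."""
    ordered = sorted(data)
    k = max(math.ceil(p * len(ordered) / 100) - 1, 0)
    return ordered[k]

sla = percentile(past_durations_days, 95)
print(f"95% of past small features finished within {sla} days")
```

Quote the 95th-percentile figure as the SLA instead of a per-feature estimate; as new features complete, their durations feed back into the history.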

Another approach is to review the estimates themselves, which is what Planning Poker, for example, accomplishes.

One other reason for bad code is the developer's lack of knowledge about the domain, the tools used, etc. Code reviews help, but instead of reducing rework they increase it: precious resources were spent writing the code, and more will be spent reworking it to conform to the reviewer's suggestions.

Pair programming might be more appropriate for this problem. Have the new developer pair with a more knowledgeable developer. If the new developer is the one writing the code with the assistance of a more experienced one, learning will probably occur in a less adversarial environment and code will come out already reviewed. No rework necessary.

Having code inspections/reviews is good, but they won't build quality in; at most they inspect it afterwards. Code reviews can create an adversarial environment where people feel threatened and defensive -- not the environment most conducive to learning.