Wednesday, September 23, 2009

Team Preservation vs. Prioritization

Keeping a team together is important, but not to the point where it makes prioritization unmanageable.

Shared services teams are a necessity and they won't go away. The problem with them is that backlog prioritization is hard. You need to identify a product owner who is effectively a proxy for all the other users. This proxy's job is thankless; he or she has to get multiple competing stakeholders to agree on priorities when they have no incentive to collaborate.

One way of solving this issue would be to form and disband teams as projects come and go. The Form, Storm, Norm and Perform [1] model for teams suggests that we keep teams together as long as possible, and this approach would go against it.

So how would I solve this difficult problem?

I thought about:

  1. Have a client proxy responsible for accommodating and negotiating priorities across stakeholders.
  2. Dedicate resources or hours to specific stakeholders.
  3. Form/Disband teams as projects come and go.
  4. Create a cap and trade system similar to the one described in [2].

Lately, I have been leaning heavily towards option 4. In short, every stakeholder is given a number of hours, or a percentage of the team's time: a cap. They are allowed to trade their hours (or points) with the other stakeholders, but once the iteration, or some other time period, starts, any points/hours that were not used are forfeited in a use-it-or-lose-it fashion (I still have to think this one through).
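
To make the mechanics concrete, here is a rough sketch of what one iteration could look like under such a system. Everything in it is made up for illustration: the stakeholder names, the 100-hour capacity and the traded amounts.

```python
# Hypothetical sketch of the cap-and-trade mechanics described above.
# Stakeholder names and the 100-hour team capacity are invented.
team_capacity_hours = 100

# Initial caps: each stakeholder's share of the team's time for this iteration.
caps = {"billing": 40, "reporting": 35, "internal_tools": 25}

def trade(caps, seller, buyer, hours):
    """Move hours from one stakeholder's cap to another's before the iteration starts."""
    if caps[seller] < hours:
        raise ValueError("cannot sell more hours than the current cap")
    caps[seller] -= hours
    caps[buyer] += hours

# Reporting is "desperate" and buys 10 hours from internal_tools, at whatever
# premium the two of them negotiate outside this system.
trade(caps, seller="internal_tools", buyer="reporting", hours=10)

# Once the iteration starts, unused hours are forfeited ("use it or lose it").
hours_used = {"billing": 38, "reporting": 45, "internal_tools": 10}
forfeited = {name: caps[name] - hours_used[name] for name in caps}

print(caps)       # {'billing': 40, 'reporting': 45, 'internal_tools': 15}
print(forfeited)  # {'billing': 2, 'reporting': 0, 'internal_tools': 5}
```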

This approach is not infallible. It can lead to context switching if all stakeholders decide to exercise their options in a specific iteration or, worse, in every iteration.

On the other hand, this could lead to some interesting behaviors, where stakeholders with medium-priority projects could negotiate their caps at a premium with stakeholders that are "desperate" to complete their projects. In other words, it rewards collaboration. It also allows the shared services team to measure how much each stakeholder "conceded", eliminating the squeaky wheel effect [3].

Another interesting consequence is that it would encourage better portfolio management on the stakeholders' part. I don't mean to reduce openness to change, but committing the team to a task/story with low return value would equate to throwing away money.

The cap and trade system seeks to reduce the conflict-resolution burden for all involved and gives stakeholders an incentive to negotiate and collaborate. I have not implemented such a system and have not completely thought it through, but it seems promising.

Besides all of that, it could lead to an interesting game.

What problems do you think I'm going to encounter? How did you solve this problem before? What were your good/bad experiences?

Please leave a comment below :)

[1] http://changingminds.org/explanations/groups/form_storm_norm_perform.htm
[2] http://www.americanprogress.org/issues/2008/01/capandtrade101.html
[3] http://en.wikipedia.org/wiki/Squeaky_wheel

Monday, September 7, 2009

How stable should velocity be?

If you are estimating story size instead of duration, velocity is what will allow you to determine project duration and thus cost. In such an environment, velocity is an important metric and needs to be monitored accordingly.
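As a back-of-the-envelope illustration of why that matters (every number below is invented), this is how velocity turns size estimates into a duration and cost forecast:
```python
# Invented numbers, just to show how velocity turns size estimates into a forecast.
remaining_points = 120        # sum of story point estimates left in the backlog
velocity = 20                 # points completed per iteration, on average
iteration_weeks = 2
loaded_cost_per_week = 10000  # whatever the team costs per week, fully loaded

iterations_left = remaining_points / velocity        # 6 iterations
weeks_left = iterations_left * iteration_weeks       # 12 weeks
projected_cost = weeks_left * loaded_cost_per_week   # 120,000

print(iterations_left, weeks_left, projected_cost)

# If velocity swings between 12 and 28 instead of holding near 20, the same
# backlog projects to anywhere from roughly 8.5 to 20 weeks.
```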
As I wrote back in April, we have been collecting data to see whether or not our velocity was improving. My original intent was to find out what influenced velocity; however, that goal became making sure that velocity is somewhat stable.
As we started to collect metrics (total # of team hours available, # of points with a clear definition of done, # of distractions and # of blocks), we started to pay more attention to velocity. It was interesting to see that velocity varied beyond what we expected. We could always explain why it went up or down, but it kept varying regardless. One of the issues we found was that story estimates were wrong: the team had inadvertently started using a different scale that turned out to be too small. In other words, a story that used to be a 5 became a 1, so the stories that were a 1 had nowhere to go.
If you are using story points, you need to make an effort to ensure that the relative sizes of stories remain valid. As you work through your stories, look for stories with disproportionate sizes: a story of size 5 should be a little bigger than twice a 2.
I regularly use a room painting problem to explain story points. It works well because people will tend to estimate the time it will take to paint a room by the number of walls and their sizes, but the number of cuts (windows, doors, etc.) is what really determines how long it will take. So once we go through the exercise of estimating painting multiple rooms based on sizes, I introduce the cutting variable and thus have the team re-estimate the "project". In other words, when the team learns something new about a set of stories, they should go back to the backlog and re-evaluate their estimates.
During this effort of collecting metrics, I did create control charts for velocity, but these weren't as useful as I expected. If you simply observe velocity you can already tell when there's a problem; you won't need control limits for that. Pay attention to what is causing your velocity to vary. If it is distractions, make sure they go away!
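For reference, the calculation behind those charts is simple; the sketch below assumes plain three-sigma limits around the mean velocity, and the velocity numbers are made up.
```python
# Made-up velocities for eight past iterations.
velocities = [18, 22, 20, 15, 26, 19, 21, 24]

mean = sum(velocities) / len(velocities)
variance = sum((v - mean) ** 2 for v in velocities) / len(velocities)
sigma = variance ** 0.5

upper_limit = mean + 3 * sigma   # velocities above this are "out of control"
lower_limit = mean - 3 * sigma   # velocities below this are "out of control"

out_of_control = [v for v in velocities if v < lower_limit or v > upper_limit]
print(mean, lower_limit, upper_limit, out_of_control)
```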
The participation of the team is important. They have to share your ideal that this is intended to help them achieve their full potential, and that gaming the system will not help anybody.
Velocity shouldn't swing much from one iteration to the next; it should be stable and somewhat predictable. If it does swing, you have a problem and need to address it.

Thursday, August 6, 2009

Rewriting Systems and Waterfall: A Common Theme?

If batch size is the number of lines of code being delivered to production, complete system rewrites and waterfall have one thing in common: big batch sizes.

There are good reasons to rewrite your software, like it being dependent on technology that is no longer supported [1]. The belief that the code is a mess is not a good one [2].

When you make the decision to rewrite a system, your customer will not get any benefits until your brand new system catches up to the features already available in the old one. In other words, the batch size is as large as the current system. Perhaps a little bit smaller, but not much. Similarly, in a pure waterfall project, your customers will not get the benefits until you finish it.

To reduce the batch size you could slowly rewrite small parts of the code until it gets to the desired state (Refactoring), or migrate the code feature by feature (Vertical Slice Migration). Let's call these approaches Small Releases.

Vertical Slice Migration and Refactoring aren't always attainable. Vertical Slice Migration might not be possible if the system you are replacing is tightly integrated or doesn't have an easy way to make an external call, for example. You shouldn't use Refactoring if the desired state is not a natural progression from the current one. That's normally the case when you are moving platforms or languages (e.g. COBOL to Java).

Reducing the batch size doesn't reduce cost; it allows your customers to give you feedback and realize the benefits of your efforts early. Each release you make to production has test, migration and deployment costs. Likewise, the waterfall method can also be cheaper than agile if you can fix requirements upfront (from my experience this is unrealistic in most cases). If there's no need for early feedback, doing multiple releases only makes the project more expensive.
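
Some back-of-the-envelope arithmetic makes the trade-off explicit; the overhead figure below is entirely made up.

```python
# Entirely made-up numbers for the cost side of small releases.
release_overhead_hours = 40  # regression run + migration + deployment, per release

one_big_release = 1 * release_overhead_hours      # 40 hours of release cost
ten_small_releases = 10 * release_overhead_hours  # 400 hours of release cost

# The extra 360 hours only pay off if the feedback and early value from the
# first nine releases are worth more than that to your customer.
print(one_big_release, ten_small_releases)
```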

As for the risk of the change, you could argue that Small Releases incur greater risk. After all, you are making multiple changes, and that increases your probability of error. I think the opposite is true. By making small changes you are in a better position to evaluate the risks and take the right approaches to mitigate them. It is easier to manage an incremental upgrade than a full system deployment. The same holds for a waterfall project where, by doing a big bang release, the risk is concentrated in one single event: The Release.

In summary, a big batch is cheap to produce, but the consequences if it is bad can be severe.

[1] http://www.nytimes.com/2002/05/12/us/for-parts-nasa-boldly-goes-on-ebay.html
[2] http://www.joelonsoftware.com/articles/fog0000000069.html

Tuesday, June 2, 2009

Deploy Every Iteration?

How would you change your software development process if you had to deploy to production at the end of every iteration? Do you see yourself requiring "hardening iterations"?

Deploying every iteration is not for the faint of heart. You need to ensure that the software you are releasing is solid and integrates well with the other pieces. It requires you to have your engineering practices down cold, reducing the time you spend on the release (a.k.a. hardening iterations) as much as possible.

If you look at the activities that happen in a hardening iteration to enable a release, you generally have: building the software, reviewing code, running regression and load tests, and finally deploying it. This set of activities will probably grow or shrink depending on the regulatory environment you are in.

To reduce the time you spend building the software you need to automate your build process as much as you can. In the Java space you can use tools like Ant and Maven; for .NET, NAnt is an alternative. Maven is useful because, besides automating the build, it generates a nice web site with reports about the software you're building.

Once you have a good build, i.e. no compilation issues, you need to check it to make sure it has no regressions or bugs. This is probably where automation will give you the most benefits. The shorter you can get your regression tests to take, the more time you will have to actually write code. There are plenty of tools out there that will let you automate your tests; if you are writing web applications you might want to look at Selenium. Regression tests are not the only prerequisite for deployment: in some shops you might also need to load test your software to make sure you still satisfy your performance service level agreements.
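
To give an idea of what an automated regression check looks like, here is a minimal sketch using Selenium's Python WebDriver bindings; the URL, element ids and expected text are hypothetical.

```python
# Minimal regression-test sketch; the page, element ids and assertion are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
try:
    driver.get("http://test.example.com/login")
    driver.find_element(By.ID, "username").send_keys("qa_user")
    driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    driver.find_element(By.ID, "login-button").click()
    # A very coarse check that the login flow still works end to end.
    assert "Welcome" in driver.page_source, "regression: welcome page not shown after login"
finally:
    driver.quit()
```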

Finally, you need a fast and easy way of deploying your application. Here I have no tool advice for you. On the bright side, if you are releasing every iteration you won't have a lot to deploy, so it should be quick and easy. The longer you go between releases, the more complex your deployment might be.
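
Even without a dedicated tool, a small script goes a long way. The sketch below assumes a single Tomcat server and a war artifact; the host, paths and commands are hypothetical and entirely environment-specific.

```python
# Rough deployment-script sketch; host, paths and artifact name are hypothetical.
import subprocess

ARTIFACT = "target/myapp-1.2.3.war"
SERVER = "deploy@app01.example.com"
DEPLOY_DIR = "/opt/tomcat/webapps/"

def run(cmd):
    print("running:", " ".join(cmd))
    subprocess.check_call(cmd)

run(["scp", ARTIFACT, SERVER + ":" + DEPLOY_DIR])    # push the new build
run(["ssh", SERVER, "/opt/tomcat/bin/shutdown.sh"])  # stop the app server
run(["ssh", SERVER, "/opt/tomcat/bin/startup.sh"])   # start it with the new war
```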

As you add the three automation pieces above, consider implementing a continuous integration process. It would allow you to eventually have code travel from the software developer's mind to production in no time! A few free tools in this space are Hudson, CruiseControl, Continuum, etc.
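
Conceptually, these tools do something like the toy loop below: watch the repository and build and test every new revision. This is only a sketch; the repository URL and the regression script are hypothetical, and a real CI server does far more (scheduling, reporting, notifications, etc.).

```python
# Toy continuous integration loop; repository URL and regression script are hypothetical.
import subprocess
import time

REPO_URL = "http://svn.example.com/myapp/trunk"

def current_revision(url):
    # `svn info` prints a "Revision: N" line; pick it out.
    out = subprocess.check_output(["svn", "info", url]).decode()
    for line in out.splitlines():
        if line.startswith("Revision:"):
            return line.split(":", 1)[1].strip()

last_built = None
while True:
    revision = current_revision(REPO_URL)
    if revision != last_built:
        subprocess.check_call(["mvn", "clean", "install"])    # compile + unit tests
        subprocess.check_call(["./run-regression-suite.sh"])  # hypothetical regression suite
        last_built = revision
    time.sleep(300)  # poll every five minutes
```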

In a utopian world, every time a software developer commits a change to your source code management system (you have one of those, right?), you would repeat the whole process automatically and send the software to production. I do not recommend doing this, except in the simplest of cases!

As you can see, if you inspect your deployment process constantly, you will eventually be able to spend little to no time on "hardening" your software for deployment. Repetitive tasks should be left to a computer, not to a highly paid professional ;)

Wednesday, May 20, 2009

People Over Process and Tools

People and Interactions Over Processes and Tools. In other words: Brain over Fingers.

If Agile values people over processes and tools, why does it require so much automation? The reality is that people are good at creative tasks, so in order to really use their potential we need to free them from tedious, repetitive tasks. People: we need your brains, not your fingers!


Perhaps my biggest challenge implementing Agile so far has been convincing people to let go of their repetitive tasks and automate their processes. This is usually not an issue with software development professionals, but it generally is with the other "delivery functions" (QA, Operations, Project Management).

I'm not sure if it is related to people's backgrounds, but I find that QA analysts and System Engineers who were once developers are more receptive to automation and have greater success with it.

It is not hard to convince a developer to automate his build process, for example. In fact, I haven't seen a manual build in a long time now. That's definitely not the case with QA Analysts and automated test cases, or System Engineers and their deployment or monitoring processes.

The way I normally approach this issue is by selling the benefits to the professional in question. I show engineers and analysts how much easier and more pleasurable their lives would be if they didn't have to do the same repetitive task over and over again. For example, I tell QA analysts how much more challenging their job would be if, instead of executing their tests, they spent most of their time coming up with new test cases. In this case, we need QA analysts to use their brains to come up with test cases, not their fingers to type or click the mouse.

By shifting the work of the QA analyst to test case development and automation we are now using their brains and not only their fingers. 

It is true that automating test cases might require as much software development skill as coding the system being tested. In cases like this, I'd have the QA analyst define the test cases and have developers automate them. I found that QA analysts are sometimes comfortable reading the test code and are willing to "audit" the automated tests to make sure they are indeed covering their intended cases.

Deployment is another area of the software development process that is ripe for automation. How much better would life be if, instead of spending a few hours deploying an application in the middle of the night, System Engineers spent those hours writing scripts that would deploy the application in minutes? What if they spent that time figuring out how to break the deployment apart so that all of the components could already be sitting in production, inactive, ahead of the cutover? These are tasks that require brains, not fingers.

Once again, we need your brains and not your fingers!


Monday, April 20, 2009

First Cut at Metrics

As I've written before, we have been trying to identify metrics that will give us an idea of how much our process is under control. For that reason we decided to collect velocity, number of stories with a definition of done, number of impediments per day, number of "distractions" and total number of team hours available.

Even though velocity is not really comparable across teams and is easily gamed, it is what we would like to "control". Velocity is a measure of capacity, and we would like to improve it or at least keep it constant.

The team feels that the other metrics would be good control variables.

Total number of team hours available: the team cannot produce as much if they are having fun on the beach ;). This variable is calculated by adding up the number of hours worked by each team member, whether on the project or not.

Ratio of tasks or stories with definition of done: We have once experienced the joy of having user stories with great acceptance tests, and the productivity gains that come from them (thanks Rhonda Dillon!).

I strongly believe that if there's something that positively influences productivity, it is good acceptance tests, or the power of knowing what is expected of you. One problem, though, is that using the acceptance test moniker might be limiting, and people might get confused (what is the acceptance test for deploying the application to environment X?). That's why we settled on definition of done instead. I could easily spend hours talking about the benefits of creating test cases before starting the work.

This metric is calculated by dividing the number of stories with a definition of done by the total number of stories.

So far we discussed the “positive” control variables, now let’s get to the “negative” ones.

The number of "distractions" is supposed to measure the time that team members spend chasing issues unrelated to the iteration goals, like production problems, supporting other developers using the platform, etc. We deliberately stayed away from tracking time, since we believed it would not be as accurate and would be redundant with the company's official tracking system. The problem is that we don't control the time tracking system; Finance does.

The last control variable I listed above is the number of impediments per day. It measures stories that might have had a clear definition of done, but were held up for one reason or another. This will tell us whether we had "idle" resources (never really true) and why.
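
To make the bookkeeping concrete, here is how these numbers could be pulled together for a single iteration; every figure below is invented.

```python
# Invented numbers for one iteration, just to show how the metrics are computed.
iteration = {
    "hours_worked": [72, 80, 64, 76, 80],  # per team member, on the project or not
    "total_stories": 14,
    "stories_with_definition_of_done": 11,
    "distractions": 6,                     # production issues, platform support, etc.
    "impediments_per_day": [0, 1, 0, 2, 0, 0, 1, 0, 0, 1],
    "points_completed": 21,                # velocity, the thing we want to "control"
}

total_hours_available = sum(iteration["hours_worked"])
definition_of_done_ratio = (iteration["stories_with_definition_of_done"]
                            / iteration["total_stories"])
total_impediments = sum(iteration["impediments_per_day"])

print(total_hours_available)               # 372
print(round(definition_of_done_ratio, 2))  # 0.79
print(total_impediments)                   # 5
```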

Next challenge: Where/How to collect this information and how to present it properly?

Wednesday, April 8, 2009

Metrics and Productivity are not the same thing!

In my quest to satisfy a management request to come up with metrics, I came across Ron Jeffries' fabulous article about increasing productivity.

I couldn't agree more with his perspective that increasing productivity and metrics are not exactly coupled, and that it is more important to adapt what you inspect to the problems you are trying to address. In other words, the metrics you are watching now are probably different from the metrics you'll watch next iteration. That is, of course, if you fixed the problem.

Which brings up the question: should we put monitors on the metrics we collect? In other words, if we decided that the number of hours spent supporting deployments is too high, should we continuously monitor it to prevent it from ever being a problem again? If the metric in question is derived from some automated process, I'm absolutely in favor of it.

Be aware, though, that metrics should not hinder productivity. In other words, don't bring down your team's velocity just so you can collect metrics. Whatever you decide to measure should be derived from some automated process you run (preferably your build process).

If you are in a Java shop, take a look at Maven and Sonar. These will give you a lot of information on a single web page.

Tuesday, April 7, 2009

Agile Metrics

We all know that metrics can be bad, especially if used for performance evaluations, but we need to be accountable as well.

So here I start my journey on how and what to measure for an agile team.

Here are a few links to start with:


