Saturday, August 7, 2010

The Most Important Role in Scrum: The Product Owner - Part II

I came across an interesting observation in a book I have been reading lately [1]. Alan Shalloway wrote "imagine having completed a software development project, only at the end to lose all of the source code".

Although it is a scary scenario, it illustrates well that the hard work in a software development project is not writing the code itself but figuring out what the software is really supposed to do.

Alan goes on to state that it would probably take less time to write the software the second time than it took the first. I have lost source code in the past (not a whole project, though), and rewriting the lost code was faster than writing it the first time, and its design more elegant.

Nowadays, losing code should not be a common occurrence because of all the infrastructure we put in place for a project, but I'm sure this simple observation will still resonate with developers.

So, if coding is not where the bulk of the time is spent, then what is? Most of the time goes to product management activities, like discovering customer needs and finding ways to realize those needs, not necessarily to writing the code. In other words, the time is spent on minimizing the biggest risk in any project: building what the customer doesn't need.

The product owner role is responsible not only for understanding and uncovering customer needs, but also for communicating them to the team (communication is the second biggest risk). This communication takes the shape of Minimum Marketable Features at the beginning of the project, User Stories during the project, and Acceptance Tests (hopefully with the collaboration of QA staff) closer to the development of the software.

After two blog posts about the importance of the product owner's role, I hope this oft-neglected role looms a little larger in the reader's mind and, in fact, rises to the same level of attention as the Team and Scrum Master roles.


[1] Lean-Agile Software Development, Alan Shalloway, Guy Beaver, James R. Trott

Tuesday, June 29, 2010

TDD for Operations

Software developers have enjoyed the benefits of Test Driven Development for a long time now. System Operations professionals have not yet been test infected.

Test Driven Development (TDD) allows developers to refactor and add new features with the confidence that the impact of their changes is restricted to the intended components. System Operations professionals don't always have such a tool and rely on human knowledge to make sure all integrated systems will behave the same after a change like an OS upgrade.

In software development, teams often create more code to test the executable code. What could serve that purpose in the System Operations case? Monitors!

Monitors are a nice analogy to the red/green way of writing code. Instead of writing a test that doesn't pass, creating the code, and then seeing the test pass, operations professionals create a set of monitors that alert until a certain component is installed.

For example, before installing a new web application, a monitor is created to watch whether the web server is up. This monitor would alert until a web server is actually put in place and listening on the desired domain and port. Once that is done, another monitor would be created for, say, the number of database connections in the pool, and so on.
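The web-server example above can be sketched as a "red until installed" monitor. This is a minimal sketch, assuming a plain TCP connection check is enough to decide up/down; the hostname and port are illustrative, not taken from any specific monitoring product.

```python
import socket

def port_is_open(host, port, timeout=2.0):
    """Return True if something is listening on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_web_server(host="www.example.com", port=80):
    # Like a failing test, this monitor stays "red" (alerting)
    # until a web server is actually put in place on that port.
    if port_is_open(host, port):
        return "GREEN: web server is up"
    return "RED: web server is down, alerting"
```

Run on a schedule, the check starts out red and flips to green the moment the component is installed, mirroring the red/green rhythm of TDD.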

This approach allows for more frequent changes to infrastructure. If there's a solid deployment process with easy rollback of failed changes, software modifications can be pushed to production at any time at low risk (Continuous Deployment).
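The "push anytime, roll back on failure" flow can be sketched as a small control function. The deploy, rollback, and health-check callables are placeholders for whatever scripts a real pipeline would run; this is an illustration of the pattern, not any particular tool.

```python
def deploy_with_rollback(deploy, rollback, health_check):
    """Run a deployment; undo it automatically if the health check fails."""
    deploy()
    if health_check():
        return "deployed"
    rollback()  # failed change is backed out, keeping the risk low
    return "rolled back"
```

With the monitors from the previous section serving as the health check, every push either goes green or is backed out automatically.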

Testing the application constantly in pre-production environments will ensure there are few to no bugs in the software; however, it doesn't ensure configuration issues are not present once it is moved to other environments. An option to mitigate this risk is to run a complete regression test suite against all environments.

There are tools, such as HP SiteScope, which can effectively use functional tests as transactional monitors. Transactional monitors based on functional tests are great, but they won't provide the more granular results an individual monitor does. As with regular functional tests, these monitors are great for detecting an issue; however, they don't help pinpoint the root cause quickly. If using functional monitors, make sure to include execution times. This ensures the monitors go off if the system degrades beyond agreed service levels.
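Wrapping a functional test with a timer, as suggested above, can be sketched like this. The SLA threshold and the check callable are illustrative assumptions, not tied to SiteScope or any other specific tool.

```python
import time

def timed_check(check, sla_seconds=2.0):
    """Run a functional check; flag both failures and SLA degradation."""
    start = time.monotonic()
    passed = check()
    elapsed = time.monotonic() - start
    if not passed:
        return "RED: functional check failed"
    if elapsed > sla_seconds:
        # The check passed, but the system has degraded beyond the SLA.
        return f"YELLOW: passed in {elapsed:.2f}s, over the {sla_seconds:.1f}s SLA"
    return f"GREEN: passed in {elapsed:.2f}s"
```

The yellow state is the point of the exercise: a functional monitor without timing would stay green right up until the degraded system finally fails outright.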

The automation effort has slowly moved from development to QA. It is time for it to infect the operations teams as well. These teams will greatly benefit from deployment automation and integrated monitoring.

Tuesday, June 1, 2010

Component Teams as Open Source Projects

I share Michael Cohn's principle that component teams are not good and should be avoided [1]. One way I've been considering lately to avoid component teams is to create what I call private open source projects.

Component teams are attractive to software developers: they make developers feel that their component is a software product. This sentiment would be a good thing if the company were actually in the software component business. Such an arrangement may lead to feature bloat and a lack of focus on the company's core business.

The private open source model has all the same benefits of the public one, the main difference being the size of the community. In a private model the community would be restricted to members of the company.

The private open source model would require internally the same type of collaboration infrastructure that public open source projects create externally. The component team could then be reduced to a few part-time committers, selected either by meritocracy or by management appointment.

The project that needs a new feature added to the component would either provide the feature itself or fund a team to do it (just like the commercial open source model). The committers would then review the proposed changes and commit them to the shared code base.

The private open source model ensures that the features being added to the component are relevant to the business and not just developer-favored features.

Even when a component team exists, allowing others to contribute might be good. The next time the component team needs to expand, for example, it might consider hiring a contributor outside of the team.

Let's look at the downsides of such a model.

The committers might have an idea of the component's technical direction that diverges from the rest of the community's. This conflict could result in fragmentation of the community and forks. In the past, external communities have experienced forks that eventually got resolved (e.g., the XFree86 project). In an internal community, this fracture could mean that a component is no longer shared, as the community might not be big enough to sustain two projects. Hopefully, due to the homogeneity of the community, the fragmentation risk will be low.

There are times when the crowd is wrong in vetoing a specific idea [2]. The worst case would be a disruptive innovation being discarded because the community doesn't understand it well. A mitigating approach is to have an incubator area, like the Apache Foundation and others do.

There might not be a community large enough to sustain a successful open source project. In some cases the component in question might be shared across multiple projects, but not multiple teams.

As with public open source projects, a complex code base might serve as an inhibitor to contributions. Another common complaint about public open source projects is the lack of documentation. A private one might suffer from the same affliction.

Sharing a reusable component is important and we should strive to do so. Just be careful with the creation of component teams as they might be hard to undo.



Monday, May 3, 2010

The Most Important Role in Scrum: The Product Owner

The product owner has a lot of responsibilities; one of them is to address 3 out of the 5 generally cited levels of planning. He or she is responsible for the Vision, Roadmap and Release Planning.

"The vision describes why a project is being undertaken and what the desired end state is (Schwaber 2004, p. 68 [2])." Without the vision, projects drift from release to release never fully achieving any significant ROI, and eventually being cancelled. I find it to be a smell when the team can't describe why a project is being undertaken.

Much has been written about the vision, and you can find more in this great article by Roman Pichler [1].

A product owner is also responsible for the product roadmap. This important artifact lists which high-level features will be available in each release. The roadmap also sets a cadence for customers: how often to expect a new release and what will be in it (if/when the roadmap is made public). Knowledge of the roadmap creates a sense of security in customers and can lead to better rates of acquisition and retention.

The Release Plan lists all the minimum marketable features at a finer level of detail than the roadmap. The product owner balances the content of a release against the time it takes to produce it. Release externally too often and you might fail to generate excitement; release too rarely and you risk losing your customer base to your competition.

A product owner's job doesn't end at release planning; he or she will have to help the team break down the features into smaller stories so that they can be estimated, define success criteria, and so on. In the projects I have seen where development teams were most productive (hyper-productive?), the success definition or acceptance criteria were so clear that the team was able to estimate the stories accurately. The design and testing were simple, and the number of defects low. It is important to note, though, that this type of backlog grooming sometimes requires the whole team. The product owner should be able to rely on other team members to help.

In summary, the product owner is the role that can make or break a product or project, and a team as a consequence. The product owner is not only responsible for making sure that the team is producing high ROI, but is also instrumental in helping it achieve hyper-productivity.




[2] Schwaber, Ken. Agile Project Management with Scrum. Microsoft Press. 2004.


Sunday, April 4, 2010

Quality is More than Absence of Defects

For years I insisted on automated unit tests, with the naive assumption that if you take care of the pennies, the dollars will take care of themselves. I even watched unit test coverage numbers closely to make sure they were high enough, but still, the perceived quality of the software was not good. It turned out that “I was penny-wise and pound-foolish.”

When people complained about our quality, I couldn’t understand what they were talking about!

  • Our code coverage was above 85%! A test code audit ensured that good assertions were actually in place.
  • Static analysis tools we were using didn’t show anything important.
  • The number of open bugs in the QA system was below 100, and a number of them were deemed not important by the client.
  • Iteration demos to the user were successful with no serious issues detected.
  • There were no broken builds in our continuous integration process.
  • Quantitative analysis (cyclomatic complexity, Crap4j, etc.) pointed to code that was very good.

In other words, all of the standard industry metrics for quality signaled that the code was good.

As we started to ask more questions about why certain groups thought our quality was low, we discovered a few issues.

Our unit tests ran outside the container, so in some cases the application would not start up when deployed in the container. When it did start up, links were not available, or clicking on them would take you to a page full of display errors, and so on. The unit tests were indeed verifying a lot of the functionality, but they were not verifying the UI logic or configuration settings; in our case, we were stopping at the controller level.

A second problem we uncovered was that demos were being done on individual developer machines and not on an “official server.” Teams would spend a significant amount of time preparing for the demo, configuring the application on the individual machine by hand, and sometimes not even using a build out of the continuous integration process.

Another common complaint was that the application would not run the first time it was deployed to the test environment, and would require developers to get involved in troubleshooting and re-configuring the environment.

The quality issue was not about software defects but about an unpolished product. Complaints didn’t necessarily come from the end user, but from internal stakeholders.

To solve the problems, we changed our definition of done to require in-container manual tests in addition to unit tests. This resolved a lot of the small issues that were being found by QA.

One other important improvement we are now undertaking is to fully automate our deployment. We expect that, in the same way continuous integration improved the build process, this new process will improve our deployment scripts/instructions.

With the deployment process automated, the next step is to at least automate a few smoke tests. This should allow us to quickly identify if the deployment succeeded or not and what problems we might have.
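A few smoke tests of the kind described above can be sketched as simple HTTP checks run right after deployment. This is a minimal sketch, assuming status-code checks against a handful of pages are enough to tell whether the deployment succeeded; the URLs are illustrative.

```python
import urllib.request

# Hypothetical checks for the newly deployed application.
SMOKE_CHECKS = [
    ("home page", "http://app.example.com/"),
    ("login page", "http://app.example.com/login"),
]

def run_smoke_tests(checks=SMOKE_CHECKS):
    """Return a list of (name, passed) results, one per check."""
    results = []
    for name, url in checks:
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                results.append((name, resp.status == 200))
        except OSError:
            results.append((name, False))
    return results

def deployment_succeeded(results):
    # The deployment fails if any smoke check fails; the failing
    # check's name points at the problem area.
    return all(passed for _, passed in results)
```

Because each check is named, a failure immediately narrows down where the deployment went wrong instead of just reporting that "something" is broken.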

What did we learn from this?

  • Question/listen to all of your customers, including team members (Product Owner, Scrum Master and team) and some non-team members (e.g. Operations). Not all team members are comfortable speaking up during retrospectives.
  • Quality is a complete package, not only the absence of bugs. Your deployment process is part of the application as well.
  • Automated unit tests are not enough, even when they are not purely unit tests and bleed into the functional and integration testing realm.
  • Question your definition of done. Is it complete enough?

Unit tests are a great tool; however, passing them should not be the extent of your definition of done. Keep your eyes on acceptance tests, load tests and others; they will help you avoid issues beyond bugs. And above all, listen carefully to all stakeholders in your project; having them in the retrospective doesn’t mean they are speaking their minds.

Tuesday, February 9, 2010

How Agile are you?

Does it really matter if your Agile implementation is not ideal or complete? It does. However, it is more about the journey than the destination! Knowing your limitations and weaknesses will help you improve towards the goal of being more agile. In other words, it is important that you continue to inspect and adapt.

There are multiple tools to help you evaluate the current state of your Agile process [1,2]. The one I have used is the Nokia test [1]. It is not a perfect measure of where you are, and its scores can be debated; Agile processes are too complex and subjective to summarize in a single number or score. So focus on the big areas and not on specific scores.

A good analogy is measuring someone’s body temperature: the Nokia test is closer to putting your hand on someone’s forehead than to using a thermometer. It will give you a sense of how far above normal the body temperature is, but it won’t tell you whether it is 103F or 104F.

In 5 years of using Scrum, our team has gone through multiple adjustments. Sometimes these adjustments were necessary because of internal factors, but most of them came from external ones (reorganizations, available skills, etc.).

It’s sad to say, but there were phases when our Scrum implementation was almost perfect; right now, though, it is what Sutherland would call "Scrum Butt".

The Nokia test allowed me to perform a sort of Strengths, Weaknesses, Opportunities and Threats (SWOT) analysis [3]. It showed me where our process is strong and needs to be nurtured, but also where we are weak and need to shore up support.

Not surprisingly, our weaknesses are in areas where we thought we shouldn’t play a role. For example, we have a QA department that doesn’t report to the team, so QA practices were not considered seriously and were left to that department to manage. We are now working with the QA team on improving our common work practices.

Another area where we were weak was Agile specifications. We don’t necessarily have Business Analysts on staff, so we are now using developers to create these. Remember, Agile promotes generalizing specialists.

As is the case with iteration retrospectives, perhaps we should invest in doing process retrospectives. They should not happen as often as the iteration ones, but often enough to have an impact on the project.

A source that was very helpful as I created these action plans was John Little’s blog series on the Nokia test [4]. It helped me create persuasive arguments to present to the other teams involved and upper management.

Using an external tool helped me take a step back and assess where our team is in our journey to being agile. It was a great experience that I intend to repeat more often.


[1] Online Nokia Test

[2] Comparative Agility

[3] SWOT link

[4] John Little’s blog series about the Nokia Test

Monday, January 4, 2010

Do Matrix Organizations Foster Bad Behaviors?

Matrix organizations deliberately create a power struggle between project managers (PMs) and resource managers (RMs), causing the whole system to be dynamically unstable. In other words, energy in the form of conflict resolution must be spent to keep the organization stable and working efficiently towards business goals.

There are three major types of matrix organizations: weak, strong and balanced, defined by the PM's role in projects.

In a weak matrix, the PM is responsible for coordinating the tasks, not for actually delivering them. Their role is to be the liaison between multiple RMs delivering work.

As a result of the RMs managing all the work, PMs have less control over the project, and work unrelated to the project might get done without business approval.

In a strong matrix the PM and the resources assigned to the project work closer together, and the PM is accountable for the delivery of the project. The RMs are responsible for human resource tasks like hiring, training, and so on.

This organizational type promotes a healthier alignment of resources to business goals, to the detriment of technical tasks that might not have a clear upfront business value (healthy refactoring, for example). The team is concerned with the success of the project, not the long-term health of the system/product.

To address this issue, purely technical structures are sometimes created; Architecture Review Boards, for example, get formed to make sure that systems are being designed properly.

Balanced matrices are the ideal, in which power is shared equally between PMs and RMs, though little direction is provided on how to achieve this; it seems to rely on the interpersonal skills of the RMs and PMs.

The saying “absolute power results in absolute corruption” does not apply when interests and power are perfectly aligned. In other words, when the person responsible for the business success of a venture has absolute power to make decisions on its behalf, that “corruption” is good for the venture.

In matrix organizations, the power struggle is often between well-intended technical individuals (PMs and RMs) trying to best serve the business. When there is a clear business owner who wields strong power, conflicts are surfaced and addressed quickly, in a way that best benefits the overall venture.

I should be clear, though, that conflicts and struggles might happen not only between PMs and RMs but among the multiple RMs as well, and that this is not necessarily the norm.

In conclusion, a matrix organization requires energy to resolve conflicts; without a clear mechanism to resolve them, the organization might regress into outright bickering.