Posts Tagged “estimation”

Estimates are a fact of life for most of us. And often – though not always – they are a necessity. If I weren’t using some form of estimation on my current projects, they would be twisted up like Sherman bow ties. And on fire.

This brings us to an apparent paradox. The larger and more novel something is, the more we need to put some estimates around it – lest it end up a railroad-side barbecue. Yet the larger and more novel something is, the tougher it is to devise meaningful estimates.

But this isn’t a real paradox. It’s just a starting point.

At the very inception of a project or product we don’t need a thorough estimate of everything. We just need a simple set of estimates that tell us what to label the tick marks on our deliverable-level cost graph. Were we to cost out our initial estimates, would we be measuring project deliverables in increments of one thousand dollars, or ten thousand, or a hundred thousand? This lets the organization and team know what league they are playing in and how much to invest in upfront estimation and project tracking.
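To make that concrete: the arithmetic behind those tick marks is nothing more than rough counts times rough unit costs, rounded down to a power of ten. Here is a minimal sketch in Python – the deliverables and dollar figures are entirely made up for illustration, not taken from any real project:

    # Rough order-of-magnitude sizing: decide what to label the tick marks
    # on the deliverable-level cost graph. All figures below are hypothetical.
    import math

    rough_deliverables = {
        "customer-facing screens": (12, 8_000),   # (count, rough dollars each)
        "integration points":      (3, 25_000),
        "ad-hoc reporting":        (1, 15_000),
    }

    total = sum(count * cost for count, cost in rough_deliverables.values())
    tick = 10 ** math.floor(math.log10(total))   # nearest power of ten below the total

    print(f"Rough total: ${total:,}")
    print(f"Label the cost graph in increments of: ${tick:,}")

The answer we care about at this stage is not the total; it’s whether the tick marks read one thousand, ten thousand, or a hundred thousand.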

Most things we need to do should not be difficult to estimate. Anyone who’s been around the organization long enough should know how long it takes to spin up a shared development environment, the average time to build out a screen, and the usual duration of system and acceptance testing. For the rest we use development spikes.

We use a development spike to tell us whether some technical activity is feasible and/or to tell us how long it could take to complete.

Here are some examples of development activities (and questions) that we can learn more about with a development spike:

  • Can we implement two-way SMS in our current infrastructure?
  • Can we reduce costs and time by using an open source charting package (or is it better to roll our own)?
  • How difficult will it be to upgrade to Ruby on Rails 3?
  • Can we do multiple file uploads without flash?
  • Will it handle the load?
  • How do we make these two systems talk to one another?

A spike is typically throw-away code. Its only purpose (at least in this case) is to give us enough knowledge to state a fuzzy estimate with confidence.
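To give a flavor of what I mean (this isn’t code from any of the projects above), a “will it handle the load?” spike can be as crude as a throw-away script that hammers a staging endpoint and reports rough timings. The URL and request counts below are stand-ins:

    # Throw-away load spike: fire a batch of concurrent GETs at a staging
    # endpoint and report rough response times. Not a real load test --
    # just enough knowledge to state a fuzzy estimate with confidence.
    import statistics
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    STAGING_URL = "http://staging.example.com/search?q=test"  # hypothetical
    REQUESTS = 200
    CONCURRENCY = 20

    def timed_get(_):
        start = time.perf_counter()
        with urllib.request.urlopen(STAGING_URL, timeout=10) as resp:
            resp.read()
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        timings = sorted(pool.map(timed_get, range(REQUESTS)))

    print(f"median: {statistics.median(timings):.3f}s")
    print(f"95th percentile: {timings[int(len(timings) * 0.95)]:.3f}s")

Once it has answered the question, the script goes in the bin; only the knowledge (and the estimate) survives.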


I’ve seen it happen several times. The business requests an upfront, full-cost estimate for some feature or functionality. A business person writes something up – far simpler than the normal story-writing or requirements-drafting process. The development team reviews the write-up. Questions, clarifications and adjustments happen. An estimate is assembled and then KA-BLAM! – everyone is blown over by the sheer size of it.

When even the development team is left scratching its head saying: “Yeah, that looks way big to us too, but … ” you know there are problems bigger than whatever comes after that ellipsis.

Granted, there are mitigating circumstances. The requirements – such as they are – may delve into a new and unknown domain. The technology may be novel to both the team and the corporate IT ecosystem. But even in these cases, if the team cannot mount a strong justification for an eye-popping estimate, you have to ask whether there are other issues lurking in the shadows.

In these cases, the two most common issues I’ve seen are:

1. The delivery team lacks confidence in its own technical ability (due to a skills mismatch, lack of training, or other reasons). This issue goes beyond the time it might take to ramp up on new technologies, adjust to a new domain, or proof a novel design approach. Teams can readily explain all this in their estimates. But an intractable problem arises – with a huge estimate in tow – when the development team fears that it’ll never fully tackle a new technology, domain, or approach. Or, sometimes, it’s an unconquered legacy technology, domain, or approach frustrating future development.

2. The delivery team lacks confidence in its business partner’s ability to honestly collaborate and openly negotiate with the team. It’s okay if the business partner doesn’t get the requirements solidified on the first, second or third try. Agile teams know this can be difficult, and agile processes are geared to accommodate it. However, when a customer repeatedly demands prompt and accurate delivery on half-baked requirements, even an agile team will take a tanker-truck of spray foam to the delivery estimate.

While the causes are dramatically different, the end result for each issue is the same. The team’s huge estimate reflects an inability to adjust and adapt. In one case the team lacks confidence in its own ability; in the other case the team lacks confidence in its customer. Regardless of the cause, these are issues that need to be addressed, as they impede far more than the team’s ability to deliver an estimate.

Sometimes a huge estimate is just a huge estimate. Sometimes it’s something much more. Next time you see a jaw-dropper of an estimate wrapped in flimsy explanation, it might be worth taking the time to find out why.


Most project managers want to track actual effort and dollars toward the completion of their projects and deliverables. The goal is obvious and laudable. By knowing the actual cost of something we can provide more precise estimates the next time around.

However, I don’t find actuals useful. I have little faith in the quality of the data that gets collected. Low-quality data is, at best, not useful. At worst, it will lead you to wrong conclusions and bad decisions.

Four issues seriously degrade the reliability of actuals.

  1. We, as individuals, remember pain. And we are much more inclined to remember when we took 100 percent longer to complete a one-day task than when we went 50 percent under on a two-day task.
  2. Professionals prefer producing results over pushing paperwork. Sometimes we don’t put any mental cycles toward tracking and remembering our effort against discrete activities. When the time comes to push out the paperwork, we can do no more than look back at our estimates.
  3. Sometimes we don’t work in sequence. Tasks get entangled. Priorities shift. We deviate from our plan to help other team members. The work gets done, but there’s no accounting for where the effort went. Again, our only fallback is our estimates.
  4. There is an expectation that each of us contribute a minimum number of hours per week, regardless of whether one is an employee, consultant or contractor. Will there be a problem if my actuals don’t add up to 40 hours (or more) a week? What about meetings, email, chores and helping other teams — where do those things go? No matter what reassurances you provide, too many of us will ensure that our weekly actuals total to the company work week, regardless of our actual effort toward shippable product.

Is it really worth spending your team’s limited resources on this?


Once you realize that you’ve put yourself in a deep hole, please please please stop digging.

In the past few years I’ve been involved in – or dragged into – countless discussions about teams that have incurred technical debt. And I must admit to initiating more than a few of these discussions myself. The funny thing is that the theme of these discussions is almost always the same: “How do we get the business and upper management to understand that this system really will implode if we don’t start prioritizing some of this debt?”

I cannot, right now, recall a single time when the conversation’s theme was: “How do we stop digging?” I admit, that must have occurred … at least once … but I cannot recall it right now.

Yes, there are real business decisions that cause us to incur technical debt. Technical debt often results from a prioritization ethos that focuses on low-estimate patches and sidelines large or even mid-sized features. It can also result from a focus on revenue generation that entirely ignores cost savings – for example, imposing routine manual activities that may be better handled by simple ad-hoc reporting systems or CRUD maintenance screens.

This said, every technical debt backlog I’ve seen has included – if not been outright overweight with – the following:

  1. Gaping holes in test coverage,
  2. Swaths of documentation-free code,
  3. Opaque data structures, and
  4. Duplicate methods and unnecessarily complex design.

All these issues (and more) result from team-level decisions, not business prioritization. Two months into a project these decisions cannot be undone without business consent. Moving forward, however, the team can stop making these decisions. Instead of decrying the existing technical debt, the team can first focus on disciplined processes that will reduce (if not stop) the team’s descent into further technical debt.

Clarifying and enforcing the team’s definition of done is the best way I know to quickly reduce the team’s creation of technical debt. Specifically, the team needs to come to an agreement about the documentation, testing, refactoring, and other activities that must be complete before a task, user story or feature can be declared done. The team and its individual members must also be mindful to account for these activities during estimation, thus ensuring that there is time to do things right the first time.
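Some of that agreement can even be enforced mechanically. As a sketch (the coverage file layout and the 80 percent threshold are my own assumptions, not a prescription), a team might wire a small check like this into its build so a story can’t quietly be called done while test coverage shrinks:

    # Sketch of a mechanical definition-of-done gate: fail the build when
    # overall test coverage drops below the team's agreed minimum.
    # The coverage.json layout and 80% threshold are illustrative assumptions.
    import json
    import sys

    THRESHOLD = 80.0  # agreed minimum coverage, in percent

    with open("coverage.json") as f:       # produced by the team's coverage tool
        report = json.load(f)

    coverage = 100.0 * report["covered_lines"] / report["total_lines"]

    if coverage < THRESHOLD:
        print(f"Coverage {coverage:.1f}% is below the agreed {THRESHOLD:.0f}% -- not done.")
        sys.exit(1)

    print(f"Coverage {coverage:.1f}% meets the definition of done.")

A gate like this doesn’t replace the conversation about what done means; it just keeps the team honest about the parts it has already agreed to.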

Simply put, the team needs to start to tackle technical debt by first enforcing internal discipline and best practices. Once these are established, the team may be surprised to find itself on much more favorable ground when it comes to tackling business-generated technical debt.


I’ve been using Ideal Days to help teams (and myself) plan work and measure progress since 1999. In most environments, this agile tool is far superior to any duration- or date-driven approach. Now, however, I much prefer the term Effort Days. Read the rest of this entry »
