Archive for the “Thoughts” Category

Just yesterday, Tuesday, our team was stuck on the wrong side of a very tight deadline. We needed to reach a significant milestone by the end of the day Friday. However, neither the team nor I had confidence that we would make it.

Now, just one day later, everything has changed. The entire team is confident that we’re going to complete this milestone with time to spare.

What happened?

Well, on Tuesday we already knew what the real issue was. A couple months back we decided to use a framework that was new to some members of the team. As it turned out, the learning curve was steeper than we’d realized.

Two developers are primarily responsible for the development work in this framework. Jim knew the framework when we selected it. Tim did not.

Jim had a lot of work on his plate, which he could likely complete by the end of the day Friday.

Tim had less work to complete, but much of his time would be spent learning enough about the framework to get his work done. Tim was confident that, with assistance from Jim, he could get his work done on time. But the framework is not well documented, so Tim – and the rest of the team – had little confidence that he would be able to complete his task by Friday on his own.

At noon on Tuesday we hit on an idea. What if we found an expert on the framework to remote pair with Tim on Wednesday and Thursday? That expert could be located anywhere in the country, or even the world. We could possibly kill two birds with one stone. We could hit our Friday deliverable. And we could get some much-needed training on the framework for Tim.

By 5pm on Tuesday we had a signed SOW with an expert in the framework who was available to remote pair over the next two days.

And now it is Wednesday — just 24 hours later. Jim is on target to complete his deliverables for Friday. And Tim, after a day of remote pairing, is likely to complete his deliverables by the end of the day tomorrow, Thursday.

Thanks to the power of the inter-webs, a team that collaboratively thinks its way through tough spots, and management that is willing to back outside-the-box solutions.



Shut up shuttin’ up.

I don’t like unsolicited emails. Period. But as I’ve tried to get the word out about ShowMojo I’ve acquired some measure of temperance. It’s not easy getting in front of customers sometimes — especially the ones you can help the most.

Nonetheless, some companies don’t get it and just go way too far.

I received a mass email this morning from a service that we pay for to promote ShowMojo. I have zero interest in these emails and dutifully clicked unsubscribe. I was happy to do it. I had no hard feelings against the service and understand that some of their customers want these emails.

I was truly dumbfounded by what hit my inbox next. I’ll call it the “Did you really mean to unsubscribe?” email. Choice quote from said email: “Was this a mistake?”

Yes. You’re so right. I accidentally hit the 5-point-font, off-white, no-underline “unsubscribe” link on your email, then successfully (but completely inadvertently) navigated your intentionally-misleading “subscription preferences” webpage obstacle course. Absolutely I want you to continue to junk up my email box. Stupid me.

But wait a minute. Quite irrationally, I asked my team if we currently use this particular service. The answer was no. Canceling was easy. And every $30 a month counts.

Oops. Stupid you.


I cannot imagine where we would be without automated testing, continuous integration, RSpec, Cucumber and all the myriad tools and techniques available to build dependable, quality software products. But this doesn’t mean issues and errors don’t slip through.

A few months back I implemented a practice that I had considered using for years. I’d never done it because of the pushback I received from the teams I worked with. I regret not having done it years ago.

Every time a user encounters an error in our current product (ShowMojo), everyone on the development team gets an email. Not just for the Red Rails Box of Death, but for the little things that don’t work as well. The JavaScript widgets that misfire, or fail to fire. And any time the system encounters an error, no matter how non-obvious it is to the user.

They may not be aware of it, but our users never encounter an error on their own. That error lands like a thud in every development inbox. It’s a thud because it’s unexpected. It’s a thud because it’s a call to action to the team: “Make it stop.”
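The mechanics of the practice are simple and framework-agnostic. Here’s a minimal sketch of the idea as a Rack-style middleware — the class names, notifier, and recipient list are illustrative assumptions, not ShowMojo’s actual code:

```ruby
# Illustrative recipient list -- in practice this would be the whole dev team.
DEV_TEAM = ["dev-team@example.com"].freeze

class ErrorMailer
  # Sketch of a notifier. A real app would send mail here
  # (e.g. via ActionMailer or Net::SMTP) instead of printing.
  def self.notify(error, env)
    DEV_TEAM.each do |address|
      puts "To: #{address} -- #{error.class}: #{error.message} at #{env['PATH_INFO']}"
    end
  end
end

class ErrorNotifier
  def initialize(app)
    @app = app
  end

  def call(env)
    @app.call(env)
  rescue StandardError => e
    ErrorMailer.notify(e, env) # land with a thud in every inbox
    raise                      # re-raise so the normal error page still renders
  end
end
```

Wrapping the app is one line — `app = ErrorNotifier.new(app)` — and from then on no exception reaches a user without also reaching the team.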

When we first started this practice, there were dozens and dozens of errors a day. We had to categorize, tag, and prioritize them. It took several weeks to reclaim our inboxes. And we had to do this while pushing out new features. So there was a constant tug-of-war between current user experience and adding new value.

But one day came the silence of no errors. And it felt good, because we knew that’s how our users must feel — even if they didn’t know quite how or why.

This isn’t the same as collecting errors or scanning logs, which can be easily forgotten or simply deprioritized. This might be difficult for huge products and strained teams. But this is something I’ll strive for with every new product, regardless of the initial resistance from the team.

It doesn’t feel right having the wrong answer for this question: “Do your users face your errors in solitude?”



Do you think you are a rockstar project manager? Can you roll out an agile process and leap the tangle of legacy waterfall hurdles without breaking a sweat? Can you walk unaided from a fight club thronged with hackers, cowboy coders, support junkies and alpha heroes? Want to prove it?

I’ve got the acid test for you.

Identify a small product that you have always wanted to build. Commission something that you could use on the job or at home. It needs to be something that requires more than just you to complete. It must require a budget, paid help, realizable value, clear goals, and a plan. In other words, scope and fund your own project.

Yes. You fund the project with your own money. That’s what makes this an acid test.

For me, for example, that something is to rebuild and productize a web application that has become a critical part of managing the rental properties my wife and I own. We have used two different prototypes over the last two years and I firmly believe this product captures a soon-to-be-ubiquitous element of property management. So I am taking the leap.

This PM acid test assesses whether you have the skill to contemplate and manage everything about your small project (and the product on which it is based). Either directly or via delegation you will need to:

  • Define your own requirements and break them into plannable features.
  • Identify an MVP (minimum viable product) and keep the project aligned to that vision.
  • Find and contract with the professionals who will complete the work.
  • Forecast and manage the development schedule.
  • Identify the technologies to use and the deployment environment.
  • Vet architecture and design decisions.
  • Review all completed deliverables from functional and technical perspectives.
  • Balance new features versus technical debt.
  • Do anything else a project manager, product manager, product owner, stakeholder or sponsor would do or delegate.

Of course you can get help on any or all of these activities. You can hire an architect to make decisions and review code. You can hire a QA professional to write scripts and run tests. But you still need to acclimate these professionals to your product and vision. And any additional hands will cut into your budget.

Even when delegating you still need to perform an informed review of all significant work products. Delegation without review is an elephant trap.

The only certainty around this acid test is that something will go badly. If you hire four contractors at least one will be a dud – you’ll pay someone to clean up that mess. Your own conception (and delivered specs) of the system will never be free of potholes – someone will need to go back and patch those, or dig up the road and start over. And your vision of the product will never be communicated so clearly as to avoid every misstep – that’s more rework still.

You will need to be on top of everything. Or you can sit back and watch Team Burn Rate blow half your budget in two weeks. Whenever anything goes wrong it will be your fault. You did not give clear requirements. You hired the wrong person. You changed your mind. You let one decision sit too long and made another decision too early. You let a technical issue sit unaddressed as it smoldered its way through your release timeline.

Can you complete a releasable version of your own, personally-funded product before you cut off the money supply because you can’t lose any more? That’s why it is an acid test.


Superflu weekend struck our family — and all the families in our daughters’ playgroup. Based on preliminary reports (and my own condition) this could stretch into superflu fortnight.

This reminds me to ask the question: Has anyone planned for sick days in February?

I spotted a trend a few years back that I blogged about in 2009. Now I’m convinced that February is the Northern Hemisphere’s worst month for sick days. And I do account for February sick days in the delivery commitments that my teams make.

Of course, there are other perspectives. I once had a manager who, when contemplating the likelihood of illness-reduced delivery in February, rebutted: Plan at the standard velocity. That’ll encourage the healthy employees to work overtime and cover the shortfall.

He was all about morale, that one.


I’ve been doing more than my fair share of deployments lately.

These deployments aren’t the one-click-then-beer variety you find on Heroku or in some seriously agile shops. These deployments are arduous day-long journeys, caravanning eight unique technologies, traversing multiple connected systems both up- and down-stream. And bad things do live in the water.

Seriously, it has been a lot of hard work and some difficult times reliving past mistakes. So here’s my list of freshened lessons that make big deployments suck less:

1. One group chat and one conference line. Don’t use email for the deployment team. It’s slow and terrible at threading conversations. And, somehow, people always get dropped off the address list – or appended to the cc field 37 times. A dozen individual chats and phone calls are terrible for anything but proving why Europe mothballed the semaphore back in the 19th century.

2. A deploy with fewer people is better. An all-hands-on-deck deploy may sound comforting. In reality, that’s just more people in the way, arguing over the best corrective path, and slowly burning out while standing at the ready. Deploys with fewer people – with clear responsibilities and fully prepared for the tasks of the day – beat mob deploys any day of the week.

3. Have your support people at the ready. Just because your core deploy team is fully briefed and prepared doesn’t mean you have all the expertise you’ll ever need. Know who else you’ll need if the fit hits the shan. Make sure that each member of this auxiliary group is ready to drop in at a phone call’s notice. Collect and distribute the best contact information for that auxiliary group to each member of your core group prior to deploy day.

4. Clear communication channels. Who owns outward communication and in what direction? People who need deploy status updates could include stakeholders, upper management, corporate support teams, and auxiliary team members. They will not all want or need the same status reports. (Hint: you don’t want to send a 100-line deploy plan every hour to your stakeholders.) You need to decide in advance who owns which channel, the frequency of status updates, and the content of those updates.

5. Clear decision-making. When your app server and the database stop talking the last thing you need is a team that descends into bickering over which server to kick first. A deploy will involve a variety of experts, but there should be a single person in charge who – once all voices have been heard – calls the shots.

6. Clear accountability. The entire deploy team should know who is responsible for what well before the deploy begins. For me, that means one owner for each discrete part of the deploy. This separation may be by system, technology or product subteam – whatever makes sense for the deploy. Finally, only core deploy team members own deploy tasks. Sometimes a non-team-member is responsible for completing a task, but we still want a core deploy team member accountable to see that it gets done.
