Ideas

When Helping Is Really Helping

posted by Dave

"It's not the notes you play, it's the notes you don't play."

I've been thinking about this famous Miles Davis quote. When you're talking, doing, working - it's not just about the work you're doing; you have to consider the work of those around you. How does your work complement theirs? Listen to and watch your environment - what can you learn before acting? (Remember W.A.I.T.)

Have you ever been working on something when someone comes up and pitches in without any idea what you're trying to do?  I was standing on a high ladder recently, precariously balanced while trying to paint a high peak on my house. My friend came up behind me and said, "You're scaring me up on that ladder!"  Now, I appreciate the concern, but startling me and distracting me isn't going to improve my safety.  Or when I'm moving something heavy and someone grabs the other end of it without telling me they are about to change the weight I'm balancing. Or when I'm swinging an axe and someone steps in to hold the log steady -- OK, that didn't actually happen, but I hope I've made a point. Help isn't helpful when you're just *involved*.  Don't look for ways you can just "lend a hand" - look for ways you can optimize the work.

When you've been in a band long enough, you can predict your bandmates, anticipate where to contribute, and rely on tacit agreement built on trust. The steps above still hold, but they become quick, invisible, and allowed. That's the kind of team we all strive for and long for, and you build it through respect. Following the steps above is one component of demonstrating that respect.

Traits of Leaders

posted by Dave

Just for fun (OK, just for a staff development exercise I was doing at work once, but now it's purely fun.  So fun.)


I asked my network,

"List the top 3 attributes that come to mind when you define/ characterize *Leadership.* (Define leadership context in whatever terms you like.. i.e What makes a good leader? of business, government, peer groups, rats, . . whatever. )"

From 46 replies, here were the top traits:

What are yours?


(Here were mine:)

Outcome focused

Team Builder

Self-aware


Accountable was a top scorer in the poll. However, 'accountable' means 'held to account' - that's not an internal value; it's a standard imposed on you by others or by yourself. So I DQ'ed it. :)

Value Matching

posted by Dave

I have seen the above picture (or something like it) a few times in the past weeks...

If a PAYING client says, "Can you do it cheaper?" it means they don't see enough value in your product. But instead of giving them a half-assed (sorry, couldn't resist) product for less money, maybe asking more questions is in order.

Don't put the cart before the horse. (Sorry - again.) That is, don't settle the dispute by unilaterally cutting your product. Ask questions first to identify the value (the horse), and then negotiate the exchange (the cart).

The Largest Liability

posted by Dave

One of the largest liabilities a product organization often carries is inventory in the form of Work in Progress. (OK, accountants, I know that's going to draw some ire.)

Work in Progress (WIP), in operational terms, is resources being consumed (salaries, materials, storage space) that provide no value - until someone actually pays for the result.

Don Reinertsen wrote in The Principles of Product Development Flow  that if you're going to fix one production problem, focus on the cost of delay. Delay causes WIP as well as flow interruptions that reduce profitability.

So what causes delays?

Where in your current processes are there delays? How can process changes and agreements among team members keep projects moving forward so that they deliver quality products on time while optimizing resources?
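As a rough sketch of what "focus on the cost of delay" can look like in practice, here's one common interpretation - sequencing work by cost of delay divided by duration (sometimes called CD3, or weighted shortest job first). The feature names and numbers below are invented for illustration, not taken from the book.

```python
# Illustrative only: prioritize work by Cost of Delay Divided by Duration (CD3).
# Names, costs of delay, and durations are hypothetical.
features = [
    # (name, cost of delay in $ per week, estimated duration in weeks)
    ("checkout redesign", 20_000, 4),
    ("reporting module",   8_000, 1),
    ("mobile app",        30_000, 10),
]

# A higher CD3 score means more value is lost per week of waiting relative to
# the effort required, so those items should generally go first.
ranked = sorted(features, key=lambda f: f[1] / f[2], reverse=True)

for name, cost_of_delay, weeks in ranked:
    print(f"{name}: CD3 score = {cost_of_delay / weeks:,.0f}")
```

The point isn't the arithmetic; it's that once delay has a price, WIP stops looking free.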

When Listening, Think 'Sympathy' instead of 'Empathy'

posted by Dave

We talk often about empathy in product design and in our business interactions.  But is empathy really what we want to express?

Empathy claims, "I know what that's like." More likely than not, you do NOT know what that's like. You don't fully understand the other person's context: their history, their environment, their constraints, their resources, their perceptions, their other issues. You know what they have told you, which is less than 1% of the full problem. (How many times have you asked for advice on a business problem and thought, "Hmm, thanks for the advice, but you don't really get my situation"?)

As a trained early responder to natural disasters, one of the things we stress most when helping an affected community is to listen, but not say words like 'I know', 'It'll be OK', 'It's just stuff - it can be replaced', 'You're lucky to be alive', or 'I've seen worse.' Phrases like these invalidate a person's feelings and come off as a judgement about how they should be reacting to their situation. We ARE trained to use phrases like 'I'm sorry this happened to you' or 'Tell me about it.'

Of course, product design is not a hurricane.  But we can take the same approach to understanding someone's needs.

Listen not just to provide a response, but to lead you to the next questions. "What do you think is causing that feeling?" "What would help?" "What can we do next to begin to fix it?"   And sometimes, just listening is all you should do.

Quantifying Process Improvement

posted by Dave

(OK, maybe that title is misleading...) I'm thinking about ways to categorize process improvement efforts. After seeing some suggestions of the form "we need to... because...," I got to thinking that a more finite list of 'becauses' might help quantify issue impact in some common currency.

Examples:

(Easy ones:)

Less obvious, but perhaps of higher value - or perhaps just a refinement of the generic list above.  Sometimes quantifying cost is difficult, but it's easier to quantify:

Having a finite list enables us to look at a process improvement opportunity (that falls into one or more of those dimensions) and ask, "How WELL does addressing this issue improve that dimension?"  For example, maybe we want to fix the change control process (for whatever reason someone may have initially proposed doing that.)  How much potential do we have to improve Cycle time delays? What about Clarity? Does change control affect overproduction & inventory?

If we can quantify the extent to which a process improvement addresses each dimension, and we have weighted each dimension by how important it is overall (e.g., Customer Satisfaction tends to rank high; Consistency is helpful, but may not be seen as equally important)... then we have two numbers we can use to build an evaluation matrix: Criteria Weight and Opportunity Score. That helps us evaluate and choose the most valuable opportunities. (Think along the lines of a risk score: Severity * Probability.)
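Here's a minimal sketch of that evaluation matrix idea. The dimensions, weights, and scores are invented purely for illustration:

```python
# Illustrative only: Criteria Weight * Opportunity Score, summed per opportunity.
# Weights and scores are hypothetical, on simple 1-5 / 0-5 scales.

weights = {                       # Criteria Weight: how much the dimension matters
    "customer satisfaction": 5,
    "cycle time delays": 4,
    "clarity": 3,
    "consistency": 2,
}

opportunities = {                 # Opportunity Score: how well the fix improves each dimension
    "fix change control": {"cycle time delays": 4, "clarity": 3, "consistency": 2},
    "automate deployments": {"cycle time delays": 5, "customer satisfaction": 2},
}

def total_score(scores):
    # Analogous to a risk score (Severity * Probability), but summed over dimensions.
    return sum(weights[dim] * val for dim, val in scores.items())

for name, scores in sorted(opportunities.items(), key=lambda kv: total_score(kv[1]), reverse=True):
    print(f"{name}: {total_score(scores)}")
```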

Incidentally, there is a formal mathematical model for this kind of weighting and scoring of opportunities & solutions: the Analytic Hierarchy Process.
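For a flavor of how AHP turns pairwise "which matters more?" judgments into weights, here's a small sketch using the row-geometric-mean approximation of AHP's principal-eigenvector weights. The criteria and judgments are made up:

```python
import math

# Illustrative only: deriving criteria weights from pairwise comparisons, AHP-style.
# comparisons[i][j] = how much more important criterion i is than criterion j
# (1 = equally important, 3 = moderately more, 5 = strongly more). Judgments are hypothetical.
criteria = ["customer satisfaction", "cycle time delays", "consistency"]
comparisons = [
    [1,   2,   5],
    [1/2, 1,   3],
    [1/5, 1/3, 1],
]

# Row geometric mean, then normalize - a standard approximation of the
# principal-eigenvector weights AHP prescribes.
geo_means = [math.prod(row) ** (1 / len(row)) for row in comparisons]
weights = [g / sum(geo_means) for g in geo_means]

for name, w in zip(criteria, weights):
    print(f"{name}: {w:.2f}")
```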

Refining the numbered list above may help us find the best opportunities for improvement. 

“Doing” vs “Being” Agile

posted by Dave

Seems like every person I talk to about “Agile” (a word that’s been uttered so many times, I fear it’s lost its meaning) immediately wants to talk about product backlogs, sprints, standups, burndowns… i.e. Methods. They all want to talk about “doing agile.” But I’ve yet to find anyone with any interest in being agile… i.e. Approach.

In my talk, What the Business Expects from Agile, I used the example of building a bridge to illustrate this. Upon further rumination, I think it’s a great metaphor for why doing Scrum isn’t being agile.

Do the simplest thing possible.

Do the simplest thing possible.

Do the simplest thing possible.

That’s agile.

So here’s my bridge example:

Our project goal is to hold a circus.

We’re going to hold the circus on a patch of land. (Take it as a given – a done deal.) To get to that land, we need to get across a gully that’s deep and wide.

So – discarding a bunch of requirements for the moment about circuses, the current objective is to get us over to that patch of land. We’ve surveyed the area and decided that a bridge over the gully is the best way to get there.

Let’s build a bridge.

Do the simplest thing possible.

Get me across that gully so I can see the patch of land. Where are we going to put the tent? Where will we put concessions? I don’t know until I can see the landscape. Let me stand on it. Build me a simple bridge. Architecturally stable – I don’t want it to collapse under my feet. The weight requirement is that it needs to hold an adult human.

That’s when someone says, “Are you going to have elephants in this circus? You’re going to want a more sturdy bridge.”

“Going to,” I say (because I’m agile). Right now, I just want to get myself across.

“That’s a waste – you know you’re going to need a bigger bridge later, so why are you wasting everyone’s time building a small one now?”

Two reasons:

1. I don’t want to wait for the elephants to arrive to start laying out where the circus tent goes. I want to get over there now.

2. I may decide not to have elephants. Or to helicopter them in. Or to hold the circus somewhere else. Or not at all.

So now who’s being wasteful? We build a superstructure when we know we need one and know how we’re going to use it.

I know it’s hyperbole to say that Scrum isn’t agile. Scrum fits in very well with Agile. But Scrum does not equal Agile. You can build a ginormous bridge incrementally, using scrum-like tools … but you may be building the wrong thing and may have difficulty changing course.

Because agile isn’t about building the thing, it’s about getting from here to there.

For support, rather than illumination.

Measuring the Narrowgoals

posted by Dave

Metrics, metrics, why must you be so complicated?

I’ve always thought that measuring stuff was pretty easy. If you’re just starting out trying to “implement some metrics” – it’s super easy to start. Measure whatever you can measure. Don’t start with “what you think you want to measure, if only you knew how to collect that data.” Start with what’s easy to measure. What can you learn from it? Learn how to use data for insight.

And as you learn how to measure, you start to get a sense not just of what to measure, but to what degree of depth those measurements are useful. So the “if we only knew how to collect that data” excuse starts to fade.

Mechanics aside, I always thought that metrics were pretty straightforward – even as your organization matures and your measures become more complex, you’ve grown along with that complexity, and built measure upon measure. It’s still straightforward.

But every truth has a counterpoint.

You can use metrics to validate (i.e. prove you are right) and you can use them to learn. And if you are using metrics to learn, that inherently implies that you need to change what you measure continually. Look at it this way: You can learn what times people normally eat lunch by standing in a restaurant and counting the people buying lunch. But you have to track specific individuals to determine how likely people are to be repeat customers. Because wouldn’t you operate the business differently knowing that your busiest spike was at noon, but also knowing that all of your repeat customers typically show up at 12:30? (Yes, you would.)
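To make that concrete, here's a minimal sketch of the same visit log measured two ways. The data is invented; the point is that the first measurement only needs counts, while the second needs identities:

```python
from collections import Counter

# Illustrative only: one visit log (over several days), two different measurements.
# Each record is (customer_id, arrival time); the data is made up.
visits = [
    ("ann", "12:00"), ("bob", "12:00"), ("cara", "12:00"), ("dee", "13:00"),
    ("ann", "12:30"), ("bob", "12:30"), ("erin", "12:00"), ("ann", "12:30"),
]

# Measurement 1: when is the rush? A head count per time slot is enough.
print("arrivals by time:", dict(Counter(t for _, t in visits)))

# Measurement 2: when do repeat customers show up? This needs identities, not just counts.
per_customer = Counter(cid for cid, _ in visits)
repeats = {cid for cid, n in per_customer.items() if n > 1}
print("repeat-customer arrivals by time:",
      dict(Counter(t for cid, t in visits if cid in repeats)))
```

Same restaurant, same door, but the second question forces you to change what you collect.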

Where you are -vs- where you want to be

I had an epiphany today about some of our operational metrics. Yes, we are measuring the right objectives. Are we delivering what we say we’re going to deliver, on time, on budget, of appropriate grade and quality? Fine. But we’re measuring what we think we’re capable of. We set a target and expect to achieve it. How are we doing?

Well, it’s certainly valid to measure what we hope to achieve. But what about where we hope to improve? Obviously, we can pick something that we think we need to improve and start measuring the heck out of it until it has higher numbers and everything is fantastic. But that’s just validation. What about illumination? We also need to measure what we’ve never considered measuring – what can we learn from it? Not to say we should just start measuring everything that moves (or doesn’t.) But return to that infancy principle of “measure what you can.” What do we not see? What data looks like nothing? Because “nothing” may just indicate really, really poor performance.

Measurement for discovery requires that you find new territory to search.