How Good Are Your IT Capabilities, and How Good Do They Need to Be?


This will be the first in a series of posts about assessing the “goodness” of IT capabilities, both in terms of your current state (how good your IT capabilities are) and your desired state (how good they need to be).  We will get into the dimensions of “goodness” as well as assessment methods.

I’ve been designing and facilitating IT capability assessments for over 20 years, and have conducted hundreds of them – both as part of multi-company research projects that generated a substantial base of assessment data, and through individual assessments as part of consulting engagements. Over that time, I’ve developed a number of assessment principles I’ve found to be important.

The Process Is More Important Than The Results

There are several aspects to this.

  1. People don’t like being assessed, but they love being part of an assessment process!  By and large, people like to know how they are doing, especially from an organizational perspective.  But they are mistrustful (rightly so!) of consultants or other ‘agencies’ that come in and assess them or their organizations.  So, I’ve always taken an approach where I am a facilitator of a self-assessment process.  I bring the process (which the client and I may agree to modify to accommodate specific contingencies) and the experience to help them through it, and I act as an impartial ‘judge’ to resolve differences of perspective, opinion or interpretation.
  2. The process must be transparent.  If people don’t understand or buy into the process, they will never buy into the results!
  3. The process should be repeatable.  Like a meaningful scientific experiment, the process should lend itself to repetition with consistent results.  In fact, repetition over time may well be important to sustaining investment in capability improvement activities.  Too many assessments are conducted, discussed and then swept under the rug.  This is a travesty!  Not only is the assessment itself wasted effort, but it may also become that much harder, or even impossible, to get people to participate in future assessments.  “Why should I bother?  The last time we did this, it went nowhere!” is a fairly common refrain.

The Results Must Be Actionable

The results should let you know:

  1. What needs to be done to improve capability performance.
  2. Where the greatest urgency lies for capability improvement.
  3. What it will take to improve a given IT capability, and what benefit that improvement will deliver.

The Results Must Be Multi-Dimensional

This gets to the question of “goodness.”  I believe there are three important aspects of “goodness” as it relates to IT capability (I’ll make them concrete with a short sketch after this list):

  1. Performance – this gets to efficiency – what resources it takes to achieve a given result.
  2. Value – this gets to the effectiveness of an IT capability – what benefits are being derived from the capability.
  3. Health – the ability to perform and deliver value over time.  We’ve all seen heroics where, for example, a project team moves mountains in the final weeks of a project by working 20-hour days, seven days a week.  It’s a wonderful thing to behold; sometimes it’s necessary, and it may even promote ‘good health’ for the organization as people pull together and participate in a ‘miracle’.  But it’s not sustainable.  Expecting people to sprint when the course is a marathon is both dangerous and demotivating.
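
To make these three dimensions a bit more concrete, here is a minimal sketch of how a single capability’s current and desired state might be recorded and compared.  The 1-5 scale, the field names, and the gap calculation are illustrative assumptions on my part, not part of any formal assessment method:

    from dataclasses import dataclass

    # A minimal, illustrative model of one IT capability scored on the three
    # dimensions discussed above.  The 1-5 scale and field names are assumptions
    # made for this sketch, not a prescribed assessment framework.
    @dataclass
    class CapabilityAssessment:
        capability: str   # e.g., "Solution Delivery"
        performance: int  # efficiency: resources it takes to achieve a given result (1-5)
        value: int        # effectiveness: benefits derived from the capability (1-5)
        health: int       # sustainability: ability to perform and deliver value over time (1-5)

    def gap(current: CapabilityAssessment, desired: CapabilityAssessment) -> dict:
        """Per-dimension gap between current state and desired state."""
        return {
            "performance": desired.performance - current.performance,
            "value": desired.value - current.value,
            "health": desired.health - current.health,
        }

    # Example: current vs. desired state for one (hypothetical) capability
    current = CapabilityAssessment("Solution Delivery", performance=2, value=3, health=2)
    desired = CapabilityAssessment("Solution Delivery", performance=4, value=4, health=4)
    print(gap(current, desired))  # {'performance': 2, 'value': 1, 'health': 2}

The point is not the scoring mechanics (any scale will do) but that a single number cannot capture “goodness”; each dimension needs to be assessed, and tracked, in its own right.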

Process-based Assessments Only Go So Far!

We are all familiar with SEI CMMI-style maturity assessments.  These typically place a capability’s maturity at one of five levels:

  1. Initial
  2. Managed
  3. Defined
  4. Quantitatively Managed
  5. Optimizing

I believe maturity assessments such as these are appropriate for capabilities that are heavily process-dependent, such as IT operational processes – highly predictable, repeatable processes.  But, drawing from Henry Mintzberg’s discussion of standardization many years back (see his “Structure in Fives: Designing Effective Organizations”), not everything demands standardization of work processes.  If the goal is to make work consistent, repeatable, predictable and of high quality, there are three approaches:

  1. Standardize the work processes
  2. Standardize the outputs – i.e., the deliverables from the process
  3. Standardize the skills – i.e., focus on the people and their training

Typically, all three types of standardization apply to varying degrees – the mix being a function of the nature and complexity of the work you are doing.  For highly complex work (think brain surgery), the emphasis is on the people, which is why surgeons go through years of training, board certification, residencies, and so forth.  It’s no use handing an untrained person a detailed process to follow and expecting a quality result.  For work such as bridge building, the emphasis will be on the deliverables – blueprints, work breakdown structures and so on.  For routine, sequential work, the emphasis will be on defining the tasks to be performed and the sequence in which to perform them.  Ideally, the work can be so ‘routinized’ that it can be automated.  (Think data center operations and the shift over the years to ‘lights out’ data centers.)

The graphic below illustrates this concept.  Detailed processes are great at helping manage work that is routine and sequential in nature (which is one reason ITIL has gained so much traction in the last few years).  For work that is inherently collaborative and may require more visual enablement, standardizing on deliverables may be more appropriate (think discovery and solution delivery).  For work that is more complex and exploratory, training and performance support systems are more appropriate.

For more on the different approaches to standardization, see my post, “Are Your Processes Setting You Free?  Or Holding You Back?”

Please join me for the next post in this series, where we will drill further into assessment dimensions and processes.

 

Graphic courtesy of Take On Torah
