Is IT Collaboration an Unnatural Act: Part 3


I started discussing the organizational change implications of collaboration a couple of posts back, then introduced a specific case study in the last post.  Now I want to pick up that case study and look at how it played out.

The team rapidly fell into two camps – the enthusiasts (4 people) and the resisters (4 people). The enthusiasts included the team member who had dug in at the very start, laying out the framework and style-sheets, setting up quick links to glossaries, help aids, and so on. Aside from being perhaps the most technology-savvy person on the team, this individual had been an information architect in a former life, and really saw the benefits of managing the content at an atomic level in ways that could not practicably be done with MS Word. He saw this content management approach yielding huge benefits down the road in terms of keeping the report as a “living document” and reusing elements in all sorts of different ways. Another enthusiast was one of the senior executives on the team. This individual was not tech-savvy, but had some experience with blogging. He did display some resistance early on, but the ex-information architect hand-held him through the first couple of days, and he soon became self-sufficient. He also “got” the longer-term vision of how the document as a Wiki would deliver important benefits in the future. Two other team members found no real difficulty going through the learning curve, and quickly began using the Wiki for both writing and editing.

The resister camp had a very different experience. The main point of resistance was the learning curve. While it probably would have taken no more than a few hours’ effort to understand the basics of using the Wiki tool, most of these team members found even that more than they were willing to invest. Exacerbating this resistance was the fact that team members typically liked to use plane trips and time spent waiting at airport gates and lounges to do their writing. Now, you might argue that airports almost always have WiFi hot spots, and that you can write plain text in MS Word and paste it into the Wiki at a later point. This logic did not wash with the resisters – to them, writing and editing on the Wiki was simply not as convenient for busy, traveling people.

As the deadline for the first draft approached, something interesting happened. A glitch surfaced that made it impossible for some team members to sign into the correct area of the Wiki without administrator intervention. It was only on the day before – or, in some cases, the day of – the deadline that many of these members hit the problem, revealing that until then they had not even attempted to read or write on the Wiki. Their resistance was that strong.

Given this mixed experience – half the team had got through the learning curve and saw the benefits of the new approach, while the other half never got through it at all – the experiment came to a crucial decision point as the draft report was to be released to a broader audience for review and comment. Should the Wiki be opened up to the broader audience, or should the content be converted to a more traditional form (MS Word or Adobe .pdf)? Regrettably, against strong objections from the ex-information architect, the document was converted to MS Word and passed on to a professional writer for a final clean-up prior to review. The argument was that this was a “snapshot in time” view of the report, and that the Wiki could continue to be used for its original purpose. The reality was that as soon as the Word version was created and edited, version control was jeopardized, and the momentum behind the Wiki (fragile at this stage in the experiment) was lost. The experiment was quietly declared a success, and people quickly went back to the more traditional way of working – trading MS Word documents by email or by posting them to the collaboration hub.

In the next post, I will examine some of the lessons to be learned from this experiment.


Is IT Collaboration an Unnatural Act: Part 2


 

I posted recently on collaboration perhaps being an unnatural act for some activities in some types of organization.  I want to come back to this topic and look at it through the lens of one team’s experiment with a new IT-enabled (Web 2.0) approach to collaboration.  We will discuss the results and potential implications of this experiment.  I will do this over several posts.

The setting is a virtual team, geographically distributed but well-supported by an IT infrastructure including high-speed DSL or equivalent, VPN, laptop PCs, and so on. The team members were part-time on this project (mostly 10% or less) over a period of about 90 days. Most of the team members were reasonably IT-literate – at least competent users of MS Office products and the Internet. Most also traveled substantially and were in other meetings, so there were periods of time (sometimes whole days) when they did not have Internet access via their laptops. The team included senior executives and mid-level managers, and a couple of people who were external to the organization.

The project was of a type that most of the team members had previously experienced – even if this group had not all worked together before as a team. The project had a well-defined process, one worked out over several years of similar projects, and all the team members were familiar with and, perhaps more importantly, trusted the process – it worked well. Collaboration had always been important to this type of project, and the process followed a mix of individual knowledge work (interviewing, thinking, writing, exchanging ideas), in-person team meetings (very few of these), teleconferences (many), and some WebEx sessions for those points in the project when a broader base of participation was needed. One of the final stages of the project was the creation of a major report that would be distributed quite widely, both inside and outside the company, and would generally represent the culmination of the team’s work.

The report-writing part of the project called for an initial table of contents to be brainstormed and agreed – there was a standardized template for this – and for principles to be worked out for designing the non-boilerplate sections, which made up the bulk of the document. Individuals each agreed to take one or more sections, or to work in pairs – their common practice on such projects. Traditionally, MS Word documents and PowerPoint files for graphics, exchanged by Outlook mail messages, had been the primary tools used for this type of work. Recently, the team had begun using a company collaboration hub – there was still a great deal of learning going on, but by and large the hub had taken hold, and it was becoming a true part of the infrastructure. One of the newest capabilities available to the team was a Wiki.

The team discussed at length the implications of shifting one or more of the writing, editing and distribution activities for the report from MS Office tools and email to a Wiki. They reached a decision to try the Wiki as an experiment, and some principles were worked out about how they were going to do this. One of the team members took the initiative and jumped in, laying out the framework and style-sheets, setting up quick links to glossaries and help aids, even jump-starting most of the sections. He quickly figured out how to do some very useful things, such as creating a macro that would, at a click, pull all the sections together into a complete report, so any member of the team could quickly see how the whole report was coming along.
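
The post doesn’t say which wiki engine the team used, so here is a minimal, hypothetical sketch of the “pull all the sections together” idea, assuming a MediaWiki-style wiki that serves raw page text via action=raw; the wiki URL and section page names are invented for illustration.

```python
# Hypothetical sketch: assemble a complete report from individual wiki section pages.
# Assumes a MediaWiki-style wiki that serves raw wikitext via action=raw;
# the base URL and page names below are invented for illustration.
import urllib.parse
import urllib.request

WIKI_BASE = "https://wiki.example.com/index.php"  # hypothetical wiki
SECTION_PAGES = [
    "Report/Executive_Summary",
    "Report/Findings",
    "Report/Recommendations",
    "Report/Appendix",
]

def fetch_raw(page_title: str) -> str:
    """Fetch the raw wikitext of a single section page."""
    query = urllib.parse.urlencode({"title": page_title, "action": "raw"})
    with urllib.request.urlopen(f"{WIKI_BASE}?{query}") as response:
        return response.read().decode("utf-8")

def assemble_report(pages) -> str:
    """Concatenate all section pages, in order, into one document."""
    parts = []
    for page in pages:
        heading = page.split("/")[-1].replace("_", " ")
        parts.append(f"= {heading} =\n\n{fetch_raw(page)}")
    return "\n\n".join(parts)

if __name__ == "__main__":
    print(assemble_report(SECTION_PAGES))
```

In MediaWiki itself, much the same effect can be had with transclusion (e.g. {{:Report/Findings}} on a master page), which may be closer to what the team member actually built.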

In theory, the Wiki offered a host of advantages over MS Word and email. Writers and editors did not need to worry about turning “Track Changes” on or off – the entire history of changes, and who made them, was always visible. Team members could insert comments – either as inline text or as a box attached to an entire page or section. There would be no emailing, with its risk of messages being missed or stuck in the inboxes of busy, traveling people. The biggest benefits, it was believed, would come later. When the report was released, the whole thing could be made available as a Wiki, with all the benefits of hyperlinks within the document and to the outside world. The team believed this would be an advantage for people interested in reading or using the report. They also believed it would be an advantage for the team’s or company’s ongoing use of the material – the report as a whole, and individual sections or subsections that could more easily be enhanced over time, reused or re-purposed.

All in all, once they had decided to do the Wiki experiment, they jumped in with apparent enthusiasm. So, what happened? Stay tuned…

Recession and Business-IT Maturity


Just as a lot of CIOs were gearing up for a year or two of growth and innovation, many are now being told to “hunker down” for a period of austerity. The usual first victims – cut travel, cut training, cut anything that smells like overhead! That Enterprise Architecture initiative that was starting to pay off? Cut it back! The SOA pilots? Put them on a back burner. The Relationship Management training program? Put it on hold. That IT Strategy Refresh retreat? Let’s still do it, but cut it from 3 days to 6 hours, and do it in-house instead of at that resort hotel we were going to use.

Yes – the CIO has to be a good corporate citizen, but I find a common pattern among some of my CIO clients. Historically, when cost savings were needed, IT was the first to step up to the plate. In fact, smart CIOs used the pressure of cost cutting to rationalize, standardize and simplify their IT environments. There were plenty of cost savings to be delivered, and deliver they did! And they got themselves a leaner, meaner and more agile IT infrastructure along the way. The problem is that a few years later, when cost savings were needed again, the CEO first turned to his trusty CIO. “Mary, you remember how successful you were turning in that $70 million savings back in 2002? And the other $60 million you found in 2006? Well, I need another $50 million for 2008. OK?” What’s a CIO to do without limiting her career options?

I’ve been told by a few CIOs what they’d like to say: “Last couple of cost take-out rounds, it seems that I was the only member of the management team that played! In fact, several of the divisions got even fatter while I was wringing blood out of silicon stones! It seems that the thanks I get is to take some more out!” And, of course, the CIO well knows that previous cuts came out of aggressive rationalization and consolidation of systems and infrastructure. Those plays have been played, and can’t be played again in any significant way. The outsourcing plays that could safely be taken have been. If we go further into outsourcing, it’s not clear that we will get real savings, and it is possible that we will lose some core capability that we will later regret.

So, what’s a poor beleaguered CIO to do? The first part of my answer depends upon where your organization is on its Business-IT Maturity journey. If you are somewhere in the Level 1 Supply space, you almost certainly have lots of opportunities in rationalizing, consolidating and standardizing IT infrastructure and common applications. If you’ve not already done this, recessionary times provide wonderful air cover to take the draconian measures you’ve always wanted to take. If you are in the Level 2 Supply space, the options are somewhat more limited. This is typically a great time to really turn up the heat on demand management. Shift the cost-savings burden from the IT budget back to the business budgets where it belongs. But help your business partners strengthen their ability to build realistic business cases – ones that have teeth in terms of ongoing results tracking and accountability for results. It’s also a great time to turn up the heat on Enterprise-wide IT improvement activities such as Enterprise IT Portfolio Management and Enterprise Architecture. These activities will increase transparency into the work of the IT organization, and can help surface new opportunities for cross-enterprise leverage and consolidation. On top of that, look for opportunities to leverage Software as a Service (SaaS) and Cloud Computing plays. These are becoming more feasible by the day, and already have a meaningful and growing community of very satisfied customers, including some of the largest global companies.

If you are in the lofty Level 3 Business-IT Maturity space, I suspect the topic is moot – as a CIO, you probably aren’t being asked to give up your first-born. You are just being asked to continue being a careful steward of the firm’s resources, and perhaps to watch travel spending and things that might have unintended and questionable “optics”. As with the Level 2 folk, take a good look at SaaS and Cloud Computing – it is likely that there are places to take out some costs, or at least to continue to innovate and create value without large capital outlays. Other than that, because of all the things you did to move through Level 2, most of the belt-tightening now is business belt-tightening, and, of course, you will do your bit for that.

It’s easy to sit on the sidelines and, through a blog, advise CIOs on what and what not to do. I know the realities are much tougher. But I’ve also worked closely enough with some great CIOs over the years to know that knee-jerk responses to IT cost-cutting requests don’t always turn out as well as intended, and may just take you down a spiral of reduced IT services, leading to even more questions about the cost and value of IT, leading to a demand for further reductions, and so on. “Tears before bedtime,” as my very British wife is fond of saying about such situations. Having said that, let me give some free advice on what not to do.

  • Don’t relax your drive to increase Business-IT Maturity.  Don’t back off on the changes you are making to leverage the convergence of business and IT, and to increase the value realized from IT investments and assets.
  • Don’t back off on looking for opportunities to innovate and create new streams of business value – look hard at the Web 2.0 and SOA spaces to identify potential experiments that can be conducted quickly and cheaply, and might pan out in a big way.  If you think the odds of finding a big winner are 1 in 10, then you need at least 10 experiments going on (a quick back-of-the-envelope calculation follows this list).  Maybe you’ll strike it lucky and find 2 big winners!
  • Don’t see a recession as a time to cut back IT activity and shift into cruise control.  Assume that’s what your competitors might do, and take advantage of the climate (and the hungry vendors out there) to drive to higher value.
  • Don’t let the business-IT dialog get bogged down on “costs” as the focus.  It’s about business value.
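
To put a rough number on that 1-in-10 point: if each experiment really has an independent 10% chance of being a big winner, ten experiments give you roughly a two-in-three chance of at least one hit, and an expected value of exactly one winner. Here is the back-of-the-envelope calculation promised above (the 10% figure is just the assumption from that bullet):

```python
# Back-of-the-envelope odds for a portfolio of cheap experiments.
# Assumes each experiment independently has a 10% chance of being a "big winner".
p_win = 0.10
n_experiments = 10

p_at_least_one = 1 - (1 - p_win) ** n_experiments
expected_winners = n_experiments * p_win

print(f"P(at least one big winner): {p_at_least_one:.0%}")        # ~65%
print(f"Expected number of big winners: {expected_winners:.1f}")  # 1.0
```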

 

SaaS and the IT Organization Circa 2017


I’ve speculated before in this blog that Software as a Service is a significant force for change in IT organizations. A colleague just turned me on to a (new-to-me) blog called Think IT Services, and its latest post, Why IT Now Sees SaaS As A Savior. In that post, Jeff Kaplan cites several indications that SaaS is gaining acceptance among IT organizations.

I think that it’s difficult at this time to predict exactly how, and how quickly, the SaaS movement will spread, but it is clearly already a sourcing strategy worth considering. And I have no doubt that by 2017 (the time-frame this humble blog tries to consider) it will be a dominant form of sourcing.

I’d also urge IT leaders to formulate their SaaS strategies in conjunction with their SOA strategies. I’ve recently posted on the business implications of SOA (see Part 1, Part 2 and Part 3) that came out of the recent BSG Alliance multi-company research project on SOA Business Implications. I believe that SaaS, viewed in the context of SOA, is one of many new sourcing strategies, and that much of the future software and its sub-components that run our companies and agencies will be “pulled from the ether”. We will care far more about business functionality and results, and less about who writes it and where it runs.

I acknowledge that there are still many issues to be worked out, and all sorts of questions about security, privacy, and vendor business models, but I believe these will be worked out satisfactorily in fairly short order (i.e., over the next 3 years). It could also be, as Mr. Kaplan hints, that the economic climate into which we seem to have slipped will provide additional incentives for CIOs to be more open to SaaS possibilities.

Is IT Collaboration an Unnatural Act?



Much of the excitement around Web 2.0, Enterprise 2.0 and social networking as it penetrates corporations lies in the power of collaboration. There are many great tools out there that enable collaboration – in its many different forms. And there are many communities out there that want to collaborate. And then there are IT organizations, with their increasing need to work collaboratively – both among IT professionals, and with their clients and customers.

But, from my personal experience looking at IT collaboration at my clients, I’m wondering if there aren’t some anomalies around the typical IT organization, or associated with the typical IT professional, that impede collaboration. I don’t think it should be taken for granted that, because it makes sense to do certain kinds of work collaboratively, people will – even given the best tools in the world! In the corporate setting, for certain kinds of work and certain kinds of individuals, collaboration is an unnatural act. Of course, I’m not the first to connect the “unnatural act” descriptor to collaboration. And there’s a substantial body of research into collaboration – when and why it works well, and when and why it falls short.

Kenneth Crow, back in 2002, wrote a nice piece – an easy quick read – on collaboration.  He points out several actions on multiple fronts necessary for successful collaboration to take place:

  1. Early involvement and the availability of resources to effectively collaborate
  2. A culture that encourages teamwork, cooperation and collaboration
  3. Effective teamwork and team member cooperation
  4. Defined team member responsibilities based on collaboration
  5. A defined product development process based on early sharing of information and collaboration
  6. Collocation or virtual collocation
  7. Collaboration technology

Let’s take these one at a time and apply them to the typical world of the IT professional.

1. Early involvement and the availability of resources to effectively collaborate. From my experience, everyone in the typical IT organization is more than fully loaded with assigned “project” activity.  Over and above that, there are the “special projects” – often several – that people are expected to contribute to, but for which no time is budgeted.  Then there’s the question of “early involvement.”  Most IT operating models are not particularly set up for this.  Again, they are lean and mean machines that tend to engage “as late as possible” rather than “as soon as feasible.”  So, on point 1, IT may be a little disadvantaged on the collaboration front.

2. A culture that encourages teamwork, cooperation and collaboration.  Here I see a mixed bag, but mostly I do find that IT professionals, given condition 1, are collaborative by nature.  Often in my consulting work, I need to assign homework to sub-teams.  In most of my clients, I am continually and pleasantly surprised at how effectively team members who do not typically work together do collaborate towards meeting the project’s goals – as long as they have some amount of time allocated to it, they believe the goal is worthy, and they believe that there will be some sort of recognition (even if only a pat on the back) for their efforts.

3. Effective teamwork and team member cooperation.  Again, I see a mixed bag, but mostly positive behavior here.  IT professionals are used to getting things achieved through the efforts of a team.  As such, they are usually effective team workers.  The one downside is that they may not be so effective at managing the environment within which the team works – for example, tacitly accepting deadlines that are arbitrary and unachievable, failing to confront absent sponsors, failing to confront lapses in team members’ commitment to deadlines, and so on.

4. Defined team member responsibilities based on collaboration.  Here things can get squirrelly.  On the major projects to which resources are formally allocated, accountability and roles are usually clear.  On smaller or more casual projects, member roles and responsibilities are often not so well spelled out, and things can fall through the cracks.

5. A defined product development process based on early sharing of information and collaboration.  This is often where things really start to break down.  It’s a good news/bad news thing.  The good news – there is a ‘product development process’ (SDLC or whatever).  The bad news – the ‘product development process’ was not created with today’s level and scope of collaboration in mind.  In other words, most IT shops like formal processes, but most formal processes were designed and perfected in a less collaborative world.

6. Collocation or virtual collocation.  Usually not a problem except when collaborators are in other parts of the world – other timezones, other languages, or other cultures.

7. Collaboration technology.  Usually the technology is available.  The typical shortcoming here is the assumption that people know how to use it and don’t need training.  This is exacerbated by the IT professional’s ‘macho syndrome’ – “I don’t need no stinkin’ training!”  I have seen this become a real collaboration show-stopper!  This mix of ‘macho syndrome’ and a lack of free time to learn a new tool frequently gets in the way of collaboration in and around IT organizations.  Often, like the ‘shoemaker’s children,’ we sell ourselves short when it comes to basic infrastructural things such as training, community facilitation and content management.  As a result, just when IT should be leading the way with the use of Web 2.0 tools in the corporation, IT is instead frequently engaged, timidly, in collaboration experiments that seem almost set up to fail.

Are you using Web 2.0 tools?  Are your collaboration experiments everything they could be?  Do you feel that the IT organization is blazing the trail, and setting a shining example of collaboration at its most productive?  Letters on a postcard, please.

 

 

 

Business Implications of SOA: Part 3


The previous post and the post before that addressed the first three business implications of SOA.  Let’s now discuss the remaining two.  First, with SOA it becomes important to understand the distinction between “enabling” and “actualizing” a business process.  Actualizing a business process embeds IT automation elements such as workflow directly into the process.  With actualization, the relationship between a business process and its supporting software shifts from passive to active.  If the old paradigm of business requirements being “thrown over a wall” to IT, with business solutions “thrown back over the wall,” was ever appropriate, it is certainly not feasible as business processes and IT automation converge in the SOA world.  This byproduct of SOA means that a far more holistic approach to design must be taken, one that is driven first and foremost from the perspective of the total customer experience.
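
To make the enabling/actualizing distinction concrete, here is a minimal, hypothetical sketch. An enabled process lives in a manual or a flowchart that people follow; an actualized process is the executable workflow itself, where each step is a service call and changing the process means changing (or reconfiguring) the software. The service steps and their order below are invented for illustration.

```python
# Hypothetical sketch of an "actualized" business process: the workflow itself
# is executable, with each step delegated to a (stubbed-out) business service.
from typing import Callable, Dict, List

def check_credit(order: Dict) -> Dict:
    order["credit_ok"] = True          # stub for a real credit-check service
    return order

def reserve_inventory(order: Dict) -> Dict:
    order["reserved"] = True           # stub for a real inventory service
    return order

def schedule_shipment(order: Dict) -> Dict:
    order["ship_date"] = "2008-06-01"  # stub for a real logistics service
    return order

# The process definition *is* the workflow: reordering or swapping steps
# changes the business process directly, not just a document describing it.
ORDER_PROCESS: List[Callable[[Dict], Dict]] = [
    check_credit,
    reserve_inventory,
    schedule_shipment,
]

def run_process(order: Dict) -> Dict:
    for step in ORDER_PROCESS:
        order = step(order)
    return order

print(run_process({"order_id": 42, "customer": "Acme"}))
```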

This also has significant implications for both business and IT operating models.  For example, the customer experience, business process and software become so intertwined that they can no longer be effectively designed in silos – customer experience design, business process design and systems design become one integrated process.  Designing services that can be shared across business processes, versus those that are truly unique, becomes a key skill and an important discipline if SOA is to pay off over the long haul.  This also means that the traditional roles associated with “providing requirements” and “gathering requirements” become diffused, and no longer work as a clean business-IT-business hand-off.

Now, add to this mix the notion of external services being provided from outside the corporate firewall – performance monitoring becomes significantly more complex than in an internal-services-only environment.  All of this has significant implications for the future role of the IT professional, and for the organizational relationships between business and IT.  These are the kinds of forces that are shaping IT circa 2017.

Business Implications of SOA: Part 2


In the last post, we talked about the first two business implications of SOA.  The first is the rapidly evolving market ecosystem for “services in the cloud” and the increasing availability of plug-’n’-play “widgets,” all made possible by the underlying standards and methods of SOA.  This innovation is comparable to the one the early gun makers brought about with the introduction of interchangeable parts in the early 1800s.  The second business implication is the fact that business competitive advantage lies in things that truly differentiate a product or service in the customer’s mind, so it behooves IT organizations to direct their energy and resources at these differentiators and take advantage of the services ecosystem for all else.

Today, I want to tackle business implication 3 in the figure above – Outside-in Thinking is Essential – a point that comes directly out of implications 1 and 2.

The design of business services with SOA initially costs more than conventional application development.  Purposeful design typically takes longer than simply jumping straight into coding.  (I’m reminded of the classic cartoon showing a room full of programmers, with the project leader leaving the room, looking back over his shoulder and saying, “I’ll go up and find out what the user wants; the rest of you, start coding!”)

In order to get the best return on the up-front design overhead, services must be designed for future reuse and future value.  Resolving future ambiguity up front is what gives well-designed services their long-term value, at the price of short-term cost.  Designing services for future reuse requires functional stability (i.e., reducing or eliminating the need to modify existing functionality to accommodate changes).  Functional stability in turn requires a full market awareness of what your customers may want in the future.  Customers do not buy your products and services because they want your products and services – they want the capabilities and outcomes your products and services provide in order to get a particular job done, or fulfil a particular need.  If they can get those capabilities or outcomes in a new or better way, they will.
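
“Functional stability” is easier to see in a small example. The sketch below uses an invented service and invented field names: the contract is designed so that new capabilities are added alongside existing behavior rather than by modifying it, so consumers written against the original version keep working unchanged.

```python
# Hypothetical sketch of a service response designed for functional stability:
# new information arrives as new, opt-in fields; existing fields are never
# changed or removed, so consumers of the original version keep working.

def get_customer_profile(customer_id: str, include_history: bool = False) -> dict:
    profile = {
        "customer_id": customer_id,   # original fields: kept stable
        "name": "Acme Corp",
        "segment": "enterprise",
    }
    if include_history:               # later capability: additive and opt-in
        profile["order_history"] = [{"order_id": 42, "total": 1250.00}]
    return profile

# An old consumer, unaware of the new capability, is unaffected:
old_view = get_customer_profile("C-001")
# A newer consumer opts into the richer response:
new_view = get_customer_profile("C-001", include_history=True)
print(old_view["name"], len(new_view["order_history"]))
```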

The key design elements for reuse include:

  • Fit – allowing variations of services based upon differentiated service quality, readily customized based upon the customer’s profile.
  • Continuity – recognizing that the customer’s overall experience is often the linkage of multiple events, rather than just a single event.
  • 1-Stop – recognizing that the customer’s interaction with us should not be confined by what they buy and how they transact with us today.  Again, given the earlier point about customers wanting to get specific jobs done, rather than simply wanting our product or service, if your current offering is not ideally getting that job done, you may be able to refine your product or service into something that better meets that need.  Or, if the job they need done changes, you may be able to refine your product to meet the new need.  The loosely-coupled nature of SOA allows partners to connect and help companies provide a more complete fulfillment of the customer’s entire range of needs.  1-Stop design can build customer loyalty as well as stave off competition.

We will pick up on these SOA design elements in the next post.

Business Implications of SOA


It’s time (perhaps even long overdue!) to tackle the thorny topic of Service Oriented Architecture.  I’ve been preparing an executive presentation on this subject, drawing on one of our multi-company research projects, so it’s fresh in my mind.

The research was focused on business implications of SOA, as this was a perspective and context that the research team found was missing from many of the early forays into this space.  Certainly, there are many worthwhile experimental and pilot projects going on in the name of SOA.  These are mostly useful learning activities, but unfortunately, the way many of them are framed, they may well be missing the most important aspects of SOA, and the dramatic positive impact SOA can have on IT organizations and the businesses they serve.

The biggest mistake begins when SOA is perceived to be a technical issue, and is approached as a largely internal (to the IT organization) matter.  Buried in the bowels of IT, most SOA initiatives are likely to miss the point, and fail to achieve their full impact – or, at least, take far too long for that impact to be realized.

Let me begin with the major business implications our research identified.

Over the next few posts I will take each of these points in turn and expand upon them.

1. A Market-Driven Ecosystem is Evolving

I’d like to illustrate this point with a quote by Eric S. Raymond from his 1999 landmark paper, The Magic Cauldron:

“…software is largely a service industry operating under the persistent but unfounded delusion that it is a manufacturing industry.”

Disruptive forces are rapidly impacting the traditional business model of software vendors.  At the same time, thanks to the Open Source movement and changing perspectives of what is “proprietary,” companies are re-thinking how they go about automating business processes.  Of course, the reality has always been that companies never actually want to purchase software – they want the capabilities that have traditionally been provided through purchased software.  If they can get those capabilities faster, at lower cost, with greater efficiency and flexibility, they will do so – at least, once the fear of shifting to a new business process design and automation paradigm has abated.

The “magic” behind SOA lies in the underlying standards and protocols that have grown out of the Internet – standards such as XML, ebXML, SOAP, WSDL, SAML, WS-Security and BPEL.  In addition to these standards, methods such as Object Oriented Programming – with its concepts of loose coupling, autonomy, abstraction and composability – along with the principles of “service orientation,” at long last bring the promise of reuse and “plug and play” modularity to reality.  Together, these standards and methods provide for loosely coupled, coarse-grained, business-centric, reusable services.  Such services no longer have to be “hand programmed,” or even “assembled” from an in-house library of building blocks.  Now, and increasingly over time, they can be procured on the open market – often for free!
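
As a concrete, and entirely hypothetical, illustration of that loose coupling: the consumer below knows only the service’s published contract – its endpoint and its request/response shape. Whether the address-validation logic behind it is hand-coded, bought, or pulled from the open market is invisible to the caller. The endpoint and field names are invented, and a real service might just as easily sit behind SOAP/WSDL as behind plain HTTP and JSON.

```python
# Hypothetical sketch: consuming a coarse-grained business service purely
# through its published contract. The endpoint and payload shape are invented.
import json
import urllib.request

VALIDATION_SERVICE = "https://services.example.com/address/validate"  # hypothetical

def validate_address(street: str, city: str, postal_code: str) -> dict:
    """Call the address-validation service and return its structured verdict."""
    payload = json.dumps({
        "street": street,
        "city": city,
        "postal_code": postal_code,
    }).encode("utf-8")
    request = urllib.request.Request(
        VALIDATION_SERVICE,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read().decode("utf-8"))

# The caller neither knows nor cares who wrote the service or where it runs:
# result = validate_address("1 Main St", "Atlanta", "30301")
```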

Think about the implications for an IT organization, circa 2017 (this blog’s title), when there is a thriving ecosystem of business services – from tiny micro-widgets for activities such as name and address checking, to sophisticated business processes such as purchasing and supply chain management.  Imagine, for example, that someone in your IT organization had programmed a widget that would make it easy to show all your customer locations as pins on a map, with the size of each pin representing the volume of business you conducted with that customer this month (a sketch of such a widget follows the list below).  Imagine further that an IT-savvy business manager wants just such a capability.  How is he going to know you have one?  Chances are, he’s going to use Google (or another search engine) and locate such a widget in less time than it takes to make a cup of coffee.  The reality is that, at least today, the web is far more easily searchable than your internal object library.  And, of course, my hypothetical case that someone in your IT organization had already created such a widget is unlikely, so we are heading toward a reality where:

1. Business users will meet their own needs by “pulling services from the cloud” rather than asking their friendly (but way too busy!) IT professional.

2. A vast supply of business widgets will become available to these business users.
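
As promised above, here is a minimal sketch of the pins-on-a-map widget. It assumes the matplotlib plotting library and invented customer data; a real widget would render onto a proper map service, but the core logic is no more than this:

```python
# Hypothetical sketch of the "customer pins sized by monthly volume" widget.
# Uses matplotlib only; a real widget would draw onto an actual map service.
import matplotlib.pyplot as plt

# Invented customer locations (longitude, latitude) and this month's volume.
customers = {
    "Acme":    (-84.39, 33.75, 120000),   # Atlanta
    "Globex":  (-87.63, 41.88, 75000),    # Chicago
    "Initech": (-118.24, 34.05, 40000),   # Los Angeles
}

lons = [lon for lon, lat, vol in customers.values()]
lats = [lat for lon, lat, vol in customers.values()]
sizes = [vol / 500 for lon, lat, vol in customers.values()]  # volume -> pin size

plt.scatter(lons, lats, s=sizes, alpha=0.6)
for name, (lon, lat, vol) in customers.items():
    plt.annotate(name, (lon, lat))
plt.xlabel("Longitude")
plt.ylabel("Latitude")
plt.title("Customer volume by location (hypothetical data)")
plt.show()
```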

Now, I don’t believe this is bad news for the IT profession – quite the contrary, it is great news!  You see, at the risk of stating the blindingly obvious, not all business services are of equal value.  We can break down types of services based upon their contribution to value using ideas first articulated by Professor Noriaki Kano in the 1980s, through what is known today as the Kano Model.  Many services are simply ‘table stakes’ – they fulfil basic needs – they have to be done, but they don’t differentiate you in the marketplace.  Name and address validation, credit checks, and so on are table-stakes types of services.  Other services go beyond table stakes to meet performance needs.  They can, if done really well, provide customer value above your competitors’.  Finally, there are services that can “excite” the customer – deliver something they were not expecting, and don’t get from your competitors.  These are the highest-value, often innovative services that differentiate you in the marketplace.

So, the big opportunity, made possible through SOA, but not an automatic byproduct of using SOA unless you plan for it, is to focus the internal IT organization and resources on “exciter” services, and let the open market provide the basic and performance services.  Over time, the market will provide faster/better/cheaper “table stake” and “satisfier” services than you can or should provide internally.  Of course, I’m taking an overly simplistic position on this – focus the IT organization on differentiators, and obtain all other types of services on the open market if you can.  I recognize that life is not that simple.  But as a direction to follow, and a culture to establish, this perspective will take your SOA efforts in a potentially very different direction.

The challenges in moving in this direction are significant – not because of the technology, which is evolving rapidly, but because of “unwritten rules” that drive behavior.  (See earlier post on Unwritten Rules.) Typical ‘unwritten rules’ that conflict with this powerful new direction for service provisioning include:

  • People are measured on the performance of their function, not the total enterprise – for software developers, ‘performance’ is typically equated to ‘writing code.’
  • Taking a risk that has a bad outcome can be career-limiting.
  • Working with ambiguity is heroic, clarifying ambiguity is non-rewarding.
  • Satisfying your boss is more important than creating a great customer experience.
  • Implying that someone else does something better than us is taboo.

The real trick of getting the most out of SOA – indeed, of exploiting its transformative powers – will be in overcoming these (and similar) unwritten rules, and other more structural impediments such as IT funding models (which tend to be project-based and unfriendly towards infrastructure development) and governance models (which often position IT architecture governance as a policing rather than an enabling activity).

We will pick up on additional business implications of SOA in subsequent posts.

Managing IT Infrastructure vs. Platforms



I was in an interesting discussion with one of my consulting clients recently.  I was with a group of IT managers responsible for their firm’s shared IT assets.  This is a large, global enterprise that has been on an aggressive journey over the last 5 years to transform business-IT maturity.  By any measure, they have been successful – rationalizing and consolidating a patchwork of data centers, networks, systems and dispersed IT groups – some “official parts of an IT organization”, others “shadow IT organizations” operating outside of IT budgets and controls.

In essence, this group is responsible for the firm’s “infrastructure.”  We talked about the definition and meaning of “infrastructure” in order to get a handle on their scope of responsibilities, and how these might change over the next 3-5 years.  A typical dictionary definition of infrastructure is: “the basic facilities, services, and installations needed for the functioning of a community or society.”  For IT infrastructure, I find Professor Peter Weill’s definition especially useful:  “The base foundation of budgeted-for IT capability (both technical and human), shared throughout the firm as reliable services, and centrally coordinated.”

Let’s examine the keys to this definition.

  • Budgeted-for implies conscious analysis, planning and funding.  This is critical for IT infrastructure, given that it tends to be invisible until it breaks.  As a result, an effective IT infrastructure management group has a hard time getting funding for improvements – “If it’s working fine, why do you need more money?” is the typical refrain.  (Public infrastructures suffer the same fate – hence our bridges are falling down, and, as an Atlanta resident, I have to suffer continuous drought conditions while 20% of the water traveling from reservoirs to homes is wasted through leaky pipes!)
  • Both technical and human helps correct the common misunderstanding that IT infrastructure is all cables, computers and disc drives.
  • Shared throughout the firm as reliable services introduces the notion of ‘firmwide sharing’ which sets scope, and ‘services’ which takes us to the disciplines of Service Management, and frameworks such as ITIL.
  • Centrally coordinated tells us something about how IT infrastructure is managed – note that it’s not necessarily centralized management, but central coordination, implying Enterprise Architecture and Governance.

The big question for this group is how they should evolve over the next 3-5 years, given all the changes in the business ecosystem (Next Generation Enterprise, anyone?).  One of the key elements of IT’s contribution to business success and growth over the next few years is the notion of “platforms.”  A platform is a set of assets whose roles and connections are defined so that they can be configured in a variety of useful ways.  My company, BSG Alliance, recently kicked off a new multi-company research project, Platforms for Business Growth, so the research team has been giving a great deal of thought to IT-enabled business platforms for some time.

“Platform thinking” originated with manufacturers who wanted to build a variety of products using standard designs and interchangeable parts.  It then migrated to the software industry.  For example, as the Microsoft Windows operating system became popular, partners began developing products to work with the Windows platform.  Today, companies such as Amazon offer their ecommerce systems as a business platform, YouTube provides a platform for embedding video clips, and Apple’s iTunes/iPod has become a successful entertainment platform.

A key element of platform thinking is easy and open connection and collaboration with customers and suppliers.  Customers want to collaborate in the creation of customized products and services.  Business partners offer an ever-growing variety of services to leverage. A flexible platform is an engine for growth – because the business is more nimble and responsive, because it is better able to connect and collaborate, and because it maintains a larger portfolio of “options” for innovation and future action.

So, what are the differences between IT infrastructure and IT-enabled business platforms?  Building a business platform certainly depends upon a sound infrastructure.  But platforms also depend upon clearly defined and published specifications and ground rules for what assets do and how they connect.  Just as IT infrastructure requires the mastery of IT Service Management, IT-enabled business platforms require, on top of Service Management, the discipline of Product Management.  For many IT organizations, this is a new and unfamiliar discipline – one rooted in the disciplines of marketing, and therefore quite foreign to the world of IT.  I will get more into the distinctions between Service Management and Product Management in a subsequent post.
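
To make “clearly defined and published specifications and ground rules for what assets do and how they connect” a little more tangible, here is a minimal, hypothetical sketch: the platform publishes one small contract, and internal or partner “assets” that honor it can be plugged in and combined in different configurations. All names and the routing rule are invented.

```python
# Hypothetical sketch of platform thinking: the platform publishes a small,
# stable contract; any asset that honors it can be registered and combined.
from typing import List, Protocol

class FulfillmentAsset(Protocol):
    """The published 'ground rules': what an asset must do to plug in."""
    name: str
    def handle(self, order: dict) -> dict: ...

class InHouseWarehouse:
    name = "in_house_warehouse"
    def handle(self, order: dict) -> dict:
        order["shipped_by"] = self.name
        return order

class PartnerDropShipper:
    name = "partner_drop_shipper"
    def handle(self, order: dict) -> dict:
        order["shipped_by"] = self.name
        return order

class Platform:
    def __init__(self) -> None:
        self._assets: List[FulfillmentAsset] = []

    def register(self, asset: FulfillmentAsset) -> None:
        self._assets.append(asset)    # the connection rules live in one place

    def fulfill(self, order: dict) -> dict:
        # Trivial routing rule for illustration: the partner handles large orders.
        asset = self._assets[-1] if order["quantity"] > 100 else self._assets[0]
        return asset.handle(order)

platform = Platform()
platform.register(InHouseWarehouse())
platform.register(PartnerDropShipper())
print(platform.fulfill({"order_id": 7, "quantity": 250}))
```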