Monthly Archives: January 2013


service intelligence transforms the service desk

For decades IT has struggled to understand how end users use IT. The only regular point of reference is the service desk, the one place where the user community interfaces with IT. However, it's not easy to provide a view of end-user IT value when all you have as a reference are issues.

So, you call the service desk and you get the standard interrogation: a series of questions to help identify your issue and route it, with a degree of accuracy, to the right support team. Even though service desks have been updated for decades, the core capability, managing problems and incidents, remains the same (a situation Chris Dancy makes clear with his example of 'form-based workflow'). Irrespective of how functionally rich your service desk is, tickets opened and problem-resolution metrics are still used to show how effectively IT supports the business. This 'suck less' metric is not good enough. The problem with problems is that problems are not all the same. And that's the problem.

Things that prevent people doing their job (e.g. connectivity or password failures) are typically reported, but things that are merely annoying rather than show-stoppers (e.g. sporadic performance problems, jammed printers) are not. For many users it's just too much hassle. The reality is that end users suffer from poor application performance far more often than any service desk log shows. The user will check with colleagues to make sure they are not the only victim, then often just wait, because it's easier to assume IT operations already knows, or to hope the problem fixes itself (less user traffic, moving to a different location, switching to a different device).

There is no place to go to understand the overall end-user experience, leaving IT operations to assume that if there are no major issues then the users must be fine. The problem with using the service desk as a way to determine user satisfaction is that it's not a monitoring system. It simply logs incidents and manages them in line with established escalation and outage procedures. Infrastructure monitoring tools provide a view of the health of one datacenter (or one component type), and most APM tools provide only a partial view of end-user application performance. There have been attempts to give the service desk end-user visibility and create a more intelligent, business-aware solution, including self-help options, end-user keystroke logging, and control over end-point devices (primarily Windows). However, even with some of these capabilities, the service desk remains a reactive incident management solution focused on issues already impacting the end user.

As the end-user environment becomes more complex (agile application releases, cloud-based apps, BYOD, increased mobility), supporting the business will become harder for service managers, and internal datacenter performance metrics alone will not be relevant in a world where the IT user consumes applications from disparate sources on a multitude of devices. Service managers must understand both what the business uses and how the business uses IT. The ability to understand end-user behavior will move the service desk from a passive incident reporting system into a solution that gives the IT support organization visibility into how the business uses IT. This visibility will enable service managers to manage incidents more effectively and to identify business trends that will impact the IT services provided. Understanding how the business uses IT should also let service managers plan how the support organization is staffed to maintain service quality.

If you are not looking you will not find it

IT operations remains a reactive practice, hoping that technology will make it more proactive. The truth is that if IT operations is not focused on being proactive, it will remain reactive no matter what tools are used. The same can be said for the service desk. For it to become a service intelligence solution also requires a change in how service managers use it. Products that provide visibility into how IT is used require service managers to take an active role in looking for trends that indicate something abnormal is occurring (e.g. users of an application on a specific device suffering poor performance).
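To make the trend-spotting idea concrete, here is a minimal sketch of what "actively looking" could mean in practice: group end-user response-time samples by application and device, and flag any group whose average drifts well above a baseline. All names, sample values, and thresholds are hypothetical, not taken from any real product.

```python
from statistics import mean

# Hypothetical end-user samples: (application, device, response_seconds).
# The data and thresholds below are illustrative only.
samples = [
    ("crm", "laptop", 1.1), ("crm", "laptop", 0.9),
    ("crm", "tablet", 4.8), ("crm", "tablet", 5.2),
    ("mail", "laptop", 0.7), ("mail", "tablet", 0.8),
]

def abnormal_groups(samples, baseline=1.0, factor=3.0):
    """Flag (application, device) groups whose mean response time
    exceeds the baseline by the given factor."""
    groups = {}
    for app, device, secs in samples:
        groups.setdefault((app, device), []).append(secs)
    return {g: mean(v) for g, v in groups.items() if mean(v) > baseline * factor}

# Here only the crm-on-tablet group stands out as abnormal.
print(abnormal_groups(samples))
```

A real EUAM product would of course use rolling baselines per group rather than a fixed one, but the point stands: nobody reports these samples as tickets, yet the abnormal group is visible to anyone who looks.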

The path to intelligence

Service desks have yet to evolve into the intelligent solution I've described; however, forward-thinking IT organizations are already starting to think this way. It requires the traditional organizational barriers between the service desk and IT operations to come down. A hybrid role is created that uses APM tools (primarily end-user focused products, EUAM) to look for potential issues. The information is then passed to the service desk, automatically or manually, through the opening of a ticket and a dashboard at the service desk showing performance trends as they pertain to specific applications and end users. Even though service desk and APM tools remain separate today, using them together should provide benefits once collaboration has been established between service managers and IT operations.
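The automatic hand-off described above can be sketched very simply: an EUAM alert that breaches a service level becomes a proactive service-desk ticket. This is an assumption-laden illustration, not any vendor's API; the class, field names, and thresholds are all made up.

```python
from dataclasses import dataclass

# Illustrative service-level threshold (seconds); hypothetical value.
SLA_SECONDS = 2.0

@dataclass
class Ticket:
    summary: str
    affected_users: int
    priority: str

def ticket_from_alert(app, location, avg_response, affected_users):
    """Open a ticket only when measured end-user response time
    breaches the service level; otherwise do nothing."""
    if avg_response <= SLA_SECONDS:
        return None
    # Widely felt degradations are escalated; the cut-off is arbitrary here.
    priority = "high" if affected_users > 50 else "normal"
    return Ticket(
        summary=f"{app} slow ({avg_response:.1f}s) for users in {location}",
        affected_users=affected_users,
        priority=priority,
    )

t = ticket_from_alert("payroll", "London", 4.3, 120)
print(t.priority)  # high
```

The design point is the direction of flow: the ticket is opened by measurement, before any end user calls, which is exactly the barrier-crossing collaboration the hybrid role implies.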

The value

  • End-user experience is tracked against service levels, with tickets opened proactively when an end user (or end-user group) experiences a degradation in service.
  • The service desk understands the current end-user experience, the devices being used, their location, their normal activity, and the applications in use, giving greater visibility into how the business is using IT.
  • The service desk is made aware of the end-user experience no matter where the applications are sourced (locally, internally, or externally). This enables accurate incident ticket assignment.
  • End-user activity is available for 'play-back' to help identify what was being done at the time an issue occurred, enabling effective root-cause analysis.

is there IT guidance without bias?

In the post titled community vs. the analysts I wrote about how I believe IT organizations use social and analyst content. It's relatively easy to explain why companies use content from both sources; however, when looking for guidance or answers to IT business questions, is there anywhere that unbiased advice can be found?

Everyone has a list of favorite and disliked vendors and products. A bad experience with a product can taint a vendor's reputation and that of their entire portfolio. However, in the world of enterprise computing a bias doesn't have to be related to a product; it can be created by poor support, poor service, or a bad sales engagement. As an analyst it was common to hear a recommendation come under attack just because the client had a historical problem with one of the products or vendors suggested. So IT bias can attach to anything: hardware, software, a vendor, a product, an approach, an organization, a best practice, or even a set of standards. So where do you go if you are seeking advice with no bias? The obvious answer is the analyst community, but is it possible to be truly unbiased?


There’s no such thing as unbiased IT opinion.

Analyst companies claim to provide opinion with no bias, social community opinion is fueled by bias, and vendors have an obvious bias. All have bias; it's just that some is out in the open and some is hidden. Vendors and social communities do little to disguise bias, whereas analysts do everything they can to hide it.

Analyst bias can take many forms. Analyst firms are not equal: there are very large 'tier 1' companies and there are 'tier 2', 'boutique', or 'specialist' companies. For tier 2 analyst firms covering IT operations management, revenue comes primarily from vendors. Beyond basic research services this can take many forms, including paid-for product endorsements and sponsored primary research. This is blatant bias, and few would consider this type of content more than just interesting.

Tier 1 analyst firms deny any form of bias and, as corporations, may not overtly exhibit any. However, they need to demonstrate a complete grasp of the market, and this means establishing thought leadership to help mold how markets are viewed and addressed. This typically results in the creation of best practices, methodologies, terminology, and models. Any vendor wishing to be taken seriously and positioned correctly must conform to how the analyst company defines and articulates the market. If they don't, they risk being described and positioned in a way that may not be to their liking, and this will emerge in research, presentations, and client engagements.

Bias is normal and should be expected, so when evaluating and understanding content the bias must be factored into your thinking. Vendor content is designed to show products in the best light; social community bias is driven by the content providers, which explains its diversity; and analyst research contains bias based on the firm's belief systems and the individual analyst's experience. As a result, analyst research should not be taken at face value on the assumption that it's written from a neutral standpoint supported only by facts. Analysts are tweeting and starting to emerge in social communities, unshackled from their editorial processes and less policed and protected by their logo. So when comparing analyst content with that found in social communities and on vendor web sites, I recommend the following:

1. Understand how each analyst company defines the market and positions the vendors (even if you don’t agree)

2. Read multiple reports written by the analyst to understand their position on a number of related research papers (look for themes/consistencies)

3. Be aware of the analyst's history (the author's resume)

4. Search for the analyst's comments in social media (e.g. tweets, Facebook, LinkedIn, blogs)

5. Compare the analyst research with competing analyst firm research to get a different perspective

crowd sourcing and the self-sufficient digital native

IT savvy is no longer the exclusive domain of the IT organization. IT plays a pivotal role in many end users' day-to-day activity and is as natural as breathing in and out. Digital natives entering the market have led to the creation of a new type of IT user, one for whom self-sufficiency has become a way of life. Social IT activity rarely includes a support organization ready to leap to your aid in times of trouble; instead the user relies on support found through search engines, blogs, on-line documentation, and social collaboration. When IT-savvy digital natives enter the job market, their ability to deal with (or at least attempt to deal with) IT issues (e.g. connectivity, access, or file sharing) is significantly greater than that of people entering the market only a decade ago.

Working with digital natives, I find them to be more self-sufficient, and I believe IT problems can be solved faster if people are given the ability to solve them themselves. This emerging environment should change how users are enabled and supported. For example, if the end user is more self-sufficient, then service management tools should provide a lot more than a hotline support number. They could be enhanced with self-help, intelligent search, automated recovery and, importantly, crowd-sourced information. Crowd-sourced information could allow users to understand how IT is being experienced by their colleagues, while also helping the IT support organization understand the end-user experience and aiding root-cause analysis. This capability is especially important with applications sourced from diverse locations and the prolific use of mobile devices. The reality is that a service desk has no clue where you are or what you are doing, so when problems occur it's just the beginning. The only view of the IT service actually being experienced can be obtained from the end user and, increasingly, by the end user. All this information can be collected, analyzed, and delivered without a single communication with a datacenter.

Crowd-sourced application experience data provides a far greater understanding of overall end-user status, with far less complexity, cost, and effort than any traditional datacenter-centric IT management tool. Of course, it does not give the deep-dive information many APM tools provide, but in this case it's not just about application availability; it's about helping the end user become more productive and self-sufficient.

This is not something found in IT operations management today; however, the concept has been used in other types of application (e.g. Waze and GPS navigation), where crowd-sourced data provides work-arounds and options. For road navigation it could advise taking an alternative route due to an accident; for IT it could suggest using an alternative printer, or avoiding use of a mobile device in an area where performance is impacted.
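The Waze-style idea above can be sketched in a few lines: end users report the status of a shared resource, and the crowd data steers the next user toward a working alternative. The resource names, statuses, and scoring rule are all invented for illustration.

```python
from collections import Counter

# Hypothetical end-user status reports: (resource, status).
reports = [
    ("printer-3f", "jammed"), ("printer-3f", "jammed"),
    ("printer-4f", "ok"), ("printer-4f", "ok"), ("printer-3f", "ok"),
]

def best_alternative(reports):
    """Return the resource with the highest share of 'ok' reports,
    i.e. the one the crowd currently finds most usable."""
    ok, total = Counter(), Counter()
    for resource, status in reports:
        total[resource] += 1
        if status == "ok":
            ok[resource] += 1
    return max(total, key=lambda r: ok[r] / total[r])

print(best_alternative(reports))  # printer-4f
```

None of this touches a datacenter: the signal comes entirely from users, which is precisely what makes the approach cheap and what makes it complementary to, rather than a replacement for, deep-dive APM.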

What I have described is a future state, so for the time being digital natives are going to continue to find ways to support themselves, whether for their own personal needs or those of their employers.