Category Archives: End User

Within the business, the end user is becoming a primary influencer on how, when and where IT is provided. The impact end users have on how IT is managed will continue to grow as the business looks to them to understand the true value of IT.

Congratulations, your IT might be less sick today

I came across an article in Computerworld titled “The Help Desk is Hot Again” articulating the revived popularity of the Help Desk. It explains that the Help Desk “serves as a vital liaison between employees’ mobile technologies and the networks, servers and applications that support them.” Help Desks certainly serve an important purpose; however, this positioning feels slightly askew. For most IT organizations the Help Desk is where you go when you have a problem and need help. Help Desks do not understand how IT consumers are experiencing IT and are certainly not a liaison. I can see how there is a logical leap from issue management to evaluating the health of IT, but do you go to the doctor when you are well?

Until recently, visibility into the consumer side of IT was not considered essential when measuring IT service availability. The assumption was that maniacally monitoring data center health provided enough data to show how effectively IT supported the business. For most organizations, IT availability and ‘end-user’ satisfaction are evaluated with metrics provided by the help desk, showing what went wrong and when. From the perspective of issues this may be acceptable, but it hardly provides an accurate view of how the business is using IT. It would be like asking a doctor “so, how healthy does the world look today?,” where the answer would be “It looks pretty sick”.

This whole situation has been exacerbated by the use of mobile devices, the growth in non-corporate cloud-based application sources and the influx of people entering the industry who were born digital.  These new market entrants have learned to become more self-sufficient than any generation before and would rather have the flu than call the service desk. Many of today’s mobile issues are ‘fleeting’ with performance being a variable impacted by increasingly complex and congested network connectivity. For many, it’s easier just to wait it out.  Does the help desk capture this experience? No.

So, if the objective is to understand how IT is used and experienced, then you don’t start from the data center. The starting place is the IT consumer. This requires more than a set of tools giving visibility from ‘the edge’; it will require IT support to organize and focus teams on IT consumer activity. Measuring experience means understanding how IT is used, when it is used and where it is used, not just when there is an issue. Capturing, monitoring and analyzing IT consumer activity allows IT organizations to assess the true IT business impact, regardless of where the user is, what they are using or where their applications are sourced.

This approach is not going to be easy for IT departments that have spent decades focusing on siloed data center elements and back-end application transactions. IT consumer activity monitoring is not an option. Users do not use one device, do not remain in one place and do not use just one application. IT innovation, mobility and IT consumer creativity will continue to push the limits of IT operations management, with those able to adjust their IT management focus benefiting from better IT decision making and business alignment.

The service desk must evolve to be a true high-touch solution, and this can only be done when it is also used to monitor how all IT consumers are experiencing IT. IT organizations that do not plan to focus on their IT consumers will be left struggling, trying to manage increasingly diverse IT needs using tools that provide a datacenter-centric application performance snapshot, stumbling their way towards the edge by trying to see through increasingly complex third-party service black holes.

if NASA monitored like IT operations, would they have made it to the moon?

In nearly every job I’ve had, IT monitoring has been somewhere, either core to my day job or peripherally around the edge. Even though monitoring has been with us for decades, it still attracts massive amounts of attention from IT organizations, vendors and venture capital. Red, green, yellow, yellow, green, red, how hard can it be? There have been major shifts in finding new ways to understand the health of IT, including SNMP monitors in the early 1990s and, more recently, the various flavors of APM products. For a software company to make a difference and succeed selling a product in this space, it really needs to innovate and provide something better. A lot better. So I get tired when people say, “monitoring, it’s done isn’t it?”

It’s not. Not by a long long way.

Gartner published a report in May 2013 titled Market Share Analysis: IT Operations Management Software, Worldwide, 2012 (ID: G00249133). It says the 2012 application performance monitoring (APM) market is over $2 billion, growing at 6.5%, with the availability and performance monitoring market (IT infrastructure monitoring) at $2.8 billion, growing at 7.6%. Even though these IT monitoring areas are considered separate market spaces, the ideal is to combine them, allowing IT organizations to understand the impact the IT infrastructure has on the applications and vice versa. When both areas are combined they become the largest IT management market segment, with over 25% of the $18B total market. To put this into perspective, the combined APM/availability and performance revenue (~$4.8B) is larger than configuration management, the second largest segment, by over $1B, and that segment is also growing at a slower rate (6.3%).

Large, small, service provider, telco, SMB or enterprise, everybody has monitoring, so the fact that it remains the highest growth IT management space is amazing. And even though it’s a huge market, it is not dominated by a few vendors; it is a highly fragmented space with dozens of vendors and hundreds of tools.

Monitoring remains one of the most fragmented IT management spaces, with tools from dozens of vendors ranging from free to hundreds of thousands of dollars. Remaining relevant demands constant innovation across many areas, including event collection, event consolidation, event processing, event reporting, ease of use, low complexity, high sophistication, product delivery, and product pricing and licensing. With the need to gain clarity on IT services while reducing the cost and effort of achieving it, better ways to monitor are constantly being sought.

all monitoring is not the same
When people think of monitoring, an image that comes to mind is NASA and the way it monitors a moon launch. Dozens of people intensely watching monitors, anxiously looking for irregularities and working closely with their colleagues to identify potential issues that may impact the success of the mission and the safety of the astronauts. Even though each person may have a different view of the health of the mission, collaboration between the team members ensures that a holistic view is understood at all times. Throughout the mission, as priorities change, so does what is monitored at each stage and how. In addition, the information displayed on the monitors is continually analyzed and correlated with other data with the objective of seeking out potential issues that the individual monitoring displays may not make clear. NASA monitors space missions with the assumption that something will go wrong, demanding an immediate response to remediate the problem and ensure the success of the mission.

putting too much emphasis on the tools
For decades IT professionals have used products to give them visibility into the health of the IT infrastructure, which is monitored in fragmented piece parts by disparate, non-collaborative teams, all providing different views on the health of IT. For many, monitoring is accomplished when resources are available and, unlike NASA, most IT organizations assume everything is fine, looking to monitoring to confirm a reported outage and to aid root-cause analysis.

IT organizations depend on tools to provide an understanding of the state of IT. Unfortunately IT continues to fragment and increase in complexity, driving organizations to employ more monitoring tools in an attempt to gain clarity on overall IT health. However, instead of making things easier to understand, this creates additional challenges, with each IT support organization providing increasingly different and potentially conflicting views on the health of the IT infrastructure. Some organizations using dozens of monitoring tools covering every aspect of their IT environment have no ability to clearly identify issues and the impact they have on the business. With each IT support team looking through different monitoring lenses, the ability to gain a holistic, trusted view becomes almost impossible.

avoid liability and attribute blame
When the business is impacted by an IT issue, many organizations bring together the different IT support teams to help identify what the issue was, how it was detected and how to avoid it occurring again. Even though senior IT executives do this to pacify the business and assure it of IT’s competency and value, each IT support organization will use its monitoring tools as evidence to prove either that the issue was not theirs or that it was identified and resolved in line with company policy and service levels. This behavior changes monitoring from a proactive, issue-avoidance practice to one used to prove innocence and assign blame.

infrastructure availability does not equal application availability
Routinely, IT support organizations use the statistics gathered by their monitoring tools to show effectiveness, IT availability and business value. Each IT component is monitored to a set of policies derived primarily from how each IT team associates value with the components. The traditional 99.9% availability objective is still used by IT operations as a way to show IT availability. Unfortunately the business does not equate availability with how each component is functioning. IT availability is measured by the performance and availability of the applications and the support the IT organization provides. These two viewpoints on how IT value is measured create confusion and conflict, with IT support teams unable to comprehend the fact that the business does not care about the individual health of each IT component. A business manager will assess the value of the IT organization based on the opinions and input of the people who consumed the IT resource, not on a mountain of confusing, irrelevant technical detail that conflicts with the IT consumer experience. In some cases this situation will drive the business to seek alternative IT providers for new applications and IT services.

how much are IT service quality problems costing business?
The reality is that while monitoring is employed in nearly every business that uses IT, it is not used effectively. While monitoring tools are designed to provide proactive warnings of issues, their effectiveness can only be realized when they are used to show business impact, augmented by an organization focused on proactive monitoring practices and collaborative teamwork. Being proactive requires more than just monitoring tools; it requires:

  1. an organization that actually seeks out issues
  2. information delivery mechanisms that the support teams will take notice of
  3. information delivered in meaningful ways, preferably associated with service levels and business impact
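The third requirement, delivering information associated with service levels and business impact, can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation; the service names, impact weights and event format are all hypothetical:

```python
# Sketch: turning raw monitoring events into business-prioritized alerts.
# Service names and impact weights below are hypothetical illustrations.

SERVICE_IMPACT = {          # business impact weight per IT service
    "order-entry": 10,
    "email": 5,
    "intranet-wiki": 1,
}

def prioritize(events):
    """Rank events by business impact, not by raw technical severity alone."""
    scored = []
    for e in events:
        weight = SERVICE_IMPACT.get(e["service"], 1)
        scored.append({**e, "score": e["severity"] * weight})
    return sorted(scored, key=lambda e: e["score"], reverse=True)

events = [
    {"service": "intranet-wiki", "severity": 5},  # severe, but low business impact
    {"service": "order-entry", "severity": 2},    # mild, but high business impact
]
ranked = prioritize(events)
print(ranked[0]["service"])  # order-entry outranks the "louder" wiki event
```

The point of the sketch is that a mild degradation of a revenue-critical service should surface before a severe failure of a low-impact one, which raw red/green/yellow dashboards cannot express.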

monitoring evolved
Even though monitoring continues to be updated, it’s an evolution, not a set of dramatic changes. In the 1990s the focus was on data center elements because, for many, that is where the majority of IT resources were. Over time the need to understand how IT resources were being provided moved monitoring from basic availability to measuring performance, along with a set of processes and best practices to ensure specific outages and IT service degradations did not recur. More recently monitoring has evolved in multiple directions. The dynamic nature of the IT infrastructure demands that monitoring keep up with constant change and business priorities. This demand has created a new set of monitoring tools that dynamically discover IT components, establish relationships through various communication methods and dynamically map, in real time, how IT resources are used in support of the changing needs of the business. The highly distributed and fragmented IT infrastructure created a demand for tools that can actively search and associate disparate data from disparate sources and then provide, through analysis, information on IT health that could not be achieved by more traditional monitoring approaches. And lastly, the way business consumes IT has forced many IT organizations to focus on the end-user experience. Only by focusing on how end users consume IT resources will the IT organization be able to fully understand and support the business.

Summarizing all this…
IT and business are synonymous. Monitoring IT like it’s a network and a bunch of servers is going to result in the business demanding more relevant and accurate service measurement, specific to application availability and performance and IT consumer experience. The critical impact IT has on business means executives continually evaluate the support and services provided by the IT organization and assess ways for improvement. For the business, IT value is easy to measure: availability, performance, responsiveness, flexibility and support. In addition, IT consumers have become major influencers of how IT services are evaluated, delivered and consumed, demanding a different view to understand the health of IT services. As IT consumers use IT resources beyond the corporate data center, the value of IT is assessed as an overall experience no matter where applications are sourced, what access methods are used or where support is located. The only way to fully understand how the business views IT services is to monitor how IT consumers use IT.

High volumes of disparate event data create confusion and conflict, demanding technology that consolidates, correlates and prioritizes issues in line with how the business consumes IT services.
IT organizations will still use tools that monitor specific IT elements, as these give specialists a deeper understanding and the ability to identify a problem’s root cause. However, these types of monitoring tools are best used as event sources feeding monitoring products able to consolidate, filter, correlate and prioritize issues in line with IT service delivery. Achieving this objective demands technology that can easily integrate and associate data into information relevant to both the IT organization and the business.
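The consolidation step described above can be illustrated with a toy example. This is a sketch only, assuming a hypothetical event format where each element-monitoring tool reports the component affected and a symptom; real event-management products do far more (suppression, topology correlation, time windows):

```python
# Sketch: consolidating duplicate events from disparate monitoring tools
# into single incidents, keyed on the affected component and symptom.
# The event dictionaries and tool names are hypothetical.

from collections import defaultdict

def consolidate(events):
    """Group events reported by different tools for the same component/symptom."""
    incidents = defaultdict(list)
    for e in events:
        incidents[(e["component"], e["symptom"])].append(e["tool"])
    # One incident per (component, symptom), listing every tool that saw it.
    return {key: sorted(set(tools)) for key, tools in incidents.items()}

events = [
    {"tool": "netmon", "component": "db01",  "symptom": "unreachable"},
    {"tool": "apm",    "component": "db01",  "symptom": "unreachable"},
    {"tool": "apm",    "component": "web03", "symptom": "slow"},
]
incidents = consolidate(events)
print(len(incidents))  # 2 incidents from 3 raw events
```

Two tools seeing the same database outage becomes one incident with two corroborating sources, rather than two competing tickets from two teams.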

a path to improving end user experience

I don’t believe anyone can dispute the growing influence end users have on how IT services are chosen, sourced and evaluated. This does not mean IT operations organizations are ready to fully embrace the end user as a specific focus. Many assume application transaction monitoring and mobile device software update support are enough, at least for the time being. The reality is that they aren’t, and treating the end user like peripheral hardware is not to their benefit. This is managing the situation, not enabling the end user.

Improving end user experience is not about keeping an eye on them or trying to support their mobile devices; it’s about removing IT barriers, reducing complexity and making them more self-sufficient and productive. This objective is best broken down into logical areas:

  1. Support
  2. Social Enablement
  3. Security & Resilience
  4. Productivity

Each area has a set of activities and objectives:

  • Support: Identify, address and report common/local issues, pre-emptive problem management and real-time end user IT status specific to their individual needs and priorities.
  • Social Enablement:  Social, communication and collaboration tools to foster and enable information flow between different users with common interests, goals and objectives.
  • Security & Resilience: End user and device authentication, content protection and data protection and recovery.
  • Productivity: BYOD enablement allows business to be conducted from any device and location. Users can download and are given access to applications, local resources and information on company facilities based on their specific needs and within company policy.

It is unrealistic to think the objectives for each activity can be accomplished all at once. They are only achievable if each activity has a path containing logical, measurable steps.  This is also needed as each activity can have ties to others (e.g. to deliver a level of support requires a level of security and resilience).

In the paper Path to Improving the End-User Experience the activities are explained and broken down into the five levels (undefined, reactive, proactive, service and business) providing objectives to assess the current end user environment and improve upon it.

A barrier to success is IT operations’ tendency to enable users from the datacenter perspective. If the end user is the focus then the starting point is the end user (do IT users care about the datacenter?). However, to show value a plan must have two perspectives, one for IT operations and the other for the end user. In the paper each level describes the activity and its value to both IT operations and the end user. This allows IT operations to associate effort and investment directly with end user productivity.

Improving end users’ experience and satisfaction, and making them more productive, increases a company’s effectiveness and makes it more competitive. It’s a no-brainer.

do IT users care about the datacenter?

The datacenter – the IT business hub. Or it used to be.

End users could not care less about it. What they care about is application availability and response times, and the ability to get IT access from whatever device they choose and from wherever they want. The business increasingly makes decisions on what applications are used, where the applications are sourced and who supports them. There’s no love affair between the business and the organization called IT operations because it’s not about technology; it’s about getting the job done. It’s not that datacenter availability isn’t important; it’s just not important to users. The business measures IT value against the quality of support and application availability and performance, not servers, storage and networks.

Some will struggle with how monitoring the datacenter does not equate to understanding and measuring business availability. It was not so long ago companies providing datacenter outsourcing services would have a huge display in reception with topology maps showing a red, green, yellow status of the datacenter infrastructure. I can only assume it was designed to show control and understanding because I’d argue the computer room could spontaneously combust and no-one would be any the wiser until the end-users reported problems accessing their applications.

How many times have you thought “I wonder if the servers are performing well today?” or “I hope my files are backed up and secure”? What you probably think is “email is slow, IT needs to fix it now” and, if data is lost or corrupted, “IT had better get it back now”. My point is this: the datacenter will continue to be critical to the IT organization responsible for managing it, not to the businesses that use it. For the business it’s all about the application, no matter where it resides or who manages it, and the fact that an application requires hardware and software to live is, from the user’s perspective, irrelevant. It’s assumed.

In late December 2012 Netflix had issues. The fact it was over a holiday period made the problem even more annoying. It was a Netflix problem, and Twitter lit up with customer feedback for Netflix. Netflix blamed the issue on Amazon Web Services servers and said Amazon was addressing it. So, that’s ok then? It’s not a Netflix application problem; it’s an Amazon server problem. It doesn’t matter if Amazon’s servers were the real problem: it is Netflix’s job to make sure their applications are not plagued by a weakness in server capacity, performance, architecture or design, no matter whom they decided to outsource this critical task to. Subscribers to Netflix do not pay Amazon.

It’s the same for any IT organization delivering IT application services whether they are internal or external to a business. Monitoring the datacenter to identify and solve issues is one thing – using the same element monitoring to try and demonstrate value to the business is another.  Managing the datacenter is mandatory, however using element based availability metrics as proof of IT business value and application availability is no longer acceptable.

From a business perspective the value of IT is assessed through the lens of its business users, not the datacenter. This will increasingly result in IT value being assessed from the end user to the application source, measured against service levels, which means datacenter components can go up and down all they like as long as it doesn’t have a detrimental effect on business service levels. With the growing trend to use applications from cloud-based service providers, who can tell where all the parts of an application are? Netflix is hardly unique, architecturally, in the way it provides services. As more applications are made available in the cloud, the location of the supporting infrastructure is likely to be in the hands of one or more additional cloud service providers.

So, who cares about the datacenter? The people responsible for managing it, developers, testers and business unit personnel who pay for capacity. For users and the business, it’s all about the application.

service intelligence transforms the service desk

For decades IT has struggled to understand how end users use IT. The only point of reference has been the service desk, the one place where the user community interfaces regularly with IT. However, it’s not easy to provide a view on end user IT value when all you have as reference are issues.

So, you call the service desk and you get the standard interrogation: a number of questions to help identify your issue and send it, with a degree of accuracy, to the right support team. Even though updates have been made to service desks for decades, the core capability, managing problems and incidents, remains the same (this situation is made clear by Chris Dancy and his example of ‘Form Based Work Flow’). Irrespective of how functionally rich your service desk is, tickets opened and problem resolution metrics are still used to show how effectively IT supports the business. This ‘suck less’ metric is not good. The problem with problems is that problems are not the same. And that’s the problem.

Things that prevent people doing their job are typically reported (e.g. connectivity or password issues), but things that are not show-stoppers and just annoying (e.g. sporadic performance problems, jammed printers, etc.) are not. For many it’s just way too much hassle. The reality is that end users suffer from poor application performance more often than any service desk log shows. The user will talk with their colleagues to make sure they are not the only victim, and possibly just wait, because it’s easier to assume IT operations knows about it or to wait for the problem to fix itself (e.g. less user traffic, moving to a different location or using a different device).

There is no place to go to understand the overall end user experience, leaving IT operations to assume that if there are no major issues then users must be fine. The problem with using the service desk as a way to determine user satisfaction is that it’s not a monitoring system. It simply logs incidents and manages them in line with established escalation and outage procedures. Infrastructure monitoring tools provide a view of the health of one datacenter (or one component type), and most APM tools provide a partial view of end user application performance. There have been attempts to provide end-user visibility to the service desk to create a more intelligent, business-aware solution, including self-help options, end user keystroke logging and control over end-point devices (primarily Windows). However, even with some of these capabilities on offer, the service desk remains a reactive incident management solution focused on supporting issues already impacting the end user.

As the end-user environment becomes more complex (agile application releases, cloud-based apps, BYOD, increased mobility, etc.), the ability for service managers to support the business will become harder, and internal datacenter performance metrics alone will not be relevant in a world where the IT user is using applications from disparate sources on a multitude of different devices. Service managers must be able to understand both what the business uses and how the business uses IT. The ability to understand end-user behavior will move the service desk from a passive incident reporting system into a solution that provides the IT support organization with visibility into how the business uses IT. This visibility will enable service managers to manage incidents more effectively and identify business trends that will impact the IT services provided to the business. Understanding how the business uses IT should enable service managers to plan accordingly in regards to how the support organization is staffed to provide service quality.

If you are not looking you will not find it

IT operations remains a reactive practice, hoping that technology will make it more proactive. The truth is, if IT operations is not focused on being proactive then it will remain in a reactive state no matter what tools are used. The same can be said for the service desk. For it to become a service intelligence solution also requires a change in how the service managers use it. Products that provide visibility into how IT is used also require service managers to take an active role in looking for trends that indicate something abnormal is occurring (e.g. people using an application on a specific device dealing with poor performance).

The path to intelligence

Service desks have yet to evolve into the intelligent solution I’ve talked about; however, forward-thinking IT organizations are already starting to think this way. It requires traditional organizational barriers between the service desk and IT operations to come down. A hybrid role is created that uses APM tools (primarily end user focused products – EUAM) to look for potential issues. The information is then passed to the service desk, automatically or manually, through the opening of a ticket and a dashboard at the service desk showing specific performance trends as they pertain to applications and end users. Even though the service desk and APM tools remain separate today, using them together should provide benefits once collaboration has been established between service managers and IT operations.

The value

  • End-user experience is tracked against service levels with tickets opened proactively when an end-user (or end-user group) experiences a degradation in service
  • The service desk understands the current end-user experience, the devices being used, their location, their normal activity and the applications being used providing greater visibility into how the business is using IT.
  • The service desk is made aware of the end-user experience no matter where the applications are sourced (locally, internal or external). This enables accurate incident ticket assignment.
  • End-user activity is available for ‘play-back’ to help understand and identify what was being done at the time an issue occurred enabling effective root-cause analysis.
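The first value point, opening tickets proactively when an end-user group's experience breaches a service level, can be sketched as follows. This is a hypothetical illustration, not a real service desk integration; the application names and response-time targets are invented:

```python
# Sketch: opening a ticket proactively when measured end-user response
# time breaches its service-level target. App names and thresholds
# below are hypothetical.

SLA_MS = {"crm": 2000, "email": 4000}  # response-time targets per application

def check_experience(samples):
    """Return proactive ticket descriptions for any app breaching its target."""
    tickets = []
    for app, measured_ms in samples.items():
        target = SLA_MS.get(app)
        if target is not None and measured_ms > target:
            tickets.append(f"{app}: {measured_ms}ms exceeds {target}ms SLA")
    return tickets

# CRM is degraded for this user group; email is fine, so no ticket is raised.
tickets = check_experience({"crm": 3500, "email": 1200})
print(tickets)  # ['crm: 3500ms exceeds 2000ms SLA']
```

The ticket here is raised by the measurement, not by a frustrated user calling in, which is the shift from reactive to proactive support described above.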

end user activity monitoring (EUAM)

The focus on end user activity monitoring (EUAM) continues to grow in importance due to the end user’s influence on how IT is used and how applications are sourced. Forward-thinking IT organizations recognize the value of gaining insight into end user activity, as it can enable more effective application support, improved IT service and more productive end users.

However, EUAM is not something readily embraced by end users, who consider anything that tracks and monitors their activity a personal infringement. In some countries privacy hurdles will need to be overcome, requiring the tools that provide activity monitoring to establish levels of interaction.

Highway cameras take pictures of cars breaking the speed limit, while all vehicles under the limit pass with no picture and no record of their presence. The same can be said for EUAM: the objective is to identify trends, abnormalities and performance degradations, not to track and record all activity. Today’s EUAM tools will understand what devices are used, their configuration, specified software loaded, application performance, activations and, where appropriate, location. However, unlike the speed camera, the job of EUAM is focused on enablement, not policing.
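The speed-camera idea, record only what is abnormal, can be sketched with a simple statistical baseline. This is one possible approach (mean plus a multiple of the standard deviation), not how any particular EUAM product works, and the response-time figures are illustrative:

```python
# Sketch of the "speed camera" idea: only response times beyond a
# baseline band are recorded; normal activity leaves no trace.
# The baseline figures and the mean + k*stdev rule are illustrative.

import statistics

def flag_anomalies(baseline_ms, observations_ms, k=3.0):
    """Record an observation only if it exceeds mean + k * stdev of the baseline."""
    mean = statistics.mean(baseline_ms)
    stdev = statistics.stdev(baseline_ms)
    limit = mean + k * stdev
    return [obs for obs in observations_ms if obs > limit]

baseline = [480, 510, 495, 505, 500, 490]      # normal response times (ms)
flagged = flag_anomalies(baseline, [502, 497, 2400])
print(flagged)  # only the 2400ms outlier is recorded
```

The two normal samples pass unrecorded, like cars under the limit; only the degradation worth investigating is kept, which is also what keeps the privacy footprint small.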

EUAM augments and enhances existing application performance monitoring tools by providing a ‘front-end’ understanding of how end users experience IT. It allows IT organizations to tie the end user experience to the ‘back-end’ data center application infrastructure. This can be incredibly powerful as it allows a full end-to-end view of the entire application interaction, from mouse click to database record retrieval.
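One common way to build that mouse-click-to-database view is to join the client-side timing with the matching back-end transaction via a shared request identifier. This is a sketch under that assumption; the field names (`request_id`, `total_ms`, `db_ms`) are hypothetical:

```python
# Sketch: joining a client-side (EUAM) timing record with the matching
# back-end (APM) transaction via a shared request id. Field names are
# hypothetical; real APM tools propagate such ids in request headers.

def end_to_end(client_spans, server_spans):
    """Merge front-end and back-end timings per request id."""
    backend = {s["request_id"]: s["db_ms"] for s in server_spans}
    joined = []
    for c in client_spans:
        db_ms = backend.get(c["request_id"], 0)
        joined.append({
            "request_id": c["request_id"],
            "total_ms": c["total_ms"],            # what the end user felt
            "backend_ms": db_ms,                  # what the datacenter saw
            "network_and_render_ms": c["total_ms"] - db_ms,
        })
    return joined

view = end_to_end(
    [{"request_id": "r1", "total_ms": 1800}],   # user waited 1.8s in total
    [{"request_id": "r1", "db_ms": 300}],       # database work took 0.3s
)
print(view[0]["network_and_render_ms"])  # 1500: most time spent outside the datacenter
```

The interesting output is the difference: when the back-end is fast but the user still waits, the problem lives in the network, the device or a third-party hop, exactly the territory datacenter-only monitoring cannot see.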

So what capabilities should be expected from an end user application monitoring solution? Certainly more than has been available for years, which is typically a combination of synthetic transaction monitoring, desktop management and end user issues opened at the service desk. EUAM provides real end user activity in one tool. Depending on how intrusive a company needs to go (or is allowed to go), the following EUAM capabilities should be considered when looking for the right EUAM product:

  • Real-Time Application Response Monitoring
    • Information in real-time revealing degradations in applications performance preferably before the end user sees the impact.
  • End User Behavioral Analysis
    • Information on how end users access applications, when the applications are used and even where the access is required.
  • Visibility through service providers, clouds and content delivery networks
    • End-to-end visibility of applications performance irrespective of where the applications are sourced and the cloud environments between the source and the end user.
  • Application Activations
    • Visibility into when and how long applications are used enabling IT support to schedule and plan IT operational activity more effectively.
  • Keystroke/Activity Logging
    • Increasing root-cause capabilities by allowing IT support to see what was happening on the end user device when an issue occurred.
  • Device Information (type, software revisions, configuration)
    • Ensuring that the end user has the required configuration to support effective application release processes and allowing more effective issue identification.
  • End user location (*if company policy and/or privacy laws allow)
    • Allows IT to track where end users access applications and on what devices. This helps with performance degradation issue analysis and root-cause analysis.

Recent coverage of EUAM:

http://apmdigest.com/end-user-experience-application-performance-management-bmc

http://www.bmc.com/products/euem/end-user-experience.html?intcmp=redirect_product-listing_end-user-experience

http://www.businesswire.com/news/home/20121029005618/en/Aternity®-User-Activity-Monitoring-Windows®-8


Where do you go when IT gets in your way?

When IT gets in the way of doing your job, where do you go for help? The service desk? Someone in IT operations? Google? Phone a friend? Big problems, such as application outages or the ever-common password issues, are typically covered. These are certainly an inconvenience, but they either get the right level of attention or are easy to fix.

But what about poor performance when getting mail on your phone at the airport, getting access to a printer, or being unable to connect to wi-fi in a company facility? These situations can be temporary, but they have no obvious path of remediation and they can totally ruin your day. It's the small stuff that is the hardest to deal with. Most corporate IT organizations are ill-prepared for this level of end user interaction, and the end user is hesitant to send an email to the help desk or spend 30 minutes queuing to talk to someone about what is considered a trivial, low-priority problem.

In the world of private IT use there is no central help desk or IT department, yet people have learned to deal with issues. One option is always available: send a complaint or problem description to whoever seems to be at fault, with mixed results; some complaints solve the issue, some do not, and some are simply ignored. Then there is search. Someone, somewhere must have had a similar problem. Even when this approach does not provide the exact answer, it will at least lead you to interest groups with any number of smart people willing to provide guidance. For a large number of reasons (e.g. company regulations) this type of activity is not something a business would readily adopt internally.

Managing issues through crowd-sourcing.

Applications that allow people to comment on services, products, restaurants, etc. have been available for years. Recently this capability has taken on a real-time aspect, where guidance can be provided through experience and observation. An example of this is the ‘human’ GPS, Waze (http://www.waze.com). For the few people on the planet who have not heard of this application, it allows drivers to share information on their mobile devices in real-time on traffic conditions (jams, police speed traps, accidents, etc.). This provides road condition awareness and allows the application to find you alternative routes.

Now, imagine using a version of this in business for IT. Going back to an example I gave at the beginning: you are at the airport trying to get email on your phone and it's not going well. This can make you feel a bit of a victim and make you think: Is this problem something temporary? Is it just me? Have I done something stupid? Has someone else?

What if you had an application that showed you the status of your applications and let you see whether anyone else is having similar issues, so you would immediately know whether it is a general email problem, a location problem or a device problem, and whether there are any workarounds? It would also let you see if the problem has been reported, report it yourself, add yourself to the affected list and track the problem. For the user it provides awareness and possible fixes. For the IT support organization it provides the ability to understand who is being affected, where they are and what they are using. With so many applications being sourced outside the data center, an avalanche of mobile devices used for business, and people constantly moving around while still trying to remain working, the only way to help end users help themselves is through crowd-sourcing applications that augment the business's IT operations management tools.
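The diagnostic value of such an application comes from simple aggregation. Here is a minimal sketch of the idea (all names and data are hypothetical): end users report issues tagged with app, location and device, and counting the reports tells you instantly whether it is a general problem, a location problem or a device problem.

```python
# Hypothetical sketch: crowd-sourced IT issue reports, aggregated
# to distinguish "just me" from a shared problem.
from collections import Counter

reports = [
    {"user": "alice", "app": "email", "location": "JFK airport", "device": "iPhone"},
    {"user": "bob",   "app": "email", "location": "JFK airport", "device": "Android"},
    {"user": "carol", "app": "email", "location": "JFK airport", "device": "laptop"},
    {"user": "dave",  "app": "crm",   "location": "head office", "device": "laptop"},
]

def diagnose(app):
    """Count where, and on what devices, an app's problems are reported."""
    matching = [r for r in reports if r["app"] == app]
    return {
        "reports": len(matching),
        "by_location": Counter(r["location"] for r in matching),
        "by_device": Counter(r["device"] for r in matching),
    }

summary = diagnose("email")
# Three reports, all at one location, across different devices:
# likely a location problem, not "just you" and not your phone.
print(summary["by_location"].most_common(1))  # [('JFK airport', 3)]
```

The same aggregation gives IT support the other half of the picture: who is affected, where they are, and what they are using.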

Power to the people.

Why IT cannot ignore the end user

Until recently, visibility into the end user world was not considered essential when measuring the availability of IT services. It was assumed that focusing on data center metrics provided enough information to show how effectively IT supported the business. For most IT organizations the lens on the business remains the metrics provided by the service desk. From the perspective of issues this may be acceptable, but it hardly represents how the business is using IT. It would be like asking a doctor “so, how healthy does the world look today?”

This whole situation has been exacerbated by the use of mobile devices and the growth in non-corporate cloud-based application sources. So how does an IT department understand how the business is experiencing IT when it no longer has the luxury of concentrating its attention on the corporate data center? As yet, there is no consensus on how to address this situation, leaving most to continue looking to their legacy IT infrastructure monitoring tools (see Infrastructure monitoring. How relevant is it?) supplemented with network performance tools and/or APM tools.

If the objective is to understand how IT is used and experienced, then you don't start from the data center. The starting place is the end user. This requires more than a set of tools giving visibility from ‘the edge’; it requires IT support to organize and focus teams on end user activity. Measuring experience means understanding how IT is used, when it is used and where it is used, not just when there is an issue. Capturing and analyzing this content allows IT organizations to assess the true business impact of IT irrespective of where the user is, what they are using or where their applications are sourced.

This approach is not going to be easy for IT departments that have spent decades focusing on siloed data center elements and back-end application transactions. However, end user activity monitoring is not optional. Users do not use one device, do not remain in one place and do not use just one application. IT innovation, mobility and end user creativity will continue to push the limits of IT operations management, and those able to adjust their IT management focus will benefit from better IT decision making and business alignment.

Those that don't will be left struggling to manage increasingly diverse IT needs with tools that provide only a data-center-centric application performance snapshot, stumbling their way towards the edge while trying to see through increasingly complex third-party service black holes.