Author Archives: Opsleuth

community vs. the analysts

Will there ever be a time when collaborative, organized social community content relegates analyst research to just another input, where the quality of community content is of equal or greater value than that provided by analysts? After all, analysts are not trained as analysts before they are hired. Most were IT professionals in IT organizations and vendors. However, putting on a leading analyst company badge transforms a person from someone with an opinion to someone who is paid to advise and guide IT executives on technology and business decisions. I do believe in the value of analyst information; however, I also believe there are many social community contributors with equally valid or better credentials to provide similar or better guidance and advice.

So what is it going to take for these two information sources to be treated equally? The main problem is that social community content is spread across many locations with many contributors. Boiling down social data can be a mammoth effort resulting in no definitive answer. Analyst companies typically take a position on any specific subject with limited alternative views. So even though the community continues to influence IT decisions, most executives will still take direction from the paid analyst services, and it is analyst company input and research that will be referenced to create vendor shortlists, justify investments and support project proposals and decisions.

The following sections compare social community and analyst content, influence and contribution.

Content (comments, white papers etc.) is where most of the differentiation occurs. Social content can vary wildly (structure, format, quality, terminology), which can make finding specific answers or statements on a subject difficult. Analyst content typically conforms to a standard format, making information easier to find (helped by there being a lot less content).

Content Update Frequency (how often new content is released). Social content is released, updated and commented on every second of every day and takes many forms: blogs, tweets, community site comments etc. Analyst content is released in chunks (~15-30 papers per year per analyst), and if you subscribe to their services you may also get access to the analyst through scheduled phone calls or email. As change in the IT industry occurs at an increasingly alarming rate, the social community responds immediately, whereas the only way to get an immediate view from an analyst is a phone call or email (or the hope that they keep their blog up to date).

Content Output Types (content delivery methods). Social content is delivered in every form possible (tweets, blogs, mail, video etc.) while analyst content has historically come in three forms (blogs, research documents and presentations). However, a change is happening in the analyst community, with some analysts running their own blogs and starting to participate in social site conversations. Giving away any detail, no matter how abstract, is a problem for analyst firms, who associate every word uttered by their analysts with a costed value. It's a balance of giving away 'just enough' to entice readers and followers to their pay-to-play services.

Content Inputs (feedback, comments). Social content is, err, social. Blog entries, tweets etc. all allow and promote feedback and general comments. A blog entry may create a conversation stream where the content is augmented, or spark a debate spawning alternative views and opinions. This type of real-time content is very healthy as it allows readers to draw their own conclusions based on how information is described, promoted, defended or dismissed. This social interaction is rarely embraced by analyst firms. Analyst content is not up for debate: it is their position, and if you dare hold another one, you are wrong. I always smile when I hear someone say "I am going to get the analyst to change their mind". Analyst companies may ask for input to aid their research; however, they do not seek alternative views for their output.

Bias (prejudice in favor of something, which may be considered unfair). Social content bias can be found everywhere and it's usually blindingly obvious. Arguably everyone has a view (bias), and if a social site or individual has similar views and beliefs to yours then you are more likely to follow them, read their blogs, view their videos and buy their books. Following someone you rarely agree with becomes too frustrating, and only the young have the time (and energy) to continually argue in public forums. There is an IT myth: analysts don't have bias. They do, because analysts are human. The bias doesn't have to be towards a vendor or product but towards a belief system or a hidden agenda or goal. An analyst's 'preference' may not be easy to find, as it will be hidden beneath a thick layer of impartiality.

Cost. Social content is free. Analyst content isn’t.

Decision to Participate (who chooses IT market information sources). Anyone can choose to be in a social community; only companies choose to use analyst services.

Decision Strength (the content's influence and 'weight'). Social content can be taken into consideration when making an IT decision, but it is rarely used to justify an investment or make a change. Social content is used to influence and build awareness. Analyst content is often used to prove a point, show a trend, justify an investment or identify IT purchase options.

Content Ownership and References (identifying content authors, contributors and references). Social content can be tracked to the author, content providers etc. Few would expect to build credibility by including "a big New York bank told me…" to support their point. The social community wants names and addresses, and if you cannot provide them then your comments are dismissed. The analyst community often uses generic, non-specific references to give its research and assumptions credibility and weight. Whether the reference is real or not is protected by 'client confidentiality', so we are left to take their word for it.

Followers/Subscribers. Social communities can have many thousands of readers and contributors spread across hundreds of community sites. Only a few analyst companies can claim thousands of subscribers.

Content Update Awareness. The speed at which new information is delivered drives how often followers and subscribers visit community sites. Leading social sites are visited very frequently, daily in most cases, with update alerts delivered through social messaging. Social sites are not standalone entities and can reference other sites, creating a social content web. Analyst content updates are far less frequent, resulting in subscribers visiting the site only when an email alert is sent or when a project or decision requires analyst content.

Confidentiality. IT organizations seeking generic, non-specific information can usually find something in community sites. The problem is that many company IT challenges are sensitive and require a high degree of confidentiality. When this is the case, the social environment is hardly the place to air your company's business problems. Analyst companies protect their client confidentiality, allowing highly confidential issues to be discussed and addressed.

So what does all this mean?

Basically, there is far more content in social communities; it is updated in real time, used very frequently, anyone can participate, and it's free. Analyst content is delivered far more slowly, used when needed, and comes at a high cost. Social content offers little privacy, and so ends up providing peripheral advice and guidance that influences IT decisions. Analyst content and interaction are protected by client confidentiality agreements and are used to justify and make decisions.

For social communities to start having an impact on business analyst services, a number of changes are required. Social sites will need to focus more, filter their content better and provide ways to protect privacy. This is not what most were designed to do, and I'd argue it would take the edge off the value they provide. So, the time has not yet arrived when analyst firms are under threat from the social community: there is simply too much diverse content and no confidentiality. Companies want guidance, advice and answers to specific questions. They do not have the time to read and filter a mountain of conflicting information. However, social sites such as servicesphere.com have already attracted a very large and loyal following due to the quality of the content, the ways the information is delivered (and made consumable), the credibility of key contributors and the relevance of the material.

Analyst firms are aware of the content tsunami on the horizon that will start to erode their core research values. Analysts are already starting to be seen outside the strict confines of their employers' websites, providing opinions without charging for them. This will continue to be a challenge as they try to balance what to give away and what not to. The art form appears to be saying just enough to create interest and then directing followers and readers to the costed content.

Another change that may occur soon is that companies will see the need to search, filter and analyze social content, making it usable to aid business decisions. Today, companies have people focused on PR (Public Relations) and AR (Analyst Relations). Embracing and leveraging the wealth of content in the social community may require companies to also invest in SR (Social Relations).

 

end user activity monitoring (EUAM)

The focus on end user activity monitoring (EUAM) continues to grow in importance due to the end user's influence on how IT is used and how applications are sourced. Forward-thinking IT organizations recognize the value of gaining insight into end user activity, as it enables more effective application support, improves IT service and keeps end users productive.

However, EUAM is not something readily embraced by end users, who consider anything that tracks and monitors their activity a personal infringement. In some countries, privacy hurdles will need to be overcome, requiring activity monitoring tools to offer configurable levels of data collection.

Highway cameras take pictures of cars breaking the speed limit; all vehicles under the limit pass with no picture and no record of their presence. The same can be said of EUAM: the objective is to identify trends, abnormalities and performance degradations, not to track and record all activity. Today's EUAM tools understand what devices are used, their configuration, the software loaded, application performance, activations and, where appropriate, location. However, unlike the speed camera, the job of EUAM is focused on enablement, not policing.

EUAM augments and enhances existing application performance monitoring tools by providing a 'front-end' understanding of how IT is being experienced by the end user. It allows IT organizations to tie the end user experience to the 'back-end' data center applications infrastructure. This can be incredibly powerful, as it allows a full end-to-end view of the entire application interaction, from mouse click to database record retrieval.
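As a rough sketch of how that end-to-end tie-up can work (the transaction ids and timings below are invented for illustration, not any particular product's data model), a front-end measurement and a back-end measurement can be joined on a shared transaction id to show where the time went:

```python
# Hypothetical example: correlate front-end (EUAM) and back-end (APM)
# timings by a shared transaction id to get an end-to-end view.
front_end = {  # transaction id -> ms from mouse click to rendered response
    "tx-1": 950.0,
    "tx-2": 430.0,
}
back_end = {   # transaction id -> ms spent inside the data center (app + DB)
    "tx-1": 120.0,
    "tx-2": 95.0,
}

for tx, total_ms in front_end.items():
    server_ms = back_end.get(tx, 0.0)
    # whatever is not data center time was spent in the network or on the device
    outside_ms = total_ms - server_ms
    print(f"{tx}: total={total_ms}ms, data center={server_ms}ms, outside={outside_ms}ms")
```

In this made-up case, tx-1 spends 830 of its 950 ms outside the data center, which is exactly the kind of insight a datacenter-only view would miss.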

So what capabilities should be expected from an end user application monitoring solution? Certainly more than has been available for years, which is typically a combination of synthetic transaction monitoring, desktop management and end user issues opened at the service desk. EUAM provides real end user activity in one tool. Depending on how intrusive a company needs to go (or is allowed to go), the following EUAM capabilities should be considered when looking for the right EUAM product:

  • Real-Time Application Response Monitoring
    • Information in real-time revealing degradations in applications performance preferably before the end user sees the impact.
  • End User Behavioral Analysis
    • Information on how end users access applications, when the applications are used and even where the access is required.
  • Visibility through service providers, clouds and content delivery networks
    • End-to-end visibility of applications performance irrespective of where the applications are sourced and the cloud environments between the source and the end user.
  • Application Activations
    • Visibility into when and how long applications are used enabling IT support to schedule and plan IT operational activity more effectively.
  • Keystroke/Activity Logging
    • Increasing root-cause capabilities by allowing IT support to see what was happening on the end user device when an issue occurred.
  • Device Information (type, software revisions, configuration)
    • Ensuring that the end user has the required configuration to support effective application release processes and allowing more effective issue identification.
  • End user location (*if company policy and/or privacy laws allow)
    • Allows IT to track where end users access applications and on what devices. This helps with performance degradation issue analysis and root-cause analysis.
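To make the first capability concrete, here is a minimal sketch (field names and the alerting rule are my assumptions, not a vendor schema) of an activity record and a real-time response monitor that flags a degradation when the latest response time exceeds a multiple of the rolling baseline:

```python
from dataclasses import dataclass, field
from statistics import mean
from typing import List

@dataclass
class ActivityRecord:
    """One end user activity sample captured by a hypothetical EUAM agent."""
    user_id: str
    device: str        # e.g. "laptop", "phone"
    application: str   # application being measured
    location: str      # collected only where company policy/privacy laws allow
    response_ms: float # measured application response time

@dataclass
class ResponseMonitor:
    """Flag degradations, ideally before the end user complains:
    alert when a sample exceeds `threshold` x the rolling baseline."""
    threshold: float = 2.0
    history: List[float] = field(default_factory=list)

    def observe(self, record: ActivityRecord) -> bool:
        degraded = (
            len(self.history) >= 5
            and record.response_ms > self.threshold * mean(self.history)
        )
        self.history.append(record.response_ms)
        return degraded

monitor = ResponseMonitor()
samples = [120, 130, 110, 125, 118, 380]  # the final sample is a spike
alerts = [
    monitor.observe(ActivityRecord("u1", "laptop", "mail", "office", ms))
    for ms in samples
]
print(alerts)  # [False, False, False, False, False, True]
```

A real product would use far more robust baselining (percentiles, per-application and per-location baselines), but the speed-camera principle is the same: normal traffic passes unrecorded in any alerting sense, and only the abnormality is flagged.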

Recent coverage of EUAM:

http://apmdigest.com/end-user-experience-application-performance-management-bmc

http://www.bmc.com/products/euem/end-user-experience.html?intcmp=redirect_product-listing_end-user-experience

http://www.businesswire.com/news/home/20121029005618/en/Aternity®-User-Activity-Monitoring-Windows®-8

 

where do you go when IT gets in your way?

When IT gets in the way of doing your job, where do you go for help? The service desk? Someone in IT operations? Google? Phone a friend? Big problems such as an application outage, or the most common password issues, are typically covered. These are certainly an inconvenience, but they either get the right level of attention or are easy to fix.

But what about poor performance when getting mail on your phone at the airport, getting access to a printer, or an inability to connect to wi-fi in a company facility? These situations can be temporary but have no obvious path of remediation, and they can totally ruin your day. It's the small stuff that is the hardest to deal with. Most corporate IT organizations are ill-prepared to deal with this level of end user interaction, and the end user is hesitant to send an email to the help desk or spend 30 minutes queuing to talk to someone about what is considered a trivial, low-priority problem.

In the world of private IT use there is no central help desk or IT department; however, people have learned to deal with issues. The option to send a complaint or a problem description to whoever is believed to be to blame is always available, with mixed results: some solve the issue, some don't, and some are ignored. Then there is search. Someone, somewhere must have had a similar problem. This approach works: even if it does not provide the exact answer, it will at least lead you to interest groups with any number of smart people willing to provide guidance. For a large number of reasons (e.g. company regulations), this type of activity is not something a business would readily adopt internally.

Managing issues through crowd sourcing.

Applications have been available for years that allow people to comment on services, products, restaurants, etc. Recently this capability has taken on a real-time aspect, where guidance can be provided through experience and observation. An example of this is the 'human' GPS, Waze (http://www.waze.com). For the few people on the planet who have not heard of this application, it allows drivers to share real-time information on traffic conditions (jams, police speed traps, accidents etc.) from their mobile devices. This provides road condition awareness and allows the application to find you alternative routes.

Now, imagine using a version of this in business for IT. Going back to an example I gave at the beginning: you are at the airport trying to get email on your phone and it's not going well. This can make you feel a bit of a victim and make you think: Is this problem temporary? Is it just me? Have I done something stupid? Has someone else? But what if you had an application that showed you your application status and let you see whether anyone else is having similar issues? You would immediately know whether it is a general email problem, a location problem or a device problem, and whether there are any workarounds. It would also allow you to see if the problem has been reported, report it yourself, add yourself to the problem list and track it. For the user, it provides awareness and possible fixes. For the IT support organization, it provides the ability to understand who is being affected, where they are and what they are using. With so many applications sourced outside the datacenter, an avalanche of mobile devices used for business, and people constantly moving around while still trying to work, the only way to help end users help themselves is through crowd-sourcing applications that augment the business's IT operations management tools.
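The core of such an application is just a shared report board keyed by application and location. The sketch below is a hypothetical illustration (class and method names are mine, not an existing product) of the "is it just me?" query described above:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class IssueKey:
    """Groups reports so a user can answer 'is it just me?' at a glance."""
    application: str  # e.g. "email"
    location: str     # e.g. "JFK airport"

class CrowdIssueBoard:
    """Hypothetical crowd-sourced issue board: users report problems and
    query how many others share them, here or anywhere."""
    def __init__(self):
        self.reports = defaultdict(set)  # IssueKey -> set of user ids

    def report(self, user: str, application: str, location: str) -> None:
        self.reports[IssueKey(application, location)].add(user)

    def same_issue_here(self, application: str, location: str) -> int:
        return len(self.reports[IssueKey(application, location)])

    def same_issue_anywhere(self, application: str) -> int:
        return len({u for k, users in self.reports.items()
                    if k.application == application for u in users})

board = CrowdIssueBoard()
board.report("alice", "email", "JFK airport")
board.report("bob", "email", "JFK airport")
board.report("carol", "email", "LHR airport")
print(board.same_issue_here("email", "JFK airport"))  # 2 -> location problem?
print(board.same_issue_anywhere("email"))             # 3 -> general problem?
```

If many reports cluster at one location it looks like a local problem; if they are spread everywhere it looks like a general outage; and if you are alone, it is probably your device. The same data gives IT support the who/where/what view described above.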

Power to the people.

why IT cannot ignore the end-user

Until recently, visibility into the end user world was not considered essential when measuring the availability of IT services. It was assumed that focusing on datacenter metrics provided enough information to show how effectively IT supported the business. For most IT organizations, the lens on the business remains the metrics provided by the service desk. From the perspective of issues this may be acceptable, but it hardly represents how the business is using IT. It would be like asking a doctor, "So, how healthy does the world look today?"

This whole situation has been exacerbated by the use of mobile devices and the growth in non-corporate, cloud-based application sources. So how does an IT department understand how the business is experiencing IT when it no longer has the luxury of concentrating its attention on the corporate data center? As yet, there is no consensus on how to address this situation, leaving most to continue relying on their legacy IT infrastructure monitoring tools (see Infrastructure monitoring. How relevant is it?) supplemented with network performance tools and/or APM tools.

If the objective is to understand how IT is used and experienced, then you don't start from the data center. The starting place is the end user. This requires more than a set of tools giving visibility from 'the edge'; it requires IT support to organize and focus teams on end user activity. Measuring experience means understanding how IT is used, when it is used and where it is used, not just when there is an issue. Capturing and analyzing this content allows IT organizations to assess the true business impact of IT irrespective of where the user is, what they are using or where their applications are sourced.
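To give a flavor of the how/when/where analysis, here is a small sketch over an invented activity log (the events and field layout are assumptions for illustration only):

```python
from collections import Counter

# Hypothetical end user activity log: (user, application, device, location, hour)
events = [
    ("u1", "crm",  "laptop", "office",  9),
    ("u2", "crm",  "phone",  "airport", 7),
    ("u1", "mail", "phone",  "home",   21),
    ("u3", "crm",  "laptop", "office", 10),
]

# How IT is used: which applications, from where, and when --
# not just when something breaks.
by_app      = Counter(app for _, app, _, _, _ in events)
by_location = Counter(loc for _, _, _, loc, _ in events)
off_hours   = sum(1 for *_, hour in events if hour < 8 or hour >= 18)

print(by_app.most_common(1))  # [('crm', 3)]
print(by_location)
print(f"{off_hours} of {len(events)} sessions outside office hours")
```

Even this toy view shows usage the service desk never sees: the CRM application dominates, and half the sessions happen away from the office or outside office hours.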

This approach is not going to be easy for IT departments that have spent decades focusing on siloed datacenter elements and back-end application transactions. However, end user activity monitoring is not optional. Users do not use one device, do not remain in one place and do not use just one application. IT innovation, mobility and end user creativity will continue to push the limits of IT operations management, with those able to adjust their IT management focus benefitting from better IT decision making and business alignment.

Those that don’t will be left struggling to manage increasingly diverse IT needs using tools that provide only a datacenter-centric application performance snapshot, stumbling their way towards the edge while trying to see through increasingly complex third-party service black holes.

Welcome to Opsleuth

Having spent many years writing documentation for software vendors and an analyst company, and contributing to other people’s blogs, I felt the need to get off the fence and create my own website. The content will cover a number of topics in IT operations management, including automation, every type of product that goes red, green and yellow, and the increasing influence of the end user.