By Michael Halperin
So there’s a problem in your IT infrastructure. Maybe a server crashed. Maybe a firewall just went down. Maybe a network segment is completely jammed up with traffic. Or it could simply be a maintenance window to reconfigure a device. The list of potential events is endless. But they all lead to one question:
Who cares?
Now, I don’t ask that question from that flippant, rhetorical perspective, with its implication that something doesn’t matter. Rather, I ask it quite literally. Why does it matter? Who cares – or more precisely, who is impacted by this event?
Too often, IT’s answer is “Who knows?” (and now I DO ask that question from that flippant, rhetorical perspective) because IT has no idea who cares, or why.
Now, that isn’t to say IT doesn’t care about its users. But IT typically doesn’t – or more accurately, can’t – understand its users’ activity in a concise, specific, real-time way. At best, IT has an anecdotal idea of what users are doing at any given point in time, and is forced to make an educated guess as to who is impacted by any given event.
In today’s IT world, Lines of Business are demanding more and more of IT. Moreover, users – who are savvy enough to manage their own user experience with myriad personal devices and social media channels – expect flawless execution of technology, whenever and wherever they want it. This “Personalization of IT” concept is in rapid transition from ideal to expectation. This is where Quality of Experience Management comes in.
Imagine a world where an issue in the IT environment triggers a warning to users that they are likely to be affected – before the impact even becomes noticeable. Or better yet, where IT can provide those users with simple steps to sidestep the emerging issue. Or better still, where IT itself can redirect user activity to maintain a good quality of experience, so those users never know there was an issue at all.
And best of all, what if IT could monitor and evaluate the ongoing performance of the infrastructure over time, identifying the critical points where and when issues are most likely to occur? And what if it could then evaluate the potential impact of such events to determine which of those hot spots would hit users hardest? That would allow IT to leverage virtual and cloud technologies to provision contingency paths that prevent user-impacting events from occurring in the first place. The result would be a highly stable, high-performance environment. And a bunch of very happy users!
An emerging idea in IT management is about to make all of this possible. The idea is Quality of Experience Management.
In our next installment, we’ll explore the difference between traditional IT monitoring and Quality of Experience (QoE) Management. We’ll find that the operational differences between traditional monitoring and the monitoring required to enable QoE Management really aren’t that significant. But we’ll see that by adding just one more ingredient to the recipe, we can have a profound impact on the business.
In the meantime, if you're looking for more information, check out my article, "Why Managed Services Make Sense for Traditional IT and the Cloud."