Part 1 of a 2 Part Series
It’s a well-known fact that most IT systems are operated as silos of activity, where separate departments or divisions are each responsible for their own events and outcomes and run with their own sets of rules, roles and responsibilities. Operations, engineering, application development and so on all have their own ways of doing things, and each supports the goals and objectives of the business in its own fashion. Where those silos, or domains, overlap, however, there are hand-off points…I call them holes…where people are plugged in to bridge the gaps. These people approve provisioning requests, use tools to measure capacity, transfer data from one application to another, install software, and so on. Plugging people into these holes is an easy fix, usually seen as a stopgap measure until the organization comes up with a long-term plan…which somehow never arrives…despite the fact that people are fallible: they make mistakes, forget process steps, miss deadlines…essentially, they are human.
So while the immediate problems, the holes, seem to be fixed by throwing people at them, longer-term complications are created, and those can be much worse than the original problems. Because the inevitable human errors are pushed away from the original area of concern, they don’t look like problems at all; they get no notice and simply continue in the background, slowing systems, creating rework or, in some cases, halting business-critical systems altogether.
To combat these types of problems, solutions have been created that effectively remove people from the equation: automation and orchestration, which significantly speed up and standardize processes. While these are two different solution types, they are so inextricably connected that they might as well be two perspectives on the same solution. Briefly, automation is the modeling of an IT task or activity (a workflow) within a software system so that the task can be carried out repeatedly, and identically, every time. Orchestration is simply the chaining together of multiple automation workflows to deliver a business outcome or benefit. An early example of this was Microsoft’s BizTalk Server, but there are now many such solutions available from vendors such as CA Technologies, BMC and IBM/Tivoli. While automation and orchestration have been around for many years (notably in the mainframe space), they have recently matured within the distributed computing space to the point where they have caught up with the many advances at the hardware level, and can now work within the enterprise to efficiently and effectively automate hundreds, and in some cases thousands, of separate processes. Basically, they plug the holes between systems with software rather than people, which has significantly increased the speed with which these systems deliver the business benefits they were designed for.
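The automation/orchestration distinction can be sketched in a few lines of Python. This is purely illustrative, not the API of any vendor product mentioned above: each function stands in for one modeled automation workflow, and the orchestration function simply chains them toward a single business outcome.

```python
# Illustrative sketch only: automation workflows as repeatable functions,
# orchestration as the chaining of those workflows. All names are
# hypothetical, not drawn from BizTalk, CA, BMC or Tivoli.

def provision_vm(name):
    """Automation workflow: a modeled task that runs identically every time."""
    return {"vm": name, "state": "provisioned"}

def install_os(vm):
    """Automation workflow: install an operating system on the VM."""
    vm["os"] = "installed"
    return vm

def deploy_app(vm, app):
    """Automation workflow: deploy an application onto the prepared VM."""
    vm["app"] = app
    return vm

def orchestrate_new_server(name, app):
    """Orchestration: chain individual workflows into one business outcome."""
    vm = provision_vm(name)
    vm = install_os(vm)
    return deploy_app(vm, app)

result = orchestrate_new_server("web01", "billing-portal")
```

Each step that was once a person plugged into a hole (approving, installing, transferring) becomes a workflow that executes the same way on every run, and the chain as a whole replaces the hand-offs between them.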
These benefits, however, have not come without a price. As stated above, traditional datacenter processes evolved in sync with progress at the physical level: “people operating equipment” was the major constraint on the systems that were created, and those systems reflected this and were optimized for it. As hardware and software improved, becoming faster and able to handle greater loads with greater efficiency, and as automation and orchestration were increasingly adopted, the operational processes around those faster systems became the bottleneck, in some cases severe enough to affect critical business systems. The operational and procedural systems attempted to keep up, but they couldn’t, because they were still generally based on the traditional datacenter approach: plugging people into the holes in the system. Doing more with less has become the norm, putting great pressure on the personnel running the systems…the holes they plug are multiplying.
With the advent of sophisticated software systems such as automation and orchestration, which can replicate processes accurately over and over, the operational system must evolve as well so that it does not become the chief limiting factor in an organization’s IT systems. In a well-defined and well-designed cloud infrastructure, for example, the previous constraint of humans performing activities, such as installing an operating system, no longer exists, because of automation and orchestration. The operational system itself must therefore be redesigned to take advantage of these newly enhanced capabilities. Basically, the way the systems are operated, the roles and responsibilities as well as the policies, processes and procedures, must evolve with the systems (in a perfect world it would evolve ahead of them) so that neither becomes a hindrance to the other…and so that they work in concert to deliver the expected business benefit of the system as a whole.
Next, Part 2: While You’re At It, Fixing the People…