
Automation and Orchestration: Why What You Think You’re Doing is Less Than Half of What You’re Really Doing

Posted by: Trevor Williamson

One of the main requirements of the cloud is that most, if not all, of the commodity IT activities in your data center need to be automated (i.e., translated into a workflow) and those singular workflows then strung together (i.e., orchestrated) into a value chain of events that delivers a business benefit. An example of orchestrating a series of commodity IT activities is the commissioning of a new composite application (an affinitive collection of assets, namely virtual machines, that represent web, application, and database servers, along with the OSes, software stacks, and other infrastructure components required) within the environment. The outcome of this commissioning is a business benefit in that a developer can now use those assets to create an application for producing revenue, decreasing costs, or managing existing infrastructure better (the holy trinity of business benefits).
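To make the two terms concrete, here is a minimal sketch in Python (not any particular orchestration product's API; all names are illustrative) of the idea above: each commodity activity becomes an automated workflow function, and the orchestration strings those functions into the value chain that commissions the composite application.

```python
# Hypothetical sketch: automated activities (workflows) strung into an
# orchestration that commissions a composite application. Names are made up.

from dataclasses import dataclass, field

@dataclass
class CompositeApplication:
    name: str
    assets: list = field(default_factory=list)   # VMs with their software stacks

def provision_vm(role: str) -> str:
    """Automated activity: stand up a virtual machine for a given role."""
    return f"vm-{role}"                           # placeholder for real provisioning

def install_stack(vm: str, stack: str) -> str:
    """Automated activity: lay the OS/software stack onto the VM."""
    return f"{vm}+{stack}"

def commission_composite_app(name: str) -> CompositeApplication:
    """Orchestration: the singular workflows strung into one value chain."""
    app = CompositeApplication(name)
    for role, stack in [("web", "nginx"), ("app", "tomcat"), ("db", "postgres")]:
        vm = provision_vm(role)
        app.assets.append(install_stack(vm, stack))
    return app

print(commission_composite_app("order-portal").assets)
```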

When you start to look at what it means to automate and orchestrate a process such as the one mentioned above, you will start to see what I mean by “what you think you’re doing is less than half of what you’re really doing.” Hmm, that may be more confusing than explanatory, so let me reset by first explaining the generalized process for turning a series of commodity IT activities into a workflow and, in turn, an orchestration; then I think you’ll better see what I mean. We’ll use the example above as the basis for the illustration.

The first and foremost thing you need to do before you create any workflow (and orchestration) is to pick a reasonably encapsulated process to model and transform (this is where you will find the complexity that you don’t know about…more on that in a bit). What I mean by “reasonably encapsulated” is that there are literally thousands of processes, dependent and independent, going on in your environment right now, and depending on how you describe them, a single process could be either A) a very large collection of very short process steps, or Z) a very small collection of very large process steps (or any letter in between). A reasonably encapsulated process sits somewhere on the A side of the spectrum, but not so far over that there is little to no recognizable business benefit resulting from it.

So, once you’ve picked the process that you want to model (in the world of automation, modeling is what you do before you get to do anything useful ;) ), you then need to analyze all of the process steps required to get you from “not done” to “done”…and this is where you will find the complexity you didn’t know existed. From our example above, I could dive into the physical process steps (hundreds, by the way) that you’re well aware of, but since you already know those, it makes no sense to. Instead, I’ll highlight some areas of the process that you might not have thought about.
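Before we get to those areas, one way to force that step-by-step analysis into the open is to write the steps down as data with explicit dependencies; the hidden steps tend to surface when the chain from “not done” to “done” won’t resolve without them. A hedged sketch, with made-up step names:

```python
# Hypothetical sketch of the modeling exercise: enumerate every step between
# "not done" and "done", including the ones that live only in people's heads,
# and let the dependency graph surface the hidden work. Python 3.9+.

from graphlib import TopologicalSorter

# Each step maps to the set of steps that must complete before it can run.
steps = {
    "provision_vms": set(),
    "configure_network": {"provision_vms"},
    "install_os": {"provision_vms"},
    "install_middleware": {"install_os"},
    "open_firewall_ticket": set(),              # undocumented: lives in someone's head
    "register_with_monitoring": {"install_middleware", "open_firewall_ticket"},
    "handoff_to_developer": {"register_with_monitoring", "configure_network"},
}

# Print one valid execution order; a cycle or missing step raises an error,
# which is exactly the kind of gap this exercise is meant to expose.
print(list(TopologicalSorter(steps).static_order()))
```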

Aside from the SOPs, run books, and build plans you have for the various IT assets you employ in your environment, there is probably twice that much “required” information residing in places not easily reached by a systematic search of your various repositories. Those information sources and locations are called “people,” and they likely hold over half of the information required for building out the assets you use, in our example, the composite application. Automating the process steps that exist only in those locations is problematic (to say the least), not just because we haven’t quite solved the direct computer-to-brain interface, but because it is difficult to get an answer to a question we don’t yet know how to ask.

Well, I should amend that to say “we don’t yet know how to ask efficiently,” because we do ask similar questions all the time, but in most cases without context, so the people being asked can seldom answer, at least not completely. If you ask someone how they do their job, or even a small portion of their job, you will likely get a blank stare for a while before they start in on how they arrive at 8:45 AM and get a cup of coffee before they start looking at email…well, you get the picture. Without context, people can rarely give an answer because they have far too many variables to sort through (what they think you’re asking, what they want you to be asking, why you are asking, who you are, what that blonde in accounting is doing Friday…) before they can even start answering. But if you give someone a listing or scenario to which they can relate (when do you commission this type of composite application, based on this list of system activities and tools?), they can absolutely tell you what they do and don’t do from the list.
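As a sketch of that context-first questioning, you can turn the scenario and the candidate activity list into a simple structured checklist and record the expert’s yes/no answers. Everything here (the scenario, activities, and helper function) is illustrative, not a real tool:

```python
# Hypothetical interview checklist: present a concrete scenario plus candidate
# activities, and record which ones the expert actually performs.

scenario = "Commissioning a three-tier composite application"
candidate_activities = [
    "Request IP allocation from network team",
    "Clone VM template",
    "Run hardening script",
    "Email DBA for schema creation",
    "Add entries to backup schedule",
]

def interview(expert_answers: dict) -> dict:
    """Split the checklist into performed vs. skipped steps."""
    performed = [a for a in candidate_activities if expert_answers.get(a)]
    skipped = [a for a in candidate_activities if not expert_answers.get(a)]
    return {"scenario": scenario, "performed": performed, "skipped": skipped}

print(interview({"Clone VM template": True, "Run hardening script": True}))
```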

So context is key to efficiently gaining the right amount of information related to the chain of activities you are endeavoring to model. But what happens when (and this actually applies to most cases) there is no ready context in which to frame the question? It is then called observation, either self-observation or external, where all process steps are documented and compiled. Obviously this is labor intensive and time inefficient, but unfortunately it is the reality, because probably less than 50% of systems are documented or have recorded procedures for how they are defined, created, managed, and operated; instead, they rely on institutional knowledge and processes passed from person to person.

The process steps in your people’s heads, the ones that you don’t know about (the ones that you can’t get from a system search of your repositories) are the ones that will take the most time to document, which is my point (“what you think you’re doing is less than half of what you’re really doing”) and where a lot of your automation and orchestration efforts will be focused, at least initially.

That’s not to say that you shouldn’t automate and orchestrate your environment—you absolutely should—just that you need to be aware that this is the reality and you need to plan for it and not get discouraged on your journey to the cloud.


Related Posts

Tech News Recap for the Week of 07/15/19

If you had a busy week in the office and need to catch up, here’s our recap of tech articles you may have missed the week of 07/15/19!

Tech News Recap for the Week of 07/08/19

If you had a busy week in the office and need to catch up, here’s our recap of tech articles you may have missed the week of 07/08/19!

What Was Great in ’08 Now Needs an EOS Update

Back in 2008, I still had a faceplate for my car radio, “Bleeding Love” by Leona Lewis was crushing the pop charts, and organic bean sprouted bread was something you’d find in the pet food aisle. It’s also the year Microsoft released Windows Server 2008 and SQL Server 2008, leaving a lasting impression like a tune you can’t get out of your head. Windows Server 2008 was the first Windows edition that allowed you to license for virtualization. If you recall, there used to be an Enterprise Edition of Windows 2008 that allowed for 4 VMs, so if you needed 12 VMs you had to purchase 3 licenses. Datacenter edition provided unlimited VMs, and Standard edition covered both standalone and virtual machines. At the time, Microsoft was really making us work to understand the minutiae of their licensing rules. Thank goodness Microsoft’s licensing has gotten a lot easier to understand (insert sarcasm).

Windows 2008 and 2008 R2 and SQL 2008 and 2008 R2 had a good run, and like all good things, including Leona Lewis’s career, they will be coming to an end. SQL 2008 and 2008 R2 End of Support (EOS) is July 9, 2019. Windows 2008 and 2008 R2 EOS is January 14, 2020. Once Microsoft products go EOS, Microsoft offers ZERO support for the product, meaning they’ll no longer provide updates and patching. With no support, the product is left vulnerable to security threats because no fixes will be available to prevent infiltration. Security updates are mission critical. In 2016, 4.2 billion records were stolen by hackers. Twenty percent of organizations lose customers during an attack, and 30% of organizations lose revenue during an attack. Not fun! It would be like if John Rambo retired and stopped drawing blood, which is a bad analogy because Rambo: Last Blood is being released in September. This begs the question: is this really the last blood? Probably not; however, you can be certain that Microsoft’s “Last Blood” is actually happening.

So what do you do when your support goes away? Well, you’ll need to think about modernizing, and in this case, adopting cloud. It’s a good time to seize EOS as an opportunity to transform with Microsoft’s latest technologies. A jump to Azure will allow you to migrate your Windows 2008 and 2008 R2 workloads to Azure VMs or Azure SQL Database. Customers who move 2008 and 2008 R2 workloads to Azure Virtual Machines (IaaS) “as-is” will have access to Extended Security Updates for both SQL Server and Windows Server 2008 and 2008 R2 for three years after the End of Support dates, for free. Those that decide to move to Azure SQL Database Managed Instance (PaaS) will have access to continuous security updates, as this is a fully managed solution. Or you could stay with on-premises licensing and upgrade to Windows Server 2019 or SQL Server 2017, leveraging your Software Assurance benefits to modernize on-premises or on Azure (i.e., Azure Hybrid Benefit), to help reduce security risks and continue to get regular security updates.

Regardless of what investment you decide to make, GreenPages can help right-size you for the future and ensure your data continues to be protected. To have further conversations about Windows 2008 and 2008 R2 and SQL 2008 and 2008 R2, please connect with your Account Executive or reach out to us!
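For teams taking stock before (or after) those deadlines, a quick script can flag at-risk machines. The sketch below is a minimal, hypothetical example; it assumes you can export hostnames and product versions from your CMDB or monitoring tool, and the inventory entries and field names are purely illustrative.

```python
# Minimal sketch: flag inventory entries still on 2008-era products past EOS.
# The inventory list and field names are hypothetical; in practice you would
# pull this data from your CMDB, vCenter, or monitoring tooling.

from datetime import date

EOS_DATES = {
    "SQL Server 2008":        date(2019, 7, 9),
    "SQL Server 2008 R2":     date(2019, 7, 9),
    "Windows Server 2008":    date(2020, 1, 14),
    "Windows Server 2008 R2": date(2020, 1, 14),
}

inventory = [
    {"host": "db01",  "product": "SQL Server 2008 R2"},
    {"host": "web01", "product": "Windows Server 2008"},
    {"host": "app01", "product": "Windows Server 2019"},
]

today = date.today()
for server in inventory:
    eos = EOS_DATES.get(server["product"])
    if eos and today >= eos:
        print(f"{server['host']}: {server['product']} reached EOS on {eos}")
```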