Post #1 of 3

OVERALL INTRODUCTION TO THE SERIES

This is the first in a series of three related Posts we've written to discuss the current state of Information Technology (IT) infrastructure, where it's headed, and how you can put yourself in the best position moving forward. The first Post discusses how we arrived at this point with regard to IT infrastructure. The second Post will discuss where you may be trying to take your IT infrastructure in the future, and why. The third and final Post will speak candidly about the Hybrid Cloud and how you can get there as quickly and as easily as possible.

These Posts are written in a candid, no-holds-barred approach; some might say blunt. We've been in IT infrastructure for a long time; we understand it well from a historical perspective, a vendor's perspective, and a business perspective; and we're intent on sharing observations about how we see IT organizations make decisions, good and bad, with suggestions for improvement.

Let us introduce ourselves. I'm Tim Joyce, and I run a VAR called Roundstone Solutions. I'm based in Northern California and have run several national resellers, worked for two large manufacturers and a large national IT leasing company, and owned several companies in the IT infrastructure space. Joe Joyce, my identical twin, runs Roundstone Solutions in the East, out of the NY/NJ area. Joe has been in IT for most of his career, in sales and management roles for several VARs, a large IT leasing company, and his own company.

Full disclosure...we've worked on the sales side of IT infrastructure for manufacturers, VARs, and IT leasing companies our entire careers. We know what vendors have been telling IT executives to get them to purchase their products, and we know how vendors look to develop a sense of trust and belonging with their customers. None of this is bad, until that sense of trust and belonging to a vendor prevents IT executives from moving their company forward to a newer and better platform. Let's get started...

POST #1: HOW DID WE GET HERE?

The purpose of this Blog Post is to have a candid discussion about the state of IT infrastructure as we move through 2020 and beyond. We've been students of the Information Technology (IT) business since getting our start in the 1980s, when the IT business itself was only about 20 years old. Since then, we've been involved in every trend and concept that has come along in IT, including mainframes, personal computers, internetworking, client/server, virtualization, 3-tier infrastructure, and now the Public Cloud. Each of those concepts was at one time the "shiny new object" everyone wanted to play with. Over time, many of them became the accepted way of doing IT infrastructure, until something newer and better came along.

As we begin, we ask a question: if you were to build a brand-new IT infrastructure environment today, what would it look like? Assume you have no existing equipment, software, vendor relationships, etc. You get to start with a blank piece of paper. What would you do? Hint...you'd start by thinking about the applications your business needs to run and figure out the best way to do that. Don't just answer, "I'd move everything to the Public Cloud," because then we'd know you're not really thinking clearly.

Since the beginning of IT, smart people have been saying that "it's not about the hardware, it's about the software." Well, being guys who made our livings selling IT hardware, we paid software no mind.
We figured it was important to be experts on the hardware side so the software could run efficiently, and we were right. For a time, that is. But things shifted some time ago. Over the past 10 years, IT hardware has become less specialized and more commodity-based. Years ago, IBM was the only real game in town, unless you were in the science, education, or public-sector spaces, in which case you probably used Digital Equipment Corporation (DEC) equipment. These days, specialized hardware is something IT executives try to stay away from (rightly so), and the hardware discussion centers on commodity compute, storage, networking, and other devices. It's less expensive, and everyone uses the same types of components. Differentiating on hardware is difficult and yields minor improvements, if any, to your organization.

With IT hardware standardized and commoditized, the focus has rightly been put on software. There are several types of software, including operating system software, database software, middleware, and the software that controls the various hardware devices (generally called firmware). But the most important software, and the sole reason an IT infrastructure exists at all, is the Application software. Application software is what the phrase "it's about the software" refers to. Application software is what runs your business, and it's what's most important. Let's acknowledge once and for all...you don't create an IT infrastructure for the sake of having one, you create it to run the Applications your business needs to differentiate itself and become successful in its chosen market. Period.

HOW WE GOT HERE

Please indulge us in a short history review of IT infrastructure over the past 25 years or so. In the mid-to-late 1990s, there was panic among IT professionals that everyone's computers would stop working when the clock struck midnight on January 1, 2000. The reason was that, since the beginning of the computer industry, computers and software alike had stored the year as a 2-digit code in most of the firmware, microcode, and software installed worldwide. When the clock moved from 99 (for 1999) to 00 (for 2000), the fear was that computers would interpret the date as 1900, not 2000. The fear was real, and IT professionals set out to fix things so that disaster could be avoided that New Year's Day. This is commonly referred to as Y2K. Many of you probably suffered through a miserable New Year's holiday that year, as did we.
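To make the failure mode concrete, here's a minimal sketch in Python. It's purely our illustration (no real system of the era ran this exact code), but it shows why 2-digit years break at the rollover:

```python
# Hypothetical illustration of the Y2K bug: storing only the last two
# digits of the year makes date arithmetic break at the 1999 -> 2000
# rollover.

def years_elapsed(start_yy: int, end_yy: int) -> int:
    # Many legacy systems did duration math directly on 2-digit years.
    return end_yy - start_yy

# A loan opened in 1995 (stored as 95), checked in 1999 (stored as 99):
print(years_elapsed(95, 99))  # 4   -- correct

# The same loan checked in 2000 (stored as 00):
print(years_elapsed(95, 0))   # -95 -- the system "sees" 1900, not 2000
```

The remediation was either to widen the stored year to four digits or to apply "windowing" logic (for example, treating low 2-digit values as 20xx), and both approaches meant touching enormous amounts of installed code and firmware.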
From about 1995 through the end of 1999, companies embarked on a mission to update all of the hardware and software installed at the time to avoid the date issue. Replacing systems just because of a date change seemed like a long way to go for not a lot of reward; a lot of work got done, and a lot of money was spent, for what seemed like a dumb problem. Which it was, but there was no other choice. So companies decided that if they were going to spend the time, effort, and money to replace all of these systems, they were going to come out the other side with a better IT infrastructure. They rushed to deploy new distributed systems based on an operating system called UNIX. New hardware built to run UNIX, and new software that ran on top of it, gave users far more flexibility than the monolithic mainframe systems of the time. Some of the better-known server vendors at the time were Sun and HP, and, to an increasingly lesser degree, IBM.

A widespread migration from mainframe to UNIX hardware began, and with it a software migration as well. Companies started moving from custom-built applications to UNIX-based Enterprise Resource Planning (ERP) software, which was all the rage at the time. SAP, Oracle, and BAAN were some of the leaders in ERP software. ERP software promised one set of software to run your entire business (that was mostly marketing hype). Most important, ERP software was written with 4-digit date codes, allowing systems to recognize the year 2000 correctly. So, along with moving to new hardware, you could also move to new software that promised to improve everything. We can tell you that lives did improve for those who were selling the hardware and software! It was a modern-day Gold Rush. Those were good times for sellers...

In the early 2000s, companies realized they had bought far too many systems in their haste to modernize and avoid the Y2K issue. These systems were compartmentalized and rarely shared. For each application that needed to be modernized, companies would often purchase multiple systems: one for Production, one for Test, one for Development, one for Disaster Recovery, and so on. We used to wonder why companies didn't share systems across application workloads, but they rarely did. We found that odd, since in the previous mainframe world all applications ran on, and shared, the same hardware. But as hardware sellers, we were OK with that, because the more hardware we sold, the more money we made. There needed to be a way to share these new systems, but what was missing was generally available software to make that happen.

Then along came VMware. VMware single-handedly created the virtualization business, in which multiple "virtual" servers run on a single physical server, allowing companies to run more than one application per machine. Aside from the obvious operational benefits, significant hardware cost savings were available, because companies no longer needed to buy as many physical servers to run their new applications...they could share! At the time, VMware delivered perhaps the biggest improvement to IT infrastructure that IT management had seen to that point. VMware deserves all the credit in the world for creating the technology the world still uses to this day. But that was then, and this is now; technology has come a long way in 25 years. And VMware came (and still comes) at a separate cost.
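The consolidation math that made virtualization so compelling is easy to see with a back-of-the-envelope sketch. The numbers below are invented purely for illustration:

```python
import math

# Hypothetical numbers, purely for illustration.
apps = 20                  # applications, each on its own dedicated server
avg_utilization = 0.10     # each app uses ~10% of one physical server
target_utilization = 0.70  # leave headroom on each virtualized host

# Without virtualization: one dedicated server per application.
dedicated_servers = apps

# With virtualization: pack VMs onto hosts up to the target utilization.
virtualized_servers = math.ceil(apps * avg_utilization / target_utilization)

print(dedicated_servers)    # 20
print(virtualized_servers)  # 3
```

On paper, that's a huge reduction in physical servers. In practice, as we're about to describe, the savings often didn't materialize.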
Even with virtualized servers, companies continued to purchase far too many of them. Utilization of most servers remained low, and "virtual sprawl" set in. Virtualized servers also needed a shared storage array to connect to; EMC, HDS, Network Appliance (since renamed NetApp), and others were the main vendors of shared storage arrays. To connect that shared storage to the servers, you needed a network entirely segmented from the rest of your network traffic, which came to be called a Storage Area Network, or SAN. Brocade and Cisco were the two big Fibre Channel (FC) switch companies that benefited here, with Brocade holding an 80%+ share of the FC market. Imagine if your company held an 80% market share!

Out of virtualization, the concept of 3-tier infrastructure was born. While this may not be the exact technical definition, 3-tier is commonly taken to mean compute (servers), storage, and networking, tied together with virtualization. Each of those is a discrete part of an IT infrastructure, with its own costs and dependencies.

Recall we said that virtualization offered the chance to reduce overall hardware costs? It did initially, but over time costs increased significantly. When a single vendor holds an overwhelming share of a market (Brocade, EMC, HP, Dell, Sun, Oracle, Cisco, etc.), there's no incentive for costs to come down fast. Why would there be? VMware wasn't free then, and it isn't free now. Fair enough...its virtualization software (ESXi) has provided great value, but at a very high cost. And there are other vendors that offer the functionality VMware does at little to no cost.

Here's the crux of what Post #1 is all about: if you're still running a 3-tier IT infrastructure, you're overpaying. There it is...we said it. You're paying more than you should; we're guessing about 40% more. Which, in these days, is a big amount. Heck, it's a big amount at any time. We've never had anyone say to us, "Hey, guess what? I got promoted because I spent more of the company's money than I needed to." Can you imagine? No one likes getting overcharged in their personal life, so why do so many IT executives treat it differently when it's the company's money? It makes no sense to us. Especially since it's not about the IT infrastructure, remember? IT'S ABOUT THE APPLICATIONS.

Next up...Post #2: Where Are You Trying to Get To, and Why?