For several years, we've asked dozens of our Clients and Prospective Clients how they decide which IT infrastructure to run their workloads on. To date, we've not found any organization that can explain in a clear fashion how they make those decisions. We think that's just bad business.
We think IT infrastructure decisions should be made with both eyes open, with all of the relevant data. Sadly, most decisions are made with a destination already in mind, and that destination is most often the Public Cloud. "That's what all the smart people are doing."
But to question this approach is to be viewed as the "bad guy", or "the person who doesn't understand". Trust me, I understand very well. I watch IT organizations regularly make questionable decisions that cost them millions more than needed. Public Cloud vendors are laughing all the way to the bank. How do I know? Just take a look at their gross margins. If your company had gross margins anywhere close to those of the Public Cloud you'd be so rich you wouldn't be working anymore.
But what do I know? I'm just a guy with decades of experience who has seen end users fall for vendor marketing time and time again. They continue to get burned, and never seem to learn.
Here's what Roundstone Solutions and I are all about: We want Clients who make intelligent decisions by using as much available information as possible. We want them to be able to defend the decisions they make because they were sound ones, not because "that's what everyone else was doing".
Don't trust marketing from vendors (especially Public Cloud vendors). Seriously, has Public Cloud saved you money? Have you ever scaled down? Has it been any easier managing another IT infrastructure along with the ones you're already managing? Be honest.
Stand apart from the rest. Do your homework. Learn about the various options. Of course, we're happy to help you with this. Contact us and let us help.
How hyperconverged infrastructure can save you time and money
Technology evolves at a rocketship pace. If your organization is not keeping up with technological changes, it might find itself left behind.
One example is the legacy three-tier architecture. Decades ago, three-tier was the next new thing. Today, the rocketship has moved on. Companies are finding that new technological solutions are a better fit for today’s hyper-connected, Internet-of-Things world.
Where is the rocketship landing? Hyperconverged infrastructure (HCI). Let’s look at some of the benefits of hyperconverged infrastructure and how you can maximize them to make sure you aren’t left behind the times.
Hyperconverged Infrastructure Benefits
At the end of the day, businesses don’t buy technology because they want to buy technology. No one is in the business of simply owning technology. Businesses buy technology for the value it provides. If your technology isn’t helping you with your core business, there’s no reason to have it. And the more it helps, the better.
When you look at the costs versus benefits of older technologies versus newer technologies, the newer technologies will almost always outperform on efficiency, cost, and performance. Considering that potential business benefit is the most important reason to invest in technology, why would you invest in something that will be slower, less efficient, and cost more? The value isn’t there.
The challenge has traditionally been that there wasn’t a better solution than legacy three-tier architecture. Now there is.
Hyperconvergence is about moving processes that were previously handled manually, or by multiple discrete hardware components, into software. The result is the ability to replicate those processes on relatively inexpensive hardware while eliminating the need to be hands-on with running infrastructure.
One example is data. Previously, in an IT environment, it was necessary to manually move data from place to place in the storage tier in order to make more room or optimize storage. Today that’s viewed as woefully inefficient. That can be done with software, saving money and time, and that’s what hyperconvergence is all about.
1. Allows for the latest technology
Among the biggest hyperconverged infrastructure benefits is that HCI removes the barrier between your business and the latest technological advances. By combining and automating tasks, the architectural load is migrated from discrete pieces of hardware to a converged whole. And that whole is managed by software.
This means you no longer have to keep up with separate server, storage, and compute assets, and you no longer have to maintain or upgrade that hardware yourself. Those assets are maintained by your HCI provider, who will update and replace them as necessary. The result: your business benefits from the latest advances in technology without having to continually chase a moving technological target.
2. Simplifies compatibility and maintenance
In the traditional three-tier architecture, there are separate storage, server, and compute tiers, combined with a virtualization layer on the front end. Generally, these pieces come from different manufacturers, are housed in different places, and are maintained by different people. This can be a compatibility and maintenance nightmare.
HCI combines these pieces into one hyper-converged whole. Suddenly, compatibility ceases to be an issue, and maintenance is simplified. Updates can be applied instantly, no longer requiring significant downtime.
3. Improves performance
In the computing world, the closer your data is to the processor (CPU), the faster your computations will be. If your data sits close to the CPU without having to pass through a chain of external devices, you will see a performance boost. HCI realizes this boost with an architecture that keeps data close to the computational center.
Our premier HCI partner, Nutanix, has an HCI infrastructure that was built specifically for this reason. Unlike some major providers who have bolted on HCI to their existing architectures, the Nutanix HCI solution is custom-built from the ground up to harness the benefits and savings of HCI.
4. Closes skill-demand gap
Traditional architectures require a high degree of specialized knowledge to understand and implement. One of the benefits of HCI is that by automating tasks, it takes the need for specialized knowledge out of the equation. The tasks that would have required specialized knowledge are now automated, leaving your highly skilled team members with time for work that adds more value to the business.
5. Staff reallocation
There has been some concern that HCI will lead to people losing their jobs. After all, if we’re automating processes that have traditionally been performed by IT workers, what are those workers going to do?
The answer is they are going to be available to do higher value work for the company. Work that can potentially increase innovation, and with it, profits.
HCI automates maintenance tasks, which have traditionally eaten up an outsized percentage of workers’ time. According to a Deloitte report, the average IT department spends more than half of its budget on maintenance tasks, and only 19% on innovation. HCI can upend this paradigm, freeing up IT staff from time spent on maintenance, allowing them to spend more time innovating and adding value to your business.
6. Scalability
One of the biggest hyperconverged infrastructure benefits is scalability. With HCI, you can scale almost infinitely without having to worry about outgrowing your server stack or storage solution, and you can scale up or down as your business needs change. There is no longer any need to “buy ahead” of need.
7. Cost savings
Finally, perhaps the most important benefit of HCI is cost savings. Traditional architectures require massive server farms, storage solutions, and separate computational resources. With HCI, all of that is converged, considerably reducing the initial outlay.
One of the biggest problems with traditional IT architecture is that, as your business grows, your technology becomes more complex, shifting focus from business problems to technology problems. Your business should be focused on how technology will drive value, not what your business value can do for your technology. To learn more about HCI benefits and how HCI can help your business scale and grow, contact us today.
A look at the companies who can help keep your data safe
It’s going to happen eventually. The power will go out, a major storm will come through, or a ransomware attack will hold your data hostage. Whatever it is, it will take your business down if you’re not prepared.
Your options when that disaster strikes will depend greatly on the decisions you’ve made beforehand. Specifically, how you back up your data and with whom.
The world of data backups is crowded with different vendors. Most of them can do the job. Which one you choose will depend on your needs. To help you pick the right partner for your business, here are some of the most popular backup and recovery vendors and their use cases.
Why Back Up?
Put simply: You back up so that you can recover. But recover from what? That’s where times are changing.
What used to be called disaster recovery is now called business continuity. It’s the science of ensuring that your business will continue if the worst case scenario occurs. One of the most important aspects of business continuity is protecting your data. That’s where backup and recovery come in.
Why is backing up important? For example, let’s look at what happened on September 11, 2001.
The world remembers that two planes struck the twin towers of the World Trade Center in New York City, ending the lives of thousands of people. Far less tragic, but still worth remembering, is the fact that the towers also housed hundreds of businesses. Many ceased to exist after the attack. Not only was their physical infrastructure in the towers, but their data infrastructure was mostly housed in data centers also in the towers. The attack destroyed both their physical infrastructure and their data. Afterward, these businesses were effectively gone and unrecoverable.
A disaster on the scale of 9/11 doesn’t have to occur to pose a major threat to your business. Storms, power outages, and ransomware are all existential threats to your data. Backing up your data is how you ensure that, should that data be damaged, you can recover it.
Testing Is Key
A key part of any backup strategy is testing. Remember, the idea of backing up anything in the first place is that, at some point, you will need to recover it. So it’s important to test that process. It can be more complicated than you expect, and the last thing you need when something bad has happened to your data is to run into complications while trying to restore it.
Testing is also important because sometimes a backup will simply not work. Companies fail, outages happen, and sometimes things break. So you want to test your backups from time to time to ensure that the process works and that the data you need will be there when you need it.
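A restore test can be as simple as restoring a backup to a scratch location and comparing checksums against the live data. Here is a minimal sketch in Python; the directory layout and function names are illustrative assumptions, not tied to any particular backup product:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(source_dir: Path, restored_dir: Path) -> list[str]:
    """Compare every file under source_dir against its restored copy.

    Returns a list of relative paths that are missing or whose
    contents differ; an empty list means the restore test passed.
    """
    problems = []
    for src in source_dir.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(source_dir)
        restored = restored_dir / rel
        if not restored.is_file():
            problems.append(f"missing: {rel}")
        elif sha256_of(src) != sha256_of(restored):
            problems.append(f"differs: {rel}")
    return problems
```

Run a check like this on a schedule, not just once: a restore that verified cleanly last year proves nothing about the backup you took last night.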
In the past, the main driver of backup and recovery solutions was the threat of physical loss — a fire, flood, or terrorist attack. That was then. Now, the main threat to data is ransomware.
Nearly 500 million ransomware attacks were detected in 2022, and the projected cost of ransomware attacks is expected to reach $265 billion every year by 2031. If you’re not protecting your data, the next attack could be against you.
In a ransomware attack, someone is intentionally trying to move your data from where it lives (your system) to another location (their system) so that they can hold it hostage and demand payment. In the process, your data is likely to be deleted from your system entirely and potentially corrupted. Even if you were to pay the ransom, what you get back might be useless.
Malicious actors only have to get it right once to take your business for everything you have. You have to be secure every single time. Business continuity requires an enterprise solution provided by reputable backup and recovery vendors.
Backup and Recovery Vendors
Backup and recovery vendors come in roughly three flavors: legacy vendors, software-only vendors, and modern software and hardware vendors. We’ll take a look at each and give some examples.
Legacy Vendors
Legacy vendors are the “old guard” of backup options. These are typically large companies that have made their reputations on hardware and have bolted on backup and recovery services after the fact. These vendors have huge install bases, but their businesses are built on legacy infrastructure using old technology.
Commvault is probably the largest example of this type of vendor. They offer comprehensive data backup and recovery for both physical locations and cloud servers, and they do it very well.
The advantage of using a legacy system provider like Commvault is familiarity: it’s always worked, your IT department understands it, and it’s reliable. The downside is that these providers are often expensive and lack the features of modern, hyperconverged solutions.
IBM and Dell are also very strong competitors in the legacy vendor space.
Software-Only Vendors
These vendors offer predominantly on-site backup solutions that are software only, which means they require access to hardware you already have installed or will install.
Veeam is likely the biggest name in this space, offering backup and recovery solutions to businesses of all sizes.
HYCU is also a major player in this space, especially with its Nutanix integration. HYCU positions itself as a Backup as a Service (BaaS) provider with hybrid cloud and multi-cloud services.
The major advantage of backup and recovery vendors like Veeam and HYCU is cost. When you’re only buying software, you can save tremendous amounts of money. The challenge is you have to get the hardware from somewhere.
If you already have a large investment in hardware resources that are robust and reliable, a software-only vendor might make sense for your enterprise. But if you’re investing in a backup and recovery solution from scratch, an all-in-one provider like the ones below might make more sense.
Modern Software and Hardware Vendors
Cohesity backup is a modern backup and recovery solution that utilizes hyperconverged architecture and a “single pane of glass” management console, allowing users to manage the entire backup and recovery process from a single UI. Cohesity offers on-site hardware to store your backups and host the Cohesity software. It works with the public cloud and Software as a Service (SaaS) environments.
Rubrik is another reputable hyperconverged backup and recovery vendor.
These two modern backup and recovery vendors have essentially rearchitected the way backup and recovery works. Instead of starting with legacy systems and bolting on hyperconverged architecture and backup and recovery systems, they’ve built their backup and recovery systems from the ground up.
In the case of ransomware, Cohesity backup as a service also offers a service called “Fort Knox,” named after the gold repository in Kentucky. Fort Knox is a truly last-line-of-defense solution. It stores a copy of your data in a server located somewhere other than the main backup server in a location the user can’t access. That way, if an attacker gains access to your servers and backups, they will not have access to Fort Knox. It’s called an immutable backup, and it could mean the difference between saving your data and having to make a payout to attackers.
Finding the Right Backup and Recovery Vendor
Remember, you back up so that you can recover. Whatever the size of your business or the investment you have already made in IT infrastructure, there’s a backup and recovery vendor that’s right for you. For more information about backup and recovery vendors or to get a consultation on the right solution for your business, reach out today.
Successful migration ultimately comes down to time, effort, and cost — discover which strategy offers the best outcomes for your business
If you’ve already weighed the cost of moving to the cloud and are dead-set on migrating, there is no shortage of options. However, choosing the most effective cloud migration strategy is not a decision to be taken lightly, and making the wrong choice can leave you spending more on infrastructure than the value it returns.
During the opening months of the pandemic, many organizations needed to rapidly transform their infrastructure to continue remote operations. As a result, they wasted a lot of money because they simply didn’t have the time to do the research. Now that things have settled down, it’s worth it to take the time to investigate all the options. No one gets extra credit for overspending, so applying an intelligent, research-based approach to determining the most appropriate cloud migration strategy will help you minimize costs while maximizing operational efficiency in the cloud.
We’ll break down the four most common cloud migration strategies, provide the pros and cons of each, and give you our recommended approach. That way, you’ll have the information you need to make smarter, more informed decisions about the cloud-based future of your business.
4 Common Cloud Migration Strategies
Strategy #1: Lift and Shift
A lift and shift strategy is exactly what it sounds like: You are lifting the applications out of your current on-premises data center and shifting them to a public cloud data center.
Lift and shift migrations are fast and cost-efficient — at least at first. Because you’re just moving infrastructure to a new location, it will operate exactly as it did in your old data center. You’re not unlocking the real benefits cloud infrastructure provides, and if it ran poorly before, it will run poorly on the public cloud.
What’s worse is that now you’re effectively paying double for the same outcomes you had before. This is the fallacy of the public cloud — you’re moving applications over just to spend more money without receiving additional value.
Pros: Migration is quick and less expensive than other strategies.
Cons: You won’t see added benefits from the cloud without significant time investment, meaning you’re paying more for the same output.
Strategy #2: Refactoring
Refactoring is the process of taking your current setup and rebuilding it from scratch to take advantage of the unique benefits of the public cloud. It’s a software optimization process — your applications are effectively running on different hardware, so you’re reoptimizing the software to run in this new environment. Refactored public cloud infrastructure is typically far more efficient than infrastructure that undergoes a lift and shift migration.
This sounds great on paper. The problem with refactoring is that doing it right takes an enormous amount of time to set up and a significant amount of effort to pull off. Many businesses use hundreds of applications to keep day-to-day operations running smoothly, and IT departments must refactor each of these applications to maximize efficiency within the cloud. Few teams have the staff or budget to complete a project of this scope in a reasonable amount of time, and many refactoring projects simply never get finished.
Pros: Allows infrastructure to unlock the full potential of public cloud infrastructure efficiencies.
Cons: Requires excessive time to achieve, and many refactoring projects are never completed.
Strategy #3: Lift and Shift on Bare Metal
A lift and shift approach on bare metal moves your infrastructure onto private compute environments rather than shared ones. Bare metal also provides direct access to the hardware. This allows for far greater control over configuration, potentially making your infrastructure faster and more efficient. Conceptually, it’s no different than paying for your own on-premises infrastructure — you’re merely paying a platform like AWS or Google Cloud for the privilege.
However, building cloud infrastructure on bare metal is far more expensive than running it on shared servers. On the public cloud, you’re sharing the platform’s infrastructure with other users, which is where both the cost savings and the provider’s profits come from; choosing a private configuration increases costs significantly. And if you’re not refactoring your infrastructure to take full advantage of the additional compute benefits, you’re better off sticking with on-premises infrastructure. The additional benefits of bare metal won’t outweigh the cost.
Pros: Offers more granular control over leased hardware and is not shared with other public cloud customers.
Cons: More expensive than shared public cloud options.
Strategy #4: Hyperconverged Infrastructure (HCI) on Bare Metal
Hyperconverged infrastructure uses bare metal cloud hardware as its base but places a software layer between the hardware and your applications, allowing you to run your applications as if they were on-premises.
HCI gives your infrastructure the benefits of both bare metal hardware and application refactoring, delivering a fully performance-optimized compute environment without the labor, knowledge, or time those normally require. The software layer essentially does the refactoring for you, so while you’re paying more for the bare metal hardware and software integration, you’re saving the money and time a lengthy migration process would otherwise consume.
Pros: Offers significant improvements to infrastructure efficiency with little investment in time or labor.
Cons: Bare metal infrastructure and necessary software are more expensive than other strategies.
Which Cloud Migration Strategy Should You Choose?
Bare metal configuration options are more expensive than shared public cloud infrastructure. But if you can’t get your applications refactored quickly enough, shared infrastructure will increase your costs without adding any benefit. So, what cloud migration strategy should you choose?
If management requires that your business moves its operations to the cloud, we recommend an HCI on bare metal approach, utilizing Nutanix public cloud software — specifically, Nutanix NC2.
Nutanix NC2 provides the efficiency of refactoring for the public cloud without the time and financial expense required to actually refactor your infrastructure. You won’t need to change any configuration, so you can move applications back and forth between the cloud and on-premises infrastructure as required. Your team won’t need any additional knowledge beyond what you’ve already gained by operating on premises, and it won’t need additional staff to manage the extra load. Migration becomes a relative snap compared to other options, and you can quickly satisfy upper management’s requirements by moving operations into the cloud.
Nutanix also eliminates the need for separate servers, storage, and storage network components by putting everything in a single box. Because these components operate much closer to the CPU, performance will be higher than in configurations where these components are spread out.
And unlike public cloud costs — which arrive monthly and fluctuate based on unforeseen spikes — Nutanix is paid for up-front, and pricing is based on how many processing cores you need over a specific period.
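To make that difference concrete, here is a small cost sketch in Python. All prices, core counts, and spike patterns are made-up assumptions for illustration, not Nutanix or cloud-provider figures:

```python
# Hypothetical figures only: comparing a fluctuating monthly cloud
# bill against fixed, up-front, per-core term pricing.

def pay_as_you_go_total(monthly_bills: list[float]) -> float:
    """Total spend for a variable, usage-based monthly model."""
    return sum(monthly_bills)

def per_core_term_total(cores: int, price_per_core_per_year: float,
                        years: int) -> float:
    """Total spend for a fixed per-core rate over a committed term."""
    return cores * price_per_core_per_year * years

# 36 months of bills averaging $9,000, with an unforeseen $4,000
# usage spike every third month (assumed numbers).
variable_bills = [9_000.0 + (4_000.0 if month % 3 == 0 else 0.0)
                  for month in range(1, 37)]

cloud_total = pay_as_you_go_total(variable_bills)      # known only after the fact
committed_total = per_core_term_total(32, 4_000.0, 3)  # known on day one
```

The point is not which total is lower (the figures are arbitrary) but that the committed total is knowable on day one, while the variable total depends on spikes you cannot foresee or budget for.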
Ultimately, all the decisions surrounding cloud migration come down to which option will allow you to sell more products more efficiently and with the optimal ratio between expenses and revenue. HCI on bare metal with Nutanix NC2 will help you achieve the added benefits of the public cloud without spending too much time, energy, or resources to get there.
Still Unsure? Let Us Help
If you’re looking for more information about these migration strategies or want to learn more about what a cloud migration with Nutanix NC2 looks like, we can help. At Roundstone, we pride ourselves on being able to help businesses build modern, more efficient IT infrastructure that makes sense for their unique use cases. We’ve spent years connecting organizations with the knowledge they need to make smarter decisions and vendor partners that can make those decisions a reality. Want to get started? Contact us today.
Tim Joyce, Founder, Roundstone Solutions