A look at the companies who can help keep your data safe
It’s going to happen eventually. The power will go out, a major storm will come through, or a ransomware attack will hold your data hostage. Whatever it is, it will take your business down if you’re not prepared.
Your options when that disaster strikes will depend greatly on the decisions you’ve made beforehand. Specifically, how you back up your data and with whom.
The world of data backups is crowded with different vendors. Most of them can do the job. Which one you choose will depend on your needs. To help you pick the right partner for your business, here are some of the most popular backup and recovery vendors and their use cases.
Why Back Up?
Put simply: You back up so that you can recover. But recover from what? That’s where times have changed.
What used to be called disaster recovery is now called business continuity. It’s the science of ensuring that your business will continue if the worst case scenario occurs. One of the most important aspects of business continuity is protecting your data. That’s where backup and recovery come in.
Why is backing up so important? Consider what happened on September 11, 2001.
The world remembers that two planes struck the twin towers of the World Trade Center in New York City, ending the lives of thousands of people. Far less tragic, but still worth remembering, is the fact that the towers also housed hundreds of businesses. Many ceased to exist after the attack. Not only was their physical infrastructure in the towers, but their data infrastructure was mostly housed in data centers also in the towers. The attack destroyed both their physical infrastructure and their data. Afterward, these businesses were effectively gone and unrecoverable.
A disaster on the scale of 9/11 doesn’t have to occur to pose a major threat to your business. Storms, power outages, and ransomware are all existential threats to your data. Backing up your data is how you ensure that, should that data be damaged, you can recover it.
Testing Is Key
A key part of any backup strategy is testing. Remember, the idea of backing up anything in the first place is that, at some point, you will need to recover it. So it’s important to test that process. It can be more complicated than you expect, and the last thing you need when something bad has happened to your data is to run into complications while trying to restore it.
Testing is also important because sometimes a backup will simply not work. Companies fail, outages happen, and sometimes things break. So you want to test your backups from time to time to ensure that the process works and that the data you need will be there when you need it.
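Restore testing can be automated. The sketch below is a minimal illustration, not any vendor’s tooling, and the file names and directories in it are made up: it checksums every file in a source tree and compares each one against its restored copy.

```python
import hashlib
import os
import shutil
import tempfile

def sha256_of(path):
    """Return the SHA-256 hex digest of a file's contents."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(source_dir, restored_dir):
    """List files that are missing or corrupted in the restored copy."""
    failures = []
    for root, _dirs, files in os.walk(source_dir):
        for name in files:
            src = os.path.join(root, name)
            rel = os.path.relpath(src, source_dir)
            restored = os.path.join(restored_dir, rel)
            if not os.path.isfile(restored) or sha256_of(src) != sha256_of(restored):
                failures.append(rel)
    return sorted(failures)

# Quick self-check against throwaway directories (illustrative only).
source = tempfile.mkdtemp()
restored = tempfile.mkdtemp()
with open(os.path.join(source, "invoices.csv"), "w") as f:
    f.write("id,amount\n1,100\n")
shutil.copytree(source, restored, dirs_exist_ok=True)
print(verify_restore(source, restored))  # an empty list means every file restored intact
```

A real test schedule would run a check like this against an actual restore from the backup system, not a local copy, since the restore path itself is what fails most often.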
In the past, the main driver of backup and recovery solutions was the threat of physical loss — a fire, flood, or terrorist attack. That was then. Now, the main threat to data is ransomware.
Nearly 500 million ransomware attacks were detected in 2022, and the cost of ransomware attacks is projected to reach $265 billion annually by 2031. If you’re not protecting your data, the next attack could target you.
In a ransomware attack, someone is intentionally trying to move your data from where it lives (your system) to another location (their system) so that they can hold it hostage and demand payment. In the process, your data is likely to be deleted from your system entirely and potentially corrupted. Even if you were to pay the ransom, what you get back might be useless.
Malicious actors only have to get it right once to take your business for everything you have. You have to be secure every single time. Business continuity requires an enterprise solution provided by reputable backup and recovery vendors.
Backup and Recovery Vendors
Backup and recovery vendors come in roughly three flavors: legacy vendors, software-only vendors, and modern software and hardware vendors. We’ll take a look at each and give some examples.
Legacy Vendors

Legacy vendors are the “old guard” of backup options. These are typically large companies that made their reputations on hardware and bolted on backup and recovery services after the fact. These vendors have huge install bases, but their businesses are built on legacy infrastructure using old technology.
Commvault is probably the largest example of this type of vendor. They offer comprehensive data backup and recovery for both physical locations and cloud servers, and they do it very well.
The advantage of using a legacy provider like Commvault is familiarity. It has always worked, your IT department understands it, and it’s reliable. The downside is that legacy providers are often expensive and lack the features of modern, hyperconverged solutions.
IBM and Dell are also very strong competitors in the legacy vendor space.
Software-Only Vendors

These vendors offer predominantly on-site backup solutions that are software only, meaning they require hardware you already have installed or plan to install.
Veeam is likely the biggest name in this space, offering backup and recovery solutions to businesses of all sizes.
HYCU is also a major player in this space, especially with its Nutanix integration. HYCU positions itself as a Backup as a Service (BaaS) provider with hybrid cloud and multi-cloud services.
The major advantage of backup and recovery vendors like Veeam and HYCU is cost. When you’re only buying software, you can save a tremendous amount of money. The challenge is that you have to get the hardware from somewhere.
If you already have a large investment in hardware resources that are robust and reliable, a software-only vendor might make sense for your enterprise. But if you’re investing in a backup and recovery solution from scratch, an all-in-one provider like the ones below might make more sense.
Modern Software and Hardware Vendors
Cohesity is a modern backup and recovery solution that utilizes hyperconverged architecture and a “single pane of glass” management console, allowing users to manage the entire backup and recovery process from a single UI. Cohesity offers on-site hardware to store your backups and host the Cohesity software, and it works with the public cloud and Software as a Service (SaaS) environments.
Rubrik is another reputable hyperconverged backup and recovery vendor.
These two modern backup and recovery vendors have essentially rearchitected the way backup and recovery works. Instead of starting with legacy systems and bolting on hyperconverged architecture and backup and recovery systems, they’ve built their backup and recovery systems from the ground up.
To defend against ransomware, Cohesity also offers a service called “Fort Knox,” named after the gold repository in Kentucky. Fort Knox is a true last-line-of-defense solution: it stores a copy of your data on a server separate from the main backup server, in a location the user can’t access. That way, even if an attacker gains access to your servers and backups, they will not have access to Fort Knox. This is called an immutable backup, and it could mean the difference between saving your data and having to pay attackers.
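Fort Knox’s internals aren’t public, but the core idea of an immutable backup, that once an object is written it can’t be overwritten through the normal path, can be sketched in a few lines. This is a toy model only; real products enforce write-once semantics at the storage layer (for example, WORM object locks), not in application code.

```python
import hashlib

class ImmutableStore:
    """Toy write-once (WORM-style) object store.

    Illustrates the immutability principle only; real immutable backup
    targets enforce write-once semantics at the storage layer.
    """

    def __init__(self):
        self._objects = {}

    def put(self, key, data):
        if key in self._objects:
            # Overwrites are refused, so ransomware that steals the backup
            # credentials still can't tamper with existing copies.
            raise PermissionError(f"{key!r} is write-once; overwrite refused")
        self._objects[key] = (bytes(data), hashlib.sha256(data).hexdigest())

    def get(self, key):
        data, digest = self._objects[key]
        if hashlib.sha256(data).hexdigest() != digest:
            raise ValueError(f"{key!r} failed its integrity check")
        return data
```

The stored checksum lets a restore detect silent corruption, which pairs with the restore testing discussed earlier.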
You Back Up So That You Can Recover
Remember, you back up so that you can recover. Whatever the size of your business or the investment you’ve already made in IT infrastructure, there’s a backup and recovery vendor that’s right for you. For more information about backup and recovery vendors, or to get a consultation on the right solution for your business, reach out today.
Successful migration ultimately comes down to time, effort, and cost — discover which strategy offers the best outcomes for your business
If you’ve already weighed the cost of moving to the cloud and are dead-set on migrating, there is no shortage of options. However, choosing the most effective cloud migration strategy is not a decision to be taken lightly, and making the wrong one can leave you with infrastructure that costs more than the value it delivers.
During the opening months of the pandemic, many organizations needed to rapidly transform their infrastructure to continue remote operations. As a result, they wasted a lot of money because they simply didn’t have the time to do the research. Now that things have settled down, it’s worth it to take the time to investigate all the options. No one gets extra credit for overspending, so applying an intelligent, research-based approach to determining the most appropriate cloud migration strategy will help you minimize costs while maximizing operational efficiency in the cloud.
We’ll break down the four most common cloud migration strategies, provide the pros and cons of each, and give you our recommended approach. That way, you’ll have the information you need to make smarter, more informed decisions about the cloud-based future of your business.
4 Common Cloud Migration Strategies
Strategy #1: Lift and Shift
A lift and shift strategy is exactly what it sounds like: You are lifting the applications out of your current on-premises data center and shifting them to a public cloud data center.
Lift and shift migrations are fast and cost-efficient, at least at first. Because you’re just moving infrastructure to a new location, it will operate exactly as it did in your old data center. You’re not unlocking the real benefits cloud infrastructure provides, and if it ran poorly before, it will run poorly on the public cloud.
What’s worse is that now you’re effectively paying double for the same outcomes you had before. This is the fallacy of the public cloud — you’re moving applications over just to spend more money without receiving additional value.
Pros: Migration is quick and less expensive than other strategies.
Cons: You won’t see added benefits from the cloud without significant time investment, meaning you’re paying more for the same output.
Strategy #2: Refactoring
Refactoring is the process of taking your current setup and rebuilding it from scratch to take advantage of the unique benefits of the public cloud. It’s a software optimization process: your applications are effectively running on different hardware, so you’re reoptimizing the software to run in the new environment. Refactored public cloud infrastructure is typically far more efficient than infrastructure that undergoes a lift and shift migration.
This sounds great on paper. The problem with refactoring is that doing it right takes an enormous amount of time and effort. Many businesses use hundreds of applications to keep day-to-day operations running smoothly, and IT departments must refactor each of them to maximize efficiency in the cloud. Few teams have the staff or budget to complete a project of this scope in a reasonable amount of time, and many refactoring projects simply never get finished.
Pros: Allows infrastructure to unlock the full potential of public cloud infrastructure efficiencies.
Cons: Requires excessive time to achieve, and many refactoring projects are never completed.
Strategy #3: Lift and Shift on Bare Metal
A lift and shift approach on bare metal moves your infrastructure onto private compute environments rather than shared ones. Bare metal also provides direct access to the hardware, which allows far greater control over configuration, potentially making your infrastructure faster and more efficient. Conceptually, it’s no different from paying for your own on-premises infrastructure; you’re merely paying a platform like AWS or Google Cloud for the privilege.
However, building cloud infrastructure on bare metal is far more expensive than running it on shared servers. On the public cloud, you’re sharing the platform’s infrastructure with other users, which is where both the cost savings and the provider’s profits come from; choosing a private configuration increases costs significantly. And if you’re not refactoring your infrastructure to take full advantage of the additional compute, you’re better off sticking with on-premises infrastructure. The benefits of bare metal won’t outweigh the cost.
Pros: Offers more granular control over leased hardware and is not shared with other public cloud customers.
Cons: More expensive than shared public cloud options.
Strategy #4: Hyperconverged Infrastructure (HCI) on Bare Metal
Hyperconverged infrastructure uses bare metal cloud hardware as its base but places a software layer between the hardware and your applications, allowing you to run your applications as if they were on-premises.
HCI gives your infrastructure both the benefits of bare metal hardware and the performance of refactored applications, without the labor, knowledge, or time those would otherwise require. The software layer essentially does the refactoring for you, so while you’re paying more for the bare metal hardware and software integration, you’re saving the money and time a lengthy migration process would typically consume.
Pros: Offers significant improvements to infrastructure efficiency with little investment in time or labor.
Cons: Bare metal infrastructure and necessary software are more expensive than other strategies.
Which Cloud Migration Strategy Should You Choose?
Bare metal configuration options are more expensive than shared public cloud infrastructure. But if you can’t get your applications refactored quickly enough, shared infrastructure will increase your costs without adding any benefit. So, what cloud migration strategy should you choose?
If management requires that your business move its operations to the cloud, we recommend an HCI on bare metal approach using Nutanix public cloud software, specifically Nutanix NC2.
Nutanix NC2 provides the efficiency of refactoring for the public cloud without the time and financial expense required to actually refactor your infrastructure. You won’t need to change any configuration, so you can move applications back and forth between the cloud and on-premises infrastructure as required. Your team won’t need any knowledge beyond what it has already gained operating on premises, and you won’t need additional staff to manage the extra load. Migration becomes a relative snap compared to other options, and you can quickly satisfy upper management’s requirements by moving operations into the cloud.
Nutanix also eliminates the need for separate server, storage, and storage-network components by putting everything in a single box. Because these components operate in close proximity, performance will be higher than in configurations where they are spread out.
And unlike public cloud costs, which arrive monthly and fluctuate with unforeseen usage spikes, Nutanix is paid for up front, with pricing based on how many processing cores you need over a specific period.
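To make the difference concrete, here is a toy comparison of the two billing models. Every figure below is a made-up assumption for illustration, not actual pricing from Nutanix or any cloud provider.

```python
# Hypothetical figures for illustration only -- not real vendor pricing.
monthly_cloud_bills = [9500, 10200, 9800, 14700, 9900, 21300]  # usage spikes in months 4 and 6
core_count = 64                 # committed processing cores
price_per_core_per_month = 180  # fixed rate agreed up front
months = len(monthly_cloud_bills)

usage_based_total = sum(monthly_cloud_bills)
committed_total = core_count * price_per_core_per_month * months

print(f"Usage-based billing over {months} months: ${usage_based_total:,}")
print(f"Committed cores over {months} months:    ${committed_total:,}")
```

The point isn’t that one total is always smaller; it’s that the committed figure is known before the period starts, while the usage-based figure isn’t.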
Ultimately, all the decisions surrounding cloud migration come down to which option will allow you to sell more products more efficiently and with the optimal ratio between expenses and revenue. HCI on bare metal with Nutanix NC2 will help you achieve the added benefits of the public cloud without spending too much time, energy, or resources to get there.
Still Unsure? Let Us Help
If you’re looking for more information about these migration strategies or want to learn more about what a cloud migration with Nutanix NC2 looks like, we can help. At Roundstone, we pride ourselves on helping businesses build modern, more efficient IT infrastructure that makes sense for their unique use cases. We’ve spent years connecting organizations with the knowledge they need to make smarter decisions and with vendor partners that can make those decisions a reality. Want to get started? Contact us today.
Tim Joyce, Founder, Roundstone Solutions