Is the public cloud really your best option? Before following the crowd, do your homework.
There are thousands of reasons why entrepreneurs start their own businesses. Ultimately, though, those myriad motivators boil down to one thing: the desire to create something of value. Your definition of “value” will largely depend on your company’s objective; it might be marketability, sustainability, innovation, or — in many cases — profitability. Once you’ve established your business goal, everything you do should be in service to that goal, which takes money, time, and people.
This all sounds relatively obvious on paper, but in practice, many companies aren’t evaluating whether every business decision they make serves their ultimate goal. Take IT spending, for instance. For many, the IT department is seen as a cost center, a necessary expense to keep the business running. It’s the same for the public cloud — everyone else is using it, so it must be part of the cost of doing business, right? But what is actually the business value of cloud computing? If you haven’t asked yourself that question before, it’s time to take a step back and reassess.
Do You Really Know the Value of Cloud Computing?
Any business trying to compete in the modern marketplace needs a comprehensive IT infrastructure in place. When the public cloud is viewed as just another necessary cost of this infrastructure, executives and IT professionals often neglect to reassess its value, even as the company grows and evolves. In some cases, otherwise savvy businesses are wasting up to 50% of their IT infrastructure budget by treating these costs as a foregone conclusion.
There are three factors that contribute to this mentality: the ease of maintaining the status quo, an aversion to risk, and a herd mentality that’s all too common in the technology industry.
Maintaining the Status Quo
It’s not hard to see why IT professionals become comfortable with the status quo. There’s business value in saving the time, money, and personnel it would take to re-evaluate all of your current solutions. At some point, however, continuing along the same path means you’re probably leaving money on the table. Don’t over-focus on short-term results.
An Aversion to Risk
But what if you spend the time and effort to come up with an alternate solution and it doesn’t work? What if it adds unplanned expenses or downtime? Those are valid concerns, but left unchecked, they build into a risk-averse attitude that simply doesn’t work in cutting-edge industries. Eventually, one of your competitors is going to take those risks, leaving you behind in the process.
Following the Herd
The technology industry has a long history of herd mentality. In the past, we saw this with outsourced data centers, cryptocurrency and NFTs, the media’s “pivot to video” — none of which were proven to be sustainable over time. The latest shiny distractions are AI and cloud computing, both of which have very promising technological applications. The problem is, many companies aren’t fully evaluating those applications or the value they bring; instead, decision-makers assume that everyone who came before them has already done their due diligence, and it’s safe to walk the same path.
In the case of the public cloud, this can be an expensive assumption. Companies often turn to the public cloud to get up and running quickly; others see this immediate success and follow suit without doing their homework. But the solution that’s best for another company won’t necessarily suit your particular needs. In some cases, public cloud computing might legitimately be the best option, but how can you really know that if you haven’t evaluated its value?
Embracing Change as the Only Constant
Think about a startup with a handful of employees. It has to do a lot with very little, so many of the functions of an enterprise business — payroll, HR, and so on — are likely getting outsourced to cloud-based software solutions. In those early months and sometimes years, that makes perfect sense, but what happens when your company matures? Is it still worth it to pay those recurring infrastructure costs, or is there a better way?
To avoid getting stuck in the “If it ain’t broke, don’t fix it” trap, you need to accept the fact that change is inevitable. Whenever your business grows, you need to take the time to evaluate the solutions you’re using and consider whether there are better alternatives. Is the value of cloud computing the same now as it was a year ago, three years ago, five years ago? If you’re not sure of the answer, it’s time for an audit. Do your research, weigh your choices, and only after you’ve established which option provides the most value, make a choice — but understand that choice isn’t permanent. In another three to five years, it might be time for another audit. That doesn’t mean you made the wrong choice; it just means that change has come once again. Don’t fear it. Embrace it.
Roundstone Gets Business Value
At Roundstone, evaluating the available solutions and making the best choice for a particular set of circumstances is our specialty. Whether you’re struggling to find the right cloud strategy, attempting to modernize existing architecture, or hoping to find ways to stretch your resources further, Roundstone Solutions can help. We work with best-in-class vendors and create efficient solutions designed with your needs in mind. To learn more, get in touch.
Companies and governments are investing heavily to build AI, but many don’t know what they’re buying — or how to use it
If you believe the hype, the sky is falling for anyone not getting into artificial intelligence (AI) immediately. Build AI for your business, the story goes, and you’ll revolutionize your industry by saving huge amounts of time, boosting efficiency, and raising productivity, and it’s only going to get better over time. Just look at how good ChatGPT is now: You give it a prompt, and it spits out 700 words like it’s nothing. Evangelists spread this gospel from every corner. Businesses and governments alike see dollar signs.
There’s just one problem: Many of the leaders at these organizations don’t actually understand AI. The hype around the technology has them exploring how to build AI rather than how to use it in practical ways. By failing to cover their fundamentals, these leaders risk burning time and cash on AI investments that don’t pan out. So, in the interest of saving both, let’s take a deep breath and a deeper look at what AI is and what it can do for us.
What Is AI?
Underneath the shiny, marketing-buffed exterior, AI is an application like anything else. That means it takes servers, data, networking, and software to build AI successfully.
Let’s take ChatGPT as an example. The bot wasn’t conjured out of thin air: developer OpenAI trained it on huge swaths of text gathered from across the internet and stored on its servers. During training, the model examined all that language to learn the syntax, usage, and facts that help it mimic human writing. When you prompt ChatGPT, the model runs on those servers, churns through your request, and returns a big block of text.
The classic components of an application are all there. Servers host ChatGPT’s data, networks connect those servers, and software pulls it all together into text for users to read and use. That makes it fundamentally the same as your human resources or payroll apps. The only difference is in its workload.
How to Build AI Infrastructure
At this juncture, business leaders need to look past the hype around AI and treat it like any other application. The apps may do some of the work for you, but if you plan to integrate an AI solution, you will need to invest in infrastructure.
Outsourcing vs. Going In-House
Whether it’s storage, cybersecurity, or AI, implementing a new solution means examining whether or not to outsource its infrastructure. In each case, a business has to answer that question for itself by evaluating the expected return on investment of either option. When it comes to storage and cybersecurity, we have an ocean of data to pore over about how to provision infrastructure. We have a strong sense of what benefits and drawbacks we can expect from different deployment methods. That leads to smarter business decisions and more efficient operations.
AI is a different story. Its use cases in business are still nascent. As a result, we simply don’t have the data we need to determine whether it’s more cost-effective to build AI infrastructure in-house or to rent it from tech giants like Google. Moreover, it will be a long time before we have that data. And there’s going to be a lot of money burned between now and then.
The Case for Letting Others Lead
Despite the enthusiasm for impressive AI deployments like ChatGPT, it’s still not clear exactly how this technology will unlock the efficiency gains executives are looking for. That won’t stop them from looking. Lots of people have lots of ideas about how AI can save time and money, but they know very little for sure. Leaders in the space will likely spend billions of dollars and make tons of mistakes before they perfect the formula for AI deployment. For giant corporations, that kind of investment may be worthwhile. But for most businesses, it will prove much more cost-effective to adopt a wait-and-see approach.
Imagine a banking giant like Wells Fargo, for example. It could spend $1 billion on AI just to see $100 million of value. That return may grow over time, but the initial costs would be ruinous for a smaller company. It’s far more efficient for that smaller company to wait until the use cases have become clear and begun to demonstrate value. That way, they can invest less upfront and still see similar returns.
Think of AI like a bridge under construction via trial and error. Deep-pocketed interests can afford to send their goods across the bridge over and over despite the risk of a bridge collapse. If a collapse comes, they can eat the cost of lost goods and then use what they learned to make the bridge a little better. Over a long enough timeline — and with sufficient lost goods — they’ll come up with a great bridge. But in the meantime, it makes more sense for the rest of us to head downstream to the ford. It may take a little longer, but it’s significantly safer. Then, once the infrastructure is in place, we can swoop in and benefit from their investments at a lower cost.
How to Plan for the Future
Even if you’re not investing in AI immediately, there are still ways you can prepare for its widespread adoption. The most important thing you can do right now is consider what, specifically, you want AI to do for your business. What are the processes that seem ripe for efficiency gains? Where are the simple, repetitive tasks that could be handled by AI? Create a plan for how you might integrate AI technology. In each case, be sure to clearly outline how it will deliver increased business value. By doing so, you can position yourself as the smart money coming to capitalize on AI.
Build Efficiency Through Modernization
For many businesses, AI’s promise lies in its ability to increase efficiency. Budgets are stretched thin, and organizations of all sizes are looking to do more with less. While you wait for AI to realize that promise, there are plenty of ways to up efficiency. From Hybrid Cloud Infrastructure to Unified Communications as a Service, Roundstone Solutions can put you in touch with best-in-class vendors ready to get you the biggest return on every dollar spent. Contact us today to learn more.
From the well-known to the highly rated, these are the partners you need to know about
As many are discovering, the time to migrate to hyperconverged infrastructure (HCI) is now.
I’ve written before about the benefits of HCI. Put simply, no one is in the business of owning technology for the sake of owning technology. If you’re going to own tech, it should be beneficial to your business. HCI benefits businesses by offering an efficient entry to the latest infrastructure tech that will save you money.
Don’t just take my word for it. Public cloud solutions are taking up a lot of bandwidth lately, with many businesses following the herd to the cloud without giving a lot of thought to why or what other solutions might be available. But here’s a secret: public cloud providers and other hyperscalers use HCI. In fact, one of our top HCI vendors was created by people who built the infrastructure for one of the largest current public cloud providers.
HCI offers an evolution of traditional server architecture by automating and moving tasks to software. It provides the simplicity of a one-touch experience in a scalable solution for any enterprise user.
Here are the top 5 hyperconverged infrastructure vendors I think you should know about.
Top 5 HCI Vendors
There are a variety of HCI vendors out there. Most of them are good. Some are great. Which HCI vendor is right for you can depend on your needs and budget.
These are the HCI vendors that would be a good fit for almost any enterprise or small business. Starting at the top, with the HCI vendor I recommend more than any other.
1. Nutanix
Nutanix was founded in 2009 by engineers who helped create the Google File System at Google, including a man known as the “Father of Hyperconvergence,” Mohit Aron. Google was one of the earliest pioneers of HCI. In its quest to create a scalable architecture, the company moved many processes into software-driven automation, which streamlined operations and allowed for faster data access.
Aron was on the team that created the file system for managing all of that data, which Google still uses to this day. Believing more companies could benefit from HCI technology, Aron left Google and co-founded Nutanix. The company achieved “unicorn” status in 2013 as a startup with a valuation of over $1 billion and went public in 2016. Today, the company is worth over $8 billion.
Nutanix was the first in the HCI space and is still one of the best. It created a simple-to-use service with the power of a file system built for a hyperscaler.
To me, Nutanix is the industry’s best-kept secret. A lot of people haven’t heard of it, but those who know, know.
Net Promoter Score (NPS) is a metric based on a survey that asks respondents a single question: How likely are you to recommend this product or service? Respondents answer with a number from 0 to 10. Scores of 0-6 are considered “detractors,” 7 and 8 are “passives,” and 9 and 10 are “promoters.” The percentage of detractors is then subtracted from the percentage of promoters, and the result, a number between -100 and 100, is the company’s NPS.
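That calculation is simple enough to sketch in a few lines of Python (the survey responses below are made up for illustration, not actual Nutanix data):

```python
def nps(scores):
    """Compute Net Promoter Score from a list of 0-10 survey responses."""
    promoters = sum(1 for s in scores if s >= 9)    # 9s and 10s
    detractors = sum(1 for s in scores if s <= 6)   # 0 through 6
    return round(100 * (promoters - detractors) / len(scores))

# 7 promoters, 2 passives, 1 detractor: 70% - 10% = 60
print(nps([10, 9, 9, 10, 9, 10, 9, 8, 7, 3]))  # → 60
```

Note that passives count toward the total number of responses but neither add to nor subtract from the score, which is why a high NPS is so hard to sustain.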
Nutanix’s NPS is 90+ and has been for six years. That is unheard of. Compare that to Amazon Web Services (AWS) at 59, Google Cloud Platform at 45, and Microsoft at 40. People who use Nutanix love Nutanix.
2. HPE SimpliVity
SimpliVity was also founded in 2009 and was acquired by HPE in 2017.
HPE has since bolted SimpliVity onto its offerings as its native HCI solution. Unfortunately, SimpliVity has always had hardware dependencies. It is not a full software solution like Nutanix. And, now that it is a part of HPE, it is wholly dependent on HPE equipment. So if you buy SimpliVity, you’re buying HPE equipment.
The other side of that coin is that it makes HPE a one-stop shop. You can actually buy hardware from them to go with your HCI solution.
3. Dell VxRail
Dell’s solution is novel in that the company didn’t purchase a separate HCI vendor; it created its own offering out of pre-existing parts. But unlike Nutanix, Dell VxRail is not built from the ground up for HCI; it is cobbled together from existing Dell infrastructure solutions. The result, while it functions adequately, is not as elegant or efficient as Nutanix.
VxRail, like SimpliVity, also eliminates choice. You’ll be running VMware whether you want VMware or not, and you’ll be tethered to Dell hardware. The upside of these dependencies is that they take the guesswork out of choosing virtualization and hardware. Plus, as with SimpliVity, you can at least buy your hardware from Dell.
4. Cisco HyperFlex
HyperFlex was created by Cisco in partnership with Springpath in 2016. Cisco then acquired Springpath in 2017 for $320 million.
The main benefit of HyperFlex was the Cisco brand name, but given that Cisco is mainly a networking company with no stake in the storage market, its value as an HCI solution was always questionable. Cisco mainly used HyperFlex to seed its server business with a proprietary HCI solution, but in August 2023, the company announced a partnership with Nutanix to offer a wholly new HCI solution for Cisco server equipment. HyperFlex has since been discontinued.
5. Scale Computing
Scale Computing is the smallest of the top 5 HCI vendors. Scale was founded in 2008 and launched its HCI solution in 2012.
Scale is a solid solution for SMBs, but it shouldn’t be considered enterprise class. If you have small workloads and a solution like Nutanix is too expensive, Scale is a solid choice. Otherwise, I’d stick with one of the other vendors.
Migrating to HCI doesn’t have to be complicated. Roundstone can walk you through the process and talk you through your options to find the solution that’s best for you, whether that’s one of the five HCI vendors listed here or something entirely different. To get started with your HCI migration and to learn more about how Roundstone can help you with your technology needs, contact us today.
By now, you've probably read about Cisco partnering up with Nutanix, after Cisco "threw in the towel" on HyperFlex. The purpose of this post is to give you some comfort in your way forward.
First, some history. Cisco started selling servers in 2009. This was a strategic move by Cisco to hedge against competition in the networking space, where, at the time, Cisco held an 80+% market share. The idea was that since Cisco was in so many corporate data centers on the networking side, there was value to being able to also supply compute, in the form of their UCS servers. Made sense, although there was never a storage component to that idea, which I never understood.
In 2016, Cisco was seeing that companies like Nutanix and a few others were doing well selling hyper-converged infrastructure (HCI). So, to stem eventual competition for their UCS servers, Cisco purchased a company called Springpath, which was a struggling company offering HCI. Springpath was not a main player in the HCI space at the time.
Cisco combined their UCS servers and Springpath software to create their version of HCI, which they named HyperFlex. HyperFlex started selling in 2016, I think. Or, should I say, Cisco started giving away a lot of HyperFlex appliances with networking deals, in order to seed the market.
You see, HyperFlex was never a big hit with users. Of the HyperFlex market share (very small), many of the users got the product free as a part of a networking deal, and didn't really pay for HyperFlex itself. I know this because I saw many HyperFlex shipping boxes sitting at customer sites, still with the appliances inside. I would ask if they purchased HyperFlex, and most times, I learned they were given the products. That was my experience...it may not have been yours.
Please note that I'm not saying HyperFlex didn't work. It does, but in a different way than Nutanix. HyperFlex consists of hardware acceleration to make UCS perform with HCI software, whereas Nutanix is all about moving function into software.
Well, after trying to make a go of it with HyperFlex for 7 years, Cisco has finally given up on the platform and announced the end of life for HyperFlex. Cisco has partnered up with Nutanix, which is the industry leader in HCI and has been since the start. Which makes sense. Why sell a product that customers weren't interested in buying when you could offer them a solution that customers love (Nutanix)?
So, now that Cisco has given you the word that there isn't a future for HyperFlex, what's your move? Well, let us help.
Roundstone Solutions is one of Nutanix’s primary partners in the Northern CA and NY/NJ markets. We specialize in Nutanix; it’s literally 80% of what we do. We know the platform as well as the folks at Nutanix do, and we know how to help our Clients get the most from it.
Let us help you. Call us at 925-324-1582 in Northern CA or 201-740-2190 in NY/NJ. Or, email us at email@example.com. We'll get right back to you and will be more than happy to help you chart a course towards Nutanix.
Your future is bright with Nutanix.
You’ll only know you’ve under-invested in cybersecurity solutions when something goes wrong
Of the thousands of decisions you make at your company, choosing the right cybersecurity solutions may be the most important. Just ask the folks at Clorox. Ransomware hackers hit the company in August 2023, and the attack itself was only the beginning of the trouble. Clorox spent $25 million on its response, from forensic investigators to legal and technical assistance. Then, in October, it announced that the disruption caused by the attack had led to a 23-28% drop in net sales. And that’s all to say nothing of the reputational damage. Nobody wants to do business with a company that has Swiss cheese for security.
As ransomware attacks become more and more common, choosing the right cybersecurity solutions for business only grows more important. By examining the forces shaping the cybersecurity market, you’ll be better equipped to find the right solution for your company.
Factors Affecting the Search for Cybersecurity Solutions for Business
Stakes Have Never Been Higher
As technology has grown more advanced, companies have started holding more and more of their resources within that technology. That’s especially true for cloud service providers such as Google and Amazon. Proprietary product designs, protected client information, banking information, and more now reside in virtual data centers. All that data in one place has encouraged bad actors to scale up their hacking efforts accordingly. There were more than 600 million ransomware attacks in 2021, and there were 140 million in the first half of 2023. Hackers and tech companies are now locked in an arms race, with trillions of dollars on the line.
Budgeting Questions Are Complex
It would make life a lot easier if cybersecurity solutions for business could be budgeted in the same way operational IT solutions are. In IT, a company can evaluate workload size and speed, and then estimate a budget based on that data.
Budgeting for cybersecurity is much more opaque because there are few signs that your solutions are working. You may never know how many attacks your security repels. On the other hand, the moment you’ve under-invested, you’ll know, and by then it will be too late. Security solutions have to be right every time, but bad actors only need to be right once. That’s why IT managers tend to over-invest in cybersecurity: Better safe than sorry.
Security Talent Is Scarce and Expensive
Budgeting also requires more than investing in the right tools. Operating a cybersecurity staff of sufficient size is another critical piece of the puzzle. But this brings up another complication. Cybersecurity demands a lot of talent. That talent is in limited supply, and hiring competition is stiff, to say the least. Whatever your company can pay, Microsoft and Facebook can probably pay more. That makes it very difficult to attract the best talent to your business.
After the giants have taken their picks of the cybersecurity talent, the remaining professionals will still expect high salaries. And even if they’re within your budget, supply is so constrained that you may not be able to hire enough of them to meet your needs. There simply aren’t enough cybersecurity experts to go around.
Security Operation Centers (SOCs) and You
Major corporations know that the bigger they are, the bigger the targets on their backs. The slightest misstep could let bad actors past their defenses, leading to eye-watering value losses. To combat that threat, they create departments whose sole focus is maintaining cybersecurity. These departments are known as security operations centers, or SOCs. They’re staffed with cybersecurity experts equipped with top-of-the-line tools who serve as the eyes and ears of the organization. They take in telemetry, assimilate it, and identify trends and emerging threats, staying one step ahead of bad actors.
Mid-size and smaller corporations are in a difficult position with regard to cybersecurity. They may not face the same volume of attacks as an Apple or a Walmart, but they also have far fewer resources to fend off hackers. They can’t afford to dedicate an entire department to security. Instead, those responsibilities fall to the IT department. Those workers are likely capable in cybersecurity matters, but their plates are already full of other priorities, including maintaining continuity of services. If they fail on that front because they prioritized security, we run into the spending black box problem we discussed earlier. There’s no way to know if that priority on security was misplaced.
(Shared) Knowledge Is Power
Every device on your company’s network is a potential entry point for bad actors. That includes servers, network switches, storage devices, and even some connected printers. That’s why so many cybersecurity solutions for business focus on creating strong locks on those access points. But the truth is that, despite what movies might have you believe, the overwhelming majority of cybersecurity issues (by some estimates, 95%) are the result of simple human error. Phishing attempts, fraud, and other manipulative tricks are the most common ways bad actors get into company systems. As a result, a truly comprehensive security strategy should find ways to address and guard against those techniques.
This is where sharing knowledge gains critical importance. In an every-man-for-himself environment, each business has an incredibly limited amount of information available to it. If you’re lucky enough to have 20 staff working on cybersecurity, that’s all the intelligence you can rely on to stay ahead of hackers. But if those 20 workers can share data with another 20, let alone 200, their frame of reference for possible vulnerabilities grows enormously. And the more vulnerabilities they know about, the more they can plug up.
Making SOCs Accessible with Outsourcing
Companies hit by cyberattacks are often understandably cautious about discussing how their security systems failed. But if they’re able to overcome that reluctance and contribute to the knowledge of the cybersecurity community, it can build a critical mass of information about how hackers target and attempt to infiltrate business systems.
Cybersecurity firm Arctic Wolf aims to provide that wider frame of reference. Its cybersecurity experts essentially function as an outsourced SOC. They hook into your security tools and fine-tune them for maximum security. They then monitor those tools in addition to trends in hacking attempts, find new security solutions, and implement them for you. To find out how Arctic Wolf can help your business, contact us today.
With cloud costs higher than expected, many businesses are looking for ways to bring expenses — and data — back under their control
You’ve done your research and realized that the public cloud isn’t the right decision for your business. Maybe you’ve determined that the cost of keeping your workloads in the cloud is too high, but you remain unsure whether cloud repatriation is right for you.
You’re not alone.
The public cloud is a powerful tool for the right workloads — but it’s not the right choice for all of them, and the costs can be tremendous, especially when compared to running the same workloads on premises. However, a level of groupthink often influences decisions when it comes to staying in the cloud, where businesses decide to keep paying these high costs just because every other company around them is.
But here’s the thing: if you’re already researching cloud repatriation, your intuition is telling you it’s the right move. And by doing that research, you’ve already put in much of the work it takes to go through the process.
Plus, I guarantee that more businesses than you think are having these same internal conversations about shifting from the public cloud back to on premises to save money. In fact, some surveys show that up to 80% of companies are repatriating at least some of their data back to on premises infrastructure each year.
Cloud repatriation isn’t a heavy lift, so why wait? Here, we’ll cover the basics of cloud repatriation, some reasons why CTOs hesitate to make the move, and why repatriation is a far simpler decision than most make it out to be. That way, you’ll have all the knowledge you need to make the right decision for your organization’s bottom line.
What Is Cloud Repatriation?
Cloud repatriation is the process of shifting applications from public cloud environments back to on premises infrastructure. It’s essentially a cloud migration in reverse — you’re taking the workloads you previously moved to the cloud and bringing them back into your complete control.
Why Do Organizations Seek Out Cloud Repatriation?
There are several reasons why organizations pursue cloud repatriation. For many, it comes from the realization that migrating their workloads to the cloud failed to deliver the benefits cloud providers promised.
So Why Are Businesses Investigating Cloud Repatriation Now?
The public cloud has been rising in popularity for around ten years now, and businesses have been steadily migrating to it over that period. This steady shift has allowed public cloud platforms like AWS and Google Cloud to become massive and home in on profiting from this migration. They’ve also evolved their messaging to persuade organizations to make the leap, hyping up the benefits of leaving on premises infrastructure behind.
Then, the COVID-19 pandemic hit at the beginning of 2020 and kicked this migration into overdrive.
IT departments couldn’t come to work, visit their data centers, or manage the resources of their now-remote workforce without a drastic change in infrastructure. There was no time for due diligence, so IT departments bought compute power in the public cloud and got their operations up and running quickly. According to a survey conducted by the Information Systems Audit and Control Association, 90% of respondents said cloud usage was higher than initially planned due to the COVID-19 pandemic.
Those who could quickly navigate this uncharted territory saw improved efficiency during those uncertain times. Now that the dust has settled and CTOs have time to dig into the numbers, many realize just how expensive those decisions have become. They may have been the right decisions at the time, but are they still the right decisions today? For a lot of businesses, the answer is “no.”
Why Are People Hesitant to Begin Cloud Repatriation?
Even realizing that the public cloud isn’t the right choice for their business, many haven’t decided to pull the trigger yet to start repatriation. There are a few reasons for this.
For one, the “deep recession” analysts have predicted for a few years hasn’t fully materialized. While some belt-tightening has happened throughout the economy, IT departments haven’t been significantly pushed to cut costs. And if there are more pressing issues to take care of, exploring repatriation options decreases in priority. So many are content with sticking with what they have until they’re backed into a corner.
Much of the hesitancy comes down to mindset: a feeling that repatriation is a complex process, a belief that spending money on non-cloud infrastructure is a waste, or a rationalization of prior mistakes. All of these stem from the “sunk cost fallacy,” where so much time, money, and effort has already been spent moving infrastructure to the cloud that walking it back feels like a loss. Plus, repatriation isn’t always cheap: many cloud providers charge for egress, based on both the amount of data being transferred out of the cloud and the speed of the transfer, and many organizations don’t factor this cost into their initial migration planning.
A combination of these factors ultimately leads to a hesitancy to repatriate, even if doing so would be in the company’s best interest over the long term.
Here’s Why You Should Start Cloud Repatriation Anyway
Many IT professionals psych themselves out about the challenges of repatriating to on-premises infrastructure and make the process seem more complicated than it really is. Here’s the thing: Cloud repatriation is not a difficult process, and you already have the resources to get it done.
For one, there’s a good chance you already have the on-premises hardware needed to repatriate, whether it’s left over from your cloud migration or still in use in day-to-day operations. When deciding whether it’s time to repatriate, it’s crucial to evaluate your current inventory and determine whether it can handle the workloads you’re currently running in the cloud.
Once you’ve completed this investigation, you’ll fully understand which workloads are running in the cloud and how many resources they consume. With this information, you’ll know exactly what equipment your data center requires. You’ll also likely have direct proof that running those workloads on-premises is cheaper, since you’ll have complete control over costs.
Then, it’s time to repatriate. It’s not a heavy lift: you’ve already moved data into the cloud, and you know how to get it out. Set up your on-premises hardware with the same configurations as your cloud workloads, and understand that getting your data out of the cloud will cost some money in egress fees. But once your data is out, you’re free to deactivate your cloud operations and reduce spending over the long term.
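To make the egress trade-off concrete, here’s a back-of-the-envelope sketch of the break-even math: how many months of on-premises savings it takes to recoup the one-time egress fees. All figures (the per-gigabyte rate, the monthly costs) are illustrative assumptions, not any provider’s actual pricing; substitute your own numbers.

```python
# Rough break-even sketch for cloud repatriation.
# All figures are hypothetical assumptions -- substitute your own
# provider's egress pricing and your actual workload costs.

def months_to_break_even(
    data_tb: float,             # data to move out of the cloud, in TB
    egress_per_gb: float,       # one-time egress fee per GB (assumed rate)
    monthly_cloud_cost: float,  # current monthly cloud spend
    monthly_onprem_cost: float, # estimated monthly on-premises run cost
) -> float:
    """Months of on-prem savings needed to recoup one-time egress fees."""
    one_time_egress = data_tb * 1024 * egress_per_gb
    monthly_savings = monthly_cloud_cost - monthly_onprem_cost
    if monthly_savings <= 0:
        raise ValueError("On-premises isn't cheaper; repatriation won't pay off.")
    return one_time_egress / monthly_savings

# Example: moving 50 TB out at an assumed $0.09/GB, then saving
# $6,000 per month once workloads run on-premises.
months = months_to_break_even(50, 0.09, 14_000, 8_000)
print(f"Egress fees recouped in about {months:.1f} months")
```

Even a large one-time egress bill tends to be recovered within the first year when the monthly savings are real, which is why the fee is a planning item rather than a reason to stay put.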
Honestly, the most challenging part of repatriation is admitting that migration wasn’t the right choice in the first place and choosing to move operations back on-premises. Think of it like ripping off a bandage: the thought of doing it hurts more than the reality of doing it. But once you’ve decided to repatriate, you can finally take full control of your costs and have more time to investigate your options should another solution come along.
To reiterate: Cloud repatriation is not a heavy lift, and once the dust has settled, you’ll have more control over your data, lower operational costs, and better performance.
Let the Experts at Roundstone Guide Your Cloud Repatriation Efforts
The best cloud repatriation is the one you never have to do, and getting to that point means doing your homework upfront. You don’t have to go it alone, though. Roundstone Solutions has helped organizations of all sizes in the private and public sectors, from scrappy startups to global enterprises, optimize and modernize their IT infrastructure to fit their exacting needs. Want to learn more? Get in touch today.
To companies that still buy IT solutions the same way they did 20 years ago: It’s time for a change
If you aren’t already in tech, you might be surprised by how old-fashioned many companies are when it comes to buying new business IT solutions. But if you’re here, you’ve probably already butted heads over purchase decisions with more than a few folks who, it seems, would be happier if we still kept all our data in boxes of punch cards.
The old way to buy IT solutions started back in the ‘60s. That’s when companies outside of academia and defense departments started to see the value of computers and bought them en masse. At that point, hardware and software were seen as separate entities — aside from making sure one was compatible with the other, it made sense to track them as distinct line items. Companies would then plan their budgets based on how long they projected each piece of tech would last, weighing upfront costs against ongoing expenses to decide what they’d buy outright, lease, or finance.
This old-guard mindset is predicated on the idea that you have to commit to technology for a long time — which makes sense when you’re literally trucking it all in and dedicating rooms to mainframes (or, later, on-premises server racks). But the realities of assembling business IT solutions have evolved, and failing to pay attention often means wasted dollars.
A New Approach to Buying Business IT Solutions
It’s time to shift your mindset. Just because one process has served your company well for the last 10, 20, or even 30 years of shopping for business IT solutions doesn’t mean it still will today. We’re not proposing you throw out all the methods that you know work. We’re just asking you to re-evaluate how you know they work.
Let’s start by looking at all the IT solutions out there today. We mean all of them, not just a handful based on what your competitors and “leaders in the space” are doing. Instead of simply following the herd, evaluate each of these options according to your business needs. Sounds obvious, right? We all do this intuitively every day. But intuition doesn’t cut it when deciding what will essentially be a new nervous system for your organization.
Formalize your process. That way, you know you’re choosing the right option based on the information available to you — and you’ll be able to prove it to your bosses or investors since a well-sourced spreadsheet goes a lot further than a “good feeling.”
Not sure where to start? When we help businesses buy IT solutions at Roundstone, we use a rating system that factors in business priorities as well as known and estimated expenses across a range of solutions. Here’s how we used that rating system to help three companies make better buying decisions:
From ‘All-In On the Cloud’ to Practical On-Premises
A client in the public sector came to us with a new taxpayer-facing application they planned to deploy. Management said they were going “all in” on moving to the public cloud for this application despite having quite a bit of capacity left on the infrastructure they already owned. That last part rang some alarm bells for us as soon as they brought us in.
After a discussion, we took the client through an analysis of the operational requirements of the new workload. We looked at how it would be handled in the public cloud or on-premises using their existing Nutanix deployment. While both would suffice, when we went through the financial analysis, it was clear that on-premises on Nutanix was less than half the cost of the public cloud.
Having done their homework and seeing the two alternatives clearly, the client decided to deploy on-premises with Nutanix.
Taking a Hybrid Approach
A client in the commercial sector wondered whether the public cloud would be a better place to run infrastructure than their existing three-tier, on-premises environment. They engaged us to take them through an evaluation of all their options.
We looked at the operational considerations of their existing infrastructure, organizational factors, and costs. As a result of the evaluation, the client determined that the public cloud would best serve a small subset of their workloads. The remainder, however, were much better off on-premises.
They then asked us whether their legacy three-tier or hyper-converged infrastructure would make the most sense going forward. We’re currently working together to find the answer.
Bringing Direction to the Doldrums
A different commercial sector client came to us with a large, legacy three-tier infrastructure. They had been in business for many years, and their staff was set in their ways — they didn’t want to consider more modern alternatives. Working with their management, we explained the need for formal evaluations before deciding which infrastructure options to select.
We took the client through our evaluation process and used their numbers to compare their legacy three-tier infrastructure to a modern hyper-converged infrastructure (HCI). Both options could support the workloads, but HCI was the clear winner on simplicity and price. In fact, HCI came in at roughly half the price of their current setup.
The client quickly deployed Nutanix for some newer workloads and is now moving existing workloads over to Nutanix at an increasing rate.
Why the Old Way Keeps Hanging On
Like we said at the start, that old-fashioned approach to buying new-fashioned things is only a surprise if you’re on the outside looking in. But it’s still worth breaking down what contributes to this unproductive mindset so it’s easier to rid your own organization of any similar baggage.
This reluctance to change is simple for employees lower on the reporting chain: The manager points them toward a product and leaves no room for questions. Even if the employee suspects alternative business IT solutions would serve their purposes better, they feel too apprehensive about pushing back on these edicts to rock the boat. They keep their heads down and keep moving.
For upper management, as we’ve found many times over, it’s simply a case of herd mentality. One example: The headlines talk about a great cloud migration, so managers feel they must join in or risk their boss asking why they aren’t doing anything with all this “cloud” stuff they keep hearing about. But this kind of deliberation never stops to ask what differentiates your company from your competition, which is a fatal flaw for a business decision at any level.
Let Us Help You Find the Right Fit